00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 593 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3258 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.250 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.261 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.261 > git config core.sparsecheckout # timeout=10 00:00:06.271 > git read-tree -mu HEAD # timeout=10 00:00:06.286 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:06.302 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:06.302 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.383 [Pipeline] Start of Pipeline 00:00:06.394 [Pipeline] library 00:00:06.396 Loading library shm_lib@master 00:00:06.396 Library shm_lib@master is cached. Copying from home. 00:00:06.409 [Pipeline] node 00:00:06.417 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:06.419 [Pipeline] { 00:00:06.426 [Pipeline] catchError 00:00:06.428 [Pipeline] { 00:00:06.438 [Pipeline] wrap 00:00:06.446 [Pipeline] { 00:00:06.453 [Pipeline] stage 00:00:06.454 [Pipeline] { (Prologue) 00:00:06.471 [Pipeline] echo 00:00:06.472 Node: VM-host-SM16 00:00:06.476 [Pipeline] cleanWs 00:00:06.482 [WS-CLEANUP] Deleting project workspace... 00:00:06.482 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.487 [WS-CLEANUP] done 00:00:06.655 [Pipeline] setCustomBuildProperty 00:00:06.745 [Pipeline] httpRequest 00:00:06.777 [Pipeline] echo 00:00:06.778 Sorcerer 10.211.164.101 is alive 00:00:06.784 [Pipeline] httpRequest 00:00:06.787 HttpMethod: GET 00:00:06.788 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.788 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.807 Response Code: HTTP/1.1 200 OK 00:00:06.808 Success: Status code 200 is in the accepted range: 200,404 00:00:06.808 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:25.371 [Pipeline] sh 00:00:25.653 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:25.671 [Pipeline] httpRequest 00:00:25.702 [Pipeline] echo 00:00:25.704 Sorcerer 10.211.164.101 is alive 00:00:25.714 [Pipeline] httpRequest 00:00:25.718 HttpMethod: GET 00:00:25.719 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:25.719 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:25.737 Response Code: HTTP/1.1 200 OK 00:00:25.738 Success: Status code 200 is in the accepted range: 200,404 00:00:25.738 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:28.865 [Pipeline] sh 00:01:29.143 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:31.692 [Pipeline] sh 00:01:31.969 + git -C spdk log --oneline -n5 00:01:31.969 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:31.969 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:31.969 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:31.969 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:31.969 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:31.987 [Pipeline] withCredentials 00:01:31.996 > git --version # timeout=10 00:01:32.007 > git --version # 'git version 2.39.2' 00:01:32.021 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:32.023 [Pipeline] { 00:01:32.032 [Pipeline] retry 00:01:32.034 [Pipeline] { 00:01:32.050 [Pipeline] sh 00:01:32.328 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:32.596 [Pipeline] } 00:01:32.618 [Pipeline] // retry 00:01:32.623 [Pipeline] } 00:01:32.640 [Pipeline] // withCredentials 00:01:32.647 [Pipeline] httpRequest 00:01:32.667 [Pipeline] echo 00:01:32.669 Sorcerer 10.211.164.101 is alive 00:01:32.676 [Pipeline] httpRequest 00:01:32.680 HttpMethod: GET 00:01:32.680 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.681 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.689 Response Code: HTTP/1.1 200 OK 00:01:32.689 Success: Status code 200 is in the accepted range: 200,404 00:01:32.690 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.494 [Pipeline] sh 00:01:37.769 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.151 [Pipeline] sh 00:01:39.431 + git -C dpdk log --oneline -n5 00:01:39.431 caf0f5d395 version: 22.11.4 00:01:39.431 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.431 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:39.431 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:39.431 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.449 [Pipeline] writeFile 00:01:39.461 [Pipeline] sh 00:01:39.735 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.746 [Pipeline] sh 00:01:40.020 + cat autorun-spdk.conf 00:01:40.020 SPDK_TEST_UNITTEST=1 00:01:40.020 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.020 SPDK_TEST_NVME=1 00:01:40.020 SPDK_TEST_BLOCKDEV=1 00:01:40.020 SPDK_RUN_ASAN=1 00:01:40.020 SPDK_RUN_UBSAN=1 00:01:40.020 SPDK_TEST_RAID5=1 00:01:40.020 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.020 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.020 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.027 RUN_NIGHTLY=1 00:01:40.029 [Pipeline] } 00:01:40.046 [Pipeline] // stage 00:01:40.063 [Pipeline] stage 00:01:40.066 [Pipeline] { (Run VM) 00:01:40.083 [Pipeline] sh 00:01:40.363 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:40.363 + echo 'Start stage prepare_nvme.sh' 00:01:40.363 Start stage prepare_nvme.sh 00:01:40.363 + [[ -n 3 ]] 00:01:40.363 + disk_prefix=ex3 00:01:40.363 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]] 00:01:40.363 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]] 00:01:40.363 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf 00:01:40.363 ++ SPDK_TEST_UNITTEST=1 00:01:40.363 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.363 ++ SPDK_TEST_NVME=1 00:01:40.363 ++ SPDK_TEST_BLOCKDEV=1 00:01:40.363 ++ SPDK_RUN_ASAN=1 00:01:40.363 ++ SPDK_RUN_UBSAN=1 00:01:40.363 ++ SPDK_TEST_RAID5=1 00:01:40.363 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.363 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.363 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.363 ++ RUN_NIGHTLY=1 00:01:40.363 + cd /var/jenkins/workspace/ubuntu20-vg-autotest 00:01:40.363 + nvme_files=() 00:01:40.363 + declare -A nvme_files 00:01:40.363 + backend_dir=/var/lib/libvirt/images/backends 00:01:40.363 + nvme_files['nvme.img']=5G 00:01:40.363 + nvme_files['nvme-cmb.img']=5G 00:01:40.363 + nvme_files['nvme-multi0.img']=4G 00:01:40.363 + nvme_files['nvme-multi1.img']=4G 00:01:40.363 + nvme_files['nvme-multi2.img']=4G 00:01:40.363 + nvme_files['nvme-openstack.img']=8G 00:01:40.363 + nvme_files['nvme-zns.img']=5G 00:01:40.363 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:40.363 + (( SPDK_TEST_FTL == 1 )) 00:01:40.363 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:40.363 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:40.363 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.363 + for nvme in "${!nvme_files[@]}" 00:01:40.363 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:41.299 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.299 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:41.299 + echo 'End stage prepare_nvme.sh' 00:01:41.299 End stage prepare_nvme.sh 00:01:41.314 [Pipeline] sh 00:01:41.593 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:41.593 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -H -a -v -f ubuntu2004 00:01:41.593 00:01:41.593 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant 00:01:41.593 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk 00:01:41.593 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest 00:01:41.593 HELP=0 00:01:41.593 DRY_RUN=0 00:01:41.593 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img, 00:01:41.593 NVME_DISKS_TYPE=nvme, 00:01:41.593 NVME_AUTO_CREATE=0 00:01:41.593 NVME_DISKS_NAMESPACES=, 00:01:41.593 NVME_CMB=, 00:01:41.593 NVME_PMR=, 00:01:41.593 NVME_ZNS=, 00:01:41.593 NVME_MS=, 00:01:41.593 NVME_FDP=, 00:01:41.593 SPDK_VAGRANT_DISTRO=ubuntu2004 00:01:41.593 SPDK_VAGRANT_VMCPU=10 00:01:41.593 SPDK_VAGRANT_VMRAM=12288 00:01:41.593 SPDK_VAGRANT_PROVIDER=libvirt 00:01:41.593 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:41.593 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:41.593 SPDK_OPENSTACK_NETWORK=0 
00:01:41.593 VAGRANT_PACKAGE_BOX=0 00:01:41.593 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:41.593 FORCE_DISTRO=true 00:01:41.593 VAGRANT_BOX_VERSION= 00:01:41.593 EXTRA_VAGRANTFILES= 00:01:41.593 NIC_MODEL=e1000 00:01:41.593 00:01:41.593 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt' 00:01:41.593 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest 00:01:44.123 Bringing machine 'default' up with 'libvirt' provider... 00:01:44.689 ==> default: Creating image (snapshot of base box volume). 00:01:44.946 ==> default: Creating domain with the following settings... 00:01:44.946 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720664709_c145f967282ea0c6f763 00:01:44.946 ==> default: -- Domain type: kvm 00:01:44.946 ==> default: -- Cpus: 10 00:01:44.946 ==> default: -- Feature: acpi 00:01:44.946 ==> default: -- Feature: apic 00:01:44.946 ==> default: -- Feature: pae 00:01:44.946 ==> default: -- Memory: 12288M 00:01:44.946 ==> default: -- Memory Backing: hugepages: 00:01:44.946 ==> default: -- Management MAC: 00:01:44.946 ==> default: -- Loader: 00:01:44.946 ==> default: -- Nvram: 00:01:44.946 ==> default: -- Base box: spdk/ubuntu2004 00:01:44.946 ==> default: -- Storage pool: default 00:01:44.946 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720664709_c145f967282ea0c6f763.img (20G) 00:01:44.946 ==> default: -- Volume Cache: default 00:01:44.946 ==> default: -- Kernel: 00:01:44.946 ==> default: -- Initrd: 00:01:44.946 ==> default: -- Graphics Type: vnc 00:01:44.946 ==> default: -- Graphics Port: -1 00:01:44.946 ==> default: -- Graphics IP: 127.0.0.1 00:01:44.946 ==> default: -- Graphics Password: Not defined 00:01:44.946 ==> default: -- Video Type: cirrus 00:01:44.946 ==> default: -- Video VRAM: 9216 00:01:44.946 ==> default: -- Sound Type: 00:01:44.946 ==> default: -- Keymap: en-us 00:01:44.946 ==> default: -- TPM Path: 00:01:44.946 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:44.946 ==> default: -- Command line args: 00:01:44.946 ==> default: -> value=-device, 00:01:44.946 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:44.946 ==> default: -> value=-drive, 00:01:44.946 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:44.946 ==> default: -> value=-device, 00:01:44.946 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.946 ==> default: Creating shared folders metadata... 00:01:44.946 ==> default: Starting domain. 00:01:46.848 ==> default: Waiting for domain to get an IP address... 00:01:56.827 ==> default: Waiting for SSH to become available... 00:01:57.086 ==> default: Configuring and enabling network interfaces... 00:01:59.618 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.884 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:08.168 ==> default: Mounting SSHFS shared folder... 00:02:08.426 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:02:08.426 ==> default: Checking Mount.. 00:02:11.009 ==> default: Checking Mount.. 
00:02:11.009 ==> default: Folder Successfully Mounted! 00:02:11.009 ==> default: Running provisioner: file... 00:02:11.267 default: ~/.gitconfig => .gitconfig 00:02:11.267 00:02:11.267 SUCCESS! 00:02:11.267 00:02:11.267 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:02:11.267 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:11.267 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm. 00:02:11.267 00:02:11.276 [Pipeline] } 00:02:11.295 [Pipeline] // stage 00:02:11.306 [Pipeline] dir 00:02:11.307 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt 00:02:11.309 [Pipeline] { 00:02:11.327 [Pipeline] catchError 00:02:11.328 [Pipeline] { 00:02:11.342 [Pipeline] sh 00:02:11.618 + vagrant ssh-config --host vagrant 00:02:11.618 + sed -ne /^Host/,$p 00:02:11.618 + tee ssh_conf 00:02:14.902 Host vagrant 00:02:14.902 HostName 192.168.121.99 00:02:14.902 User vagrant 00:02:14.902 Port 22 00:02:14.902 UserKnownHostsFile /dev/null 00:02:14.902 StrictHostKeyChecking no 00:02:14.902 PasswordAuthentication no 00:02:14.902 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:02:14.902 IdentitiesOnly yes 00:02:14.902 LogLevel FATAL 00:02:14.902 ForwardAgent yes 00:02:14.902 ForwardX11 yes 00:02:14.902 00:02:14.915 [Pipeline] withEnv 00:02:14.917 [Pipeline] { 00:02:14.932 [Pipeline] sh 00:02:15.208 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:15.208 source /etc/os-release 00:02:15.208 [[ -e /image.version ]] && img=$(< /image.version) 00:02:15.208 # Minimal, systemd-like check. 00:02:15.208 if [[ -e /.dockerenv ]]; then 00:02:15.208 # Clear garbage from the node's name: 00:02:15.208 # agt-er_autotest_547-896 -> autotest_547-896 00:02:15.208 # $HOSTNAME is the actual container id 00:02:15.208 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:15.208 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:15.208 # We can assume this is a mount from a host where container is running, 00:02:15.208 # so fetch its hostname to easily identify the target swarm worker. 
00:02:15.208 container="$(< /etc/hostname) ($agent)" 00:02:15.208 else 00:02:15.208 # Fallback 00:02:15.208 container=$agent 00:02:15.208 fi 00:02:15.208 fi 00:02:15.208 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:15.208 00:02:15.787 [Pipeline] } 00:02:15.808 [Pipeline] // withEnv 00:02:15.817 [Pipeline] setCustomBuildProperty 00:02:15.833 [Pipeline] stage 00:02:15.835 [Pipeline] { (Tests) 00:02:15.856 [Pipeline] sh 00:02:16.137 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:16.719 [Pipeline] sh 00:02:16.999 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:17.657 [Pipeline] timeout 00:02:17.657 Timeout set to expire in 1 hr 30 min 00:02:17.659 [Pipeline] { 00:02:17.675 [Pipeline] sh 00:02:17.952 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:18.884 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:18.893 [Pipeline] sh 00:02:19.165 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:19.730 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:19.745 [Pipeline] sh 00:02:20.023 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:20.603 [Pipeline] sh 00:02:20.881 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:02:21.447 ++ readlink -f spdk_repo 00:02:21.447 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:21.447 + [[ -n /home/vagrant/spdk_repo ]] 00:02:21.447 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:21.447 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:21.447 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:21.447 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:21.447 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:21.447 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:02:21.447 + cd /home/vagrant/spdk_repo 00:02:21.447 + source /etc/os-release 00:02:21.447 ++ NAME=Ubuntu 00:02:21.447 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:02:21.447 ++ ID=ubuntu 00:02:21.447 ++ ID_LIKE=debian 00:02:21.447 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:02:21.447 ++ VERSION_ID=20.04 00:02:21.447 ++ HOME_URL=https://www.ubuntu.com/ 00:02:21.447 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:21.447 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:21.447 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:21.447 ++ VERSION_CODENAME=focal 00:02:21.447 ++ UBUNTU_CODENAME=focal 00:02:21.447 + uname -a 00:02:21.447 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:21.447 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:21.447 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:21.705 Hugepages 00:02:21.705 node hugesize free / total 00:02:21.705 node0 1048576kB 0 / 0 00:02:21.705 node0 2048kB 0 / 0 00:02:21.705 00:02:21.705 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:21.705 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:21.705 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:21.705 + rm -f /tmp/spdk-ld-path 00:02:21.705 + source autorun-spdk.conf 00:02:21.705 ++ SPDK_TEST_UNITTEST=1 00:02:21.705 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.705 ++ SPDK_TEST_NVME=1 00:02:21.705 ++ SPDK_TEST_BLOCKDEV=1 00:02:21.705 ++ SPDK_RUN_ASAN=1 00:02:21.705 ++ SPDK_RUN_UBSAN=1 00:02:21.705 ++ SPDK_TEST_RAID5=1 00:02:21.705 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:21.705 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.705 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.705 ++ RUN_NIGHTLY=1 00:02:21.705 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:21.705 + [[ -n '' ]] 00:02:21.705 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.705 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:21.705 + for M in /var/spdk/build-*-manifest.txt 00:02:21.705 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.705 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.705 + for M in /var/spdk/build-*-manifest.txt 00:02:21.705 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.705 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.705 ++ uname 00:02:21.705 + [[ Linux == \L\i\n\u\x ]] 00:02:21.705 + sudo dmesg -T 00:02:21.705 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:21.705 + sudo dmesg --clear 00:02:21.705 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:21.705 + dmesg_pid=2525 00:02:21.705 + sudo dmesg -Tw 00:02:21.705 + [[ Ubuntu == FreeBSD ]] 00:02:21.705 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.705 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.705 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.705 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.705 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.705 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:21.706 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.706 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:21.706 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:21.706 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:21.706 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.706 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.706 Test configuration: 00:02:21.706 SPDK_TEST_UNITTEST=1 00:02:21.706 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.706 SPDK_TEST_NVME=1 00:02:21.706 SPDK_TEST_BLOCKDEV=1 00:02:21.706 SPDK_RUN_ASAN=1 00:02:21.706 SPDK_RUN_UBSAN=1 00:02:21.706 SPDK_TEST_RAID5=1 00:02:21.706 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:21.706 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.706 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.706 RUN_NIGHTLY=1 02:25:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.706 02:25:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.706 02:25:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.706 02:25:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.706 02:25:45 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.706 02:25:45 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.706 02:25:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.706 02:25:45 -- paths/export.sh@5 -- $ export PATH 00:02:21.706 02:25:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.706 02:25:45 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.706 02:25:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:21.706 02:25:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720664745.XXXXXX 00:02:21.706 02:25:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720664745.Pl6Lce 00:02:21.706 02:25:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:21.706 02:25:45 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:02:21.706 02:25:45 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.706 02:25:45 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:21.706 
02:25:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.706 02:25:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.706 02:25:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:21.706 02:25:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:21.706 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 02:25:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:21.964 02:25:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.964 02:25:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.964 02:25:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.964 02:25:45 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.964 Thu Jul 11 02:25:45 UTC 2024 00:02:21.964 02:25:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.964 LTS-59-g4b94202c6 00:02:21.964 02:25:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:21.964 02:25:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:21.964 02:25:45 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:21.964 02:25:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.964 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 ************************************ 00:02:21.964 START TEST asan 00:02:21.964 ************************************ 00:02:21.964 using asan 00:02:21.964 ************************************ 00:02:21.964 END TEST asan 00:02:21.964 ************************************ 00:02:21.964 02:25:45 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:21.964 00:02:21.964 real 0m0.000s 00:02:21.964 user 0m0.000s 00:02:21.964 sys 0m0.000s 00:02:21.964 02:25:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.964 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 02:25:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.964 02:25:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.964 02:25:45 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:21.964 02:25:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.964 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 ************************************ 00:02:21.964 START TEST ubsan 00:02:21.964 ************************************ 00:02:21.964 02:25:45 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:21.964 using ubsan 00:02:21.964 00:02:21.964 real 0m0.000s 00:02:21.964 user 0m0.000s 00:02:21.964 sys 0m0.000s 00:02:21.964 02:25:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.964 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 ************************************ 00:02:21.964 END TEST ubsan 00:02:21.964 ************************************ 00:02:21.964 02:25:45 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:21.964 02:25:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.964 02:25:45 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.964 02:25:45 -- common/autotest_common.sh@1077 -- $ '[' 
2 -le 1 ']' 00:02:21.964 02:25:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.964 02:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.964 ************************************ 00:02:21.964 START TEST build_native_dpdk 00:02:21.964 ************************************ 00:02:21.964 02:25:45 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:21.964 02:25:45 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.964 02:25:45 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.964 02:25:45 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.964 02:25:45 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.964 02:25:45 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.964 02:25:45 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.964 02:25:45 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.964 02:25:45 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.964 02:25:45 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.964 02:25:45 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.964 02:25:45 -- common/autobuild_common.sh@68 -- $ compiler_version=9 00:02:21.964 02:25:45 -- common/autobuild_common.sh@69 -- $ compiler_version=9 00:02:21.964 02:25:45 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.964 02:25:45 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.964 02:25:45 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.964 02:25:45 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.964 02:25:45 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.964 caf0f5d395 version: 22.11.4 00:02:21.964 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:21.964 dc9c799c7d vhost: fix missing spinlock unlock 00:02:21.964 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:21.964 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:21.964 02:25:45 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.964 02:25:45 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.964 02:25:45 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:21.964 02:25:45 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@89 -- $ [[ 9 -ge 5 ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.964 02:25:45 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@93 -- $ [[ 9 -ge 10 ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:21.964 02:25:45 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:21.964 02:25:45 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.964 02:25:45 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.965 02:25:45 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:21.965 02:25:45 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.965 02:25:45 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:21.965 02:25:45 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:21.965 02:25:45 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:21.965 02:25:45 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:21.965 02:25:45 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.965 02:25:45 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.965 02:25:45 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.965 02:25:45 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.965 02:25:45 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.965 02:25:45 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.965 02:25:45 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.965 02:25:45 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.965 02:25:45 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.965 02:25:45 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.965 02:25:45 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.965 02:25:45 -- scripts/common.sh@344 -- $ : 1 00:02:21.965 02:25:45 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.965 02:25:45 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.965 02:25:45 -- scripts/common.sh@364 -- $ decimal 22 00:02:21.965 02:25:45 -- scripts/common.sh@352 -- $ local d=22 00:02:21.965 02:25:45 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:21.965 02:25:45 -- scripts/common.sh@354 -- $ echo 22 00:02:21.965 02:25:45 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:21.965 02:25:45 -- scripts/common.sh@365 -- $ decimal 21 00:02:21.965 02:25:45 -- scripts/common.sh@352 -- $ local d=21 00:02:21.965 02:25:45 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.965 02:25:45 -- scripts/common.sh@354 -- $ echo 21 00:02:21.965 02:25:45 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:21.965 02:25:45 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.965 02:25:45 -- scripts/common.sh@366 -- $ return 1 00:02:21.965 02:25:45 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:21.965 patching file config/rte_config.h 00:02:21.965 Hunk #1 succeeded at 60 (offset 1 line). 00:02:21.965 02:25:45 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:21.965 02:25:45 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:21.965 02:25:45 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:21.965 02:25:45 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:21.965 02:25:45 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.230 The Meson build system 00:02:27.230 Version: 1.4.0 00:02:27.230 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:27.230 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:27.230 Build type: native build 00:02:27.230 Program cat found: YES (/usr/bin/cat) 00:02:27.230 Project name: DPDK 00:02:27.230 Project version: 22.11.4 00:02:27.230 C compiler for the host machine: gcc (gcc 9.4.0 "gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:27.230 C linker for the host machine: gcc ld.bfd 2.34 00:02:27.230 Host machine cpu family: x86_64 00:02:27.230 Host machine cpu: x86_64 00:02:27.230 Message: ## Building in Developer Mode ## 00:02:27.230 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:27.230 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:27.230 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:27.230 Program objdump found: YES (/usr/bin/objdump) 00:02:27.230 Program python3 found: YES (/usr/bin/python3) 00:02:27.230 Program cat found: YES (/usr/bin/cat) 00:02:27.230 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:27.230 Checking for size of "void *" : 8 00:02:27.230 Checking for size of "void *" : 8 (cached) 00:02:27.230 Library m found: YES 00:02:27.230 Library numa found: YES 00:02:27.230 Has header "numaif.h" : YES 00:02:27.230 Library fdt found: NO 00:02:27.230 Library execinfo found: NO 00:02:27.230 Has header "execinfo.h" : YES 00:02:27.230 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:27.230 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:27.230 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:27.230 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:27.230 Run-time dependency openssl found: YES 1.1.1f 00:02:27.230 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:27.230 Library pcap found: NO 00:02:27.230 Compiler for C supports arguments -Wcast-qual: YES 00:02:27.230 Compiler for C supports arguments -Wdeprecated: YES 00:02:27.230 Compiler for C supports arguments -Wformat: YES 00:02:27.230 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:27.230 Compiler for C supports arguments -Wformat-security: YES 00:02:27.230 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.230 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:27.230 Compiler for C supports arguments -Wnested-externs: YES 00:02:27.230 Compiler for C supports arguments -Wold-style-definition: YES 00:02:27.230 Compiler for C supports arguments -Wpointer-arith: YES 00:02:27.230 Compiler for C supports arguments -Wsign-compare: YES 00:02:27.230 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:27.230 Compiler for C supports arguments -Wundef: YES 00:02:27.230 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.230 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:27.230 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:27.230 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:27.230 Compiler for C supports arguments -mavx512f: YES 00:02:27.230 Checking if "AVX512 checking" compiles: YES 00:02:27.230 Fetching value of define "__SSE4_2__" : 1 00:02:27.230 Fetching value of define "__AES__" : 1 00:02:27.230 Fetching value of define "__AVX__" : 1 00:02:27.230 Fetching value of define "__AVX2__" : 1 00:02:27.230 Fetching value of define "__AVX512BW__" : (undefined) 00:02:27.230 Fetching value of define "__AVX512CD__" : (undefined) 00:02:27.230 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:27.230 Fetching value of define "__AVX512F__" : (undefined) 00:02:27.230 Fetching value of define "__AVX512VL__" : (undefined) 00:02:27.230 Fetching value of define "__PCLMUL__" : 1 00:02:27.230 Fetching value of define "__RDRND__" : 1 00:02:27.230 Fetching value of define "__RDSEED__" : 1 00:02:27.230 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.230 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:27.230 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.230 Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.230 Checking for function "getentropy" : YES 00:02:27.230 Message: lib/eal: Defining dependency "eal" 00:02:27.230 Message: lib/ring: Defining dependency "ring" 00:02:27.230 Message: lib/rcu: Defining dependency "rcu" 00:02:27.230 Message: lib/mempool: Defining dependency "mempool" 00:02:27.230 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.230 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.230 Fetching value of define "__AVX512F__" : 
(undefined) (cached) 00:02:27.230 Compiler for C supports arguments -mpclmul: YES 00:02:27.230 Compiler for C supports arguments -maes: YES 00:02:27.230 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.230 Compiler for C supports arguments -mavx512bw: YES 00:02:27.230 Compiler for C supports arguments -mavx512dq: YES 00:02:27.230 Compiler for C supports arguments -mavx512vl: YES 00:02:27.230 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.230 Compiler for C supports arguments -mavx2: YES 00:02:27.230 Compiler for C supports arguments -mavx: YES 00:02:27.230 Message: lib/net: Defining dependency "net" 00:02:27.230 Message: lib/meter: Defining dependency "meter" 00:02:27.230 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.230 Message: lib/pci: Defining dependency "pci" 00:02:27.230 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.230 Message: lib/metrics: Defining dependency "metrics" 00:02:27.230 Message: lib/hash: Defining dependency "hash" 00:02:27.230 Message: lib/timer: Defining dependency "timer" 00:02:27.230 Fetching value of define "__AVX2__" : 1 (cached) 00:02:27.230 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.230 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:27.230 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:27.230 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:27.230 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:27.230 Message: lib/acl: Defining dependency "acl" 00:02:27.230 Message: lib/bbdev: Defining dependency "bbdev" 00:02:27.230 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:27.230 Run-time dependency libelf found: YES 0.176 00:02:27.230 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:27.230 Message: lib/bpf: Defining dependency "bpf" 00:02:27.230 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:27.230 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.230 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.230 Message: lib/distributor: Defining dependency "distributor" 00:02:27.230 Message: lib/efd: Defining dependency "efd" 00:02:27.230 Message: lib/eventdev: Defining dependency "eventdev" 00:02:27.230 Message: lib/gpudev: Defining dependency "gpudev" 00:02:27.230 Message: lib/gro: Defining dependency "gro" 00:02:27.230 Message: lib/gso: Defining dependency "gso" 00:02:27.230 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:27.230 Message: lib/jobstats: Defining dependency "jobstats" 00:02:27.230 Message: lib/latencystats: Defining dependency "latencystats" 00:02:27.230 Message: lib/lpm: Defining dependency "lpm" 00:02:27.230 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.230 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:27.230 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:27.230 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:27.230 Message: lib/member: Defining dependency "member" 00:02:27.230 Message: lib/pcapng: Defining dependency "pcapng" 00:02:27.230 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.230 Message: lib/power: Defining dependency "power" 00:02:27.230 Message: lib/rawdev: Defining dependency "rawdev" 00:02:27.230 Message: lib/regexdev: Defining dependency "regexdev" 00:02:27.230 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.230 
Message: lib/rib: Defining dependency "rib" 00:02:27.230 Message: lib/reorder: Defining dependency "reorder" 00:02:27.230 Message: lib/sched: Defining dependency "sched" 00:02:27.230 Message: lib/security: Defining dependency "security" 00:02:27.230 Message: lib/stack: Defining dependency "stack" 00:02:27.230 Has header "linux/userfaultfd.h" : YES 00:02:27.230 Message: lib/vhost: Defining dependency "vhost" 00:02:27.231 Message: lib/ipsec: Defining dependency "ipsec" 00:02:27.231 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.231 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:27.231 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:27.231 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.231 Message: lib/fib: Defining dependency "fib" 00:02:27.231 Message: lib/port: Defining dependency "port" 00:02:27.231 Message: lib/pdump: Defining dependency "pdump" 00:02:27.231 Message: lib/table: Defining dependency "table" 00:02:27.231 Message: lib/pipeline: Defining dependency "pipeline" 00:02:27.231 Message: lib/graph: Defining dependency "graph" 00:02:27.231 Message: lib/node: Defining dependency "node" 00:02:27.231 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:27.231 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.231 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.231 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.231 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:27.231 Compiler for C supports arguments -Wno-unused-value: YES 00:02:27.231 Compiler for C supports arguments -Wno-format: YES 00:02:27.231 Compiler for C supports arguments -Wno-format-security: YES 00:02:27.231 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:27.797 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:27.797 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:27.797 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:27.797 Fetching value of define "__AVX2__" : 1 (cached) 00:02:27.797 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.797 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.797 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.797 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:27.797 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:27.797 Program doxygen found: YES (/usr/bin/doxygen) 00:02:27.797 Configuring doxy-api.conf using configuration 00:02:27.797 Program sphinx-build found: NO 00:02:27.797 Configuring rte_build_config.h using configuration 00:02:27.797 Message: 00:02:27.797 ================= 00:02:27.797 Applications Enabled 00:02:27.797 ================= 00:02:27.797 00:02:27.797 apps: 00:02:27.797 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:27.797 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:27.797 00:02:27.797 00:02:27.797 Message: 00:02:27.797 ================= 00:02:27.797 Libraries Enabled 00:02:27.797 ================= 00:02:27.797 00:02:27.797 libs: 00:02:27.797 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:27.797 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:27.797 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:27.798 eventdev, gpudev, gro, 
gso, ip_frag, jobstats, latencystats, lpm, 00:02:27.798 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:27.798 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:27.798 table, pipeline, graph, node, 00:02:27.798 00:02:27.798 Message: 00:02:27.798 =============== 00:02:27.798 Drivers Enabled 00:02:27.798 =============== 00:02:27.798 00:02:27.798 common: 00:02:27.798 00:02:27.798 bus: 00:02:27.798 pci, vdev, 00:02:27.798 mempool: 00:02:27.798 ring, 00:02:27.798 dma: 00:02:27.798 00:02:27.798 net: 00:02:27.798 i40e, 00:02:27.798 raw: 00:02:27.798 00:02:27.798 crypto: 00:02:27.798 00:02:27.798 compress: 00:02:27.798 00:02:27.798 regex: 00:02:27.798 00:02:27.798 vdpa: 00:02:27.798 00:02:27.798 event: 00:02:27.798 00:02:27.798 baseband: 00:02:27.798 00:02:27.798 gpu: 00:02:27.798 00:02:27.798 00:02:27.798 Message: 00:02:27.798 ================= 00:02:27.798 Content Skipped 00:02:27.798 ================= 00:02:27.798 00:02:27.798 apps: 00:02:27.798 dumpcap: missing dependency, "libpcap" 00:02:27.798 00:02:27.798 libs: 00:02:27.798 kni: explicitly disabled via build config (deprecated lib) 00:02:27.798 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:27.798 00:02:27.798 drivers: 00:02:27.798 common/cpt: not in enabled drivers build config 00:02:27.798 common/dpaax: not in enabled drivers build config 00:02:27.798 common/iavf: not in enabled drivers build config 00:02:27.798 common/idpf: not in enabled drivers build config 00:02:27.798 common/mvep: not in enabled drivers build config 00:02:27.798 common/octeontx: not in enabled drivers build config 00:02:27.798 bus/auxiliary: not in enabled drivers build config 00:02:27.798 bus/dpaa: not in enabled drivers build config 00:02:27.798 bus/fslmc: not in enabled drivers build config 00:02:27.798 bus/ifpga: not in enabled drivers build config 00:02:27.798 bus/vmbus: not in enabled drivers build config 00:02:27.798 common/cnxk: not in enabled drivers build config 00:02:27.798 common/mlx5: not in enabled drivers build config 00:02:27.798 common/qat: not in enabled drivers build config 00:02:27.798 common/sfc_efx: not in enabled drivers build config 00:02:27.798 mempool/bucket: not in enabled drivers build config 00:02:27.798 mempool/cnxk: not in enabled drivers build config 00:02:27.798 mempool/dpaa: not in enabled drivers build config 00:02:27.798 mempool/dpaa2: not in enabled drivers build config 00:02:27.798 mempool/octeontx: not in enabled drivers build config 00:02:27.798 mempool/stack: not in enabled drivers build config 00:02:27.798 dma/cnxk: not in enabled drivers build config 00:02:27.798 dma/dpaa: not in enabled drivers build config 00:02:27.798 dma/dpaa2: not in enabled drivers build config 00:02:27.798 dma/hisilicon: not in enabled drivers build config 00:02:27.798 dma/idxd: not in enabled drivers build config 00:02:27.798 dma/ioat: not in enabled drivers build config 00:02:27.798 dma/skeleton: not in enabled drivers build config 00:02:27.798 net/af_packet: not in enabled drivers build config 00:02:27.798 net/af_xdp: not in enabled drivers build config 00:02:27.798 net/ark: not in enabled drivers build config 00:02:27.798 net/atlantic: not in enabled drivers build config 00:02:27.798 net/avp: not in enabled drivers build config 00:02:27.798 net/axgbe: not in enabled drivers build config 00:02:27.798 net/bnx2x: not in enabled drivers build config 00:02:27.798 net/bnxt: not in enabled drivers build config 00:02:27.798 net/bonding: not in enabled drivers build config 00:02:27.798 net/cnxk: 
not in enabled drivers build config 00:02:27.798 net/cxgbe: not in enabled drivers build config 00:02:27.798 net/dpaa: not in enabled drivers build config 00:02:27.798 net/dpaa2: not in enabled drivers build config 00:02:27.798 net/e1000: not in enabled drivers build config 00:02:27.798 net/ena: not in enabled drivers build config 00:02:27.798 net/enetc: not in enabled drivers build config 00:02:27.798 net/enetfec: not in enabled drivers build config 00:02:27.798 net/enic: not in enabled drivers build config 00:02:27.798 net/failsafe: not in enabled drivers build config 00:02:27.798 net/fm10k: not in enabled drivers build config 00:02:27.798 net/gve: not in enabled drivers build config 00:02:27.798 net/hinic: not in enabled drivers build config 00:02:27.798 net/hns3: not in enabled drivers build config 00:02:27.798 net/iavf: not in enabled drivers build config 00:02:27.798 net/ice: not in enabled drivers build config 00:02:27.798 net/idpf: not in enabled drivers build config 00:02:27.798 net/igc: not in enabled drivers build config 00:02:27.798 net/ionic: not in enabled drivers build config 00:02:27.798 net/ipn3ke: not in enabled drivers build config 00:02:27.798 net/ixgbe: not in enabled drivers build config 00:02:27.798 net/kni: not in enabled drivers build config 00:02:27.798 net/liquidio: not in enabled drivers build config 00:02:27.798 net/mana: not in enabled drivers build config 00:02:27.798 net/memif: not in enabled drivers build config 00:02:27.798 net/mlx4: not in enabled drivers build config 00:02:27.798 net/mlx5: not in enabled drivers build config 00:02:27.798 net/mvneta: not in enabled drivers build config 00:02:27.798 net/mvpp2: not in enabled drivers build config 00:02:27.798 net/netvsc: not in enabled drivers build config 00:02:27.798 net/nfb: not in enabled drivers build config 00:02:27.798 net/nfp: not in enabled drivers build config 00:02:27.798 net/ngbe: not in enabled drivers build config 00:02:27.798 net/null: not in enabled drivers build config 00:02:27.798 net/octeontx: not in enabled drivers build config 00:02:27.798 net/octeon_ep: not in enabled drivers build config 00:02:27.798 net/pcap: not in enabled drivers build config 00:02:27.798 net/pfe: not in enabled drivers build config 00:02:27.798 net/qede: not in enabled drivers build config 00:02:27.798 net/ring: not in enabled drivers build config 00:02:27.798 net/sfc: not in enabled drivers build config 00:02:27.798 net/softnic: not in enabled drivers build config 00:02:27.798 net/tap: not in enabled drivers build config 00:02:27.798 net/thunderx: not in enabled drivers build config 00:02:27.798 net/txgbe: not in enabled drivers build config 00:02:27.798 net/vdev_netvsc: not in enabled drivers build config 00:02:27.798 net/vhost: not in enabled drivers build config 00:02:27.798 net/virtio: not in enabled drivers build config 00:02:27.798 net/vmxnet3: not in enabled drivers build config 00:02:27.798 raw/cnxk_bphy: not in enabled drivers build config 00:02:27.798 raw/cnxk_gpio: not in enabled drivers build config 00:02:27.798 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:27.798 raw/ifpga: not in enabled drivers build config 00:02:27.798 raw/ntb: not in enabled drivers build config 00:02:27.798 raw/skeleton: not in enabled drivers build config 00:02:27.798 crypto/armv8: not in enabled drivers build config 00:02:27.798 crypto/bcmfs: not in enabled drivers build config 00:02:27.798 crypto/caam_jr: not in enabled drivers build config 00:02:27.798 crypto/ccp: not in enabled drivers build config 00:02:27.798 
crypto/cnxk: not in enabled drivers build config 00:02:27.798 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.798 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.798 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.798 crypto/mlx5: not in enabled drivers build config 00:02:27.798 crypto/mvsam: not in enabled drivers build config 00:02:27.798 crypto/nitrox: not in enabled drivers build config 00:02:27.798 crypto/null: not in enabled drivers build config 00:02:27.798 crypto/octeontx: not in enabled drivers build config 00:02:27.798 crypto/openssl: not in enabled drivers build config 00:02:27.798 crypto/scheduler: not in enabled drivers build config 00:02:27.798 crypto/uadk: not in enabled drivers build config 00:02:27.798 crypto/virtio: not in enabled drivers build config 00:02:27.798 compress/isal: not in enabled drivers build config 00:02:27.798 compress/mlx5: not in enabled drivers build config 00:02:27.798 compress/octeontx: not in enabled drivers build config 00:02:27.798 compress/zlib: not in enabled drivers build config 00:02:27.798 regex/mlx5: not in enabled drivers build config 00:02:27.798 regex/cn9k: not in enabled drivers build config 00:02:27.798 vdpa/ifc: not in enabled drivers build config 00:02:27.798 vdpa/mlx5: not in enabled drivers build config 00:02:27.798 vdpa/sfc: not in enabled drivers build config 00:02:27.798 event/cnxk: not in enabled drivers build config 00:02:27.798 event/dlb2: not in enabled drivers build config 00:02:27.798 event/dpaa: not in enabled drivers build config 00:02:27.798 event/dpaa2: not in enabled drivers build config 00:02:27.798 event/dsw: not in enabled drivers build config 00:02:27.798 event/opdl: not in enabled drivers build config 00:02:27.798 event/skeleton: not in enabled drivers build config 00:02:27.798 event/sw: not in enabled drivers build config 00:02:27.798 event/octeontx: not in enabled drivers build config 00:02:27.798 baseband/acc: not in enabled drivers build config 00:02:27.798 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.798 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.798 baseband/la12xx: not in enabled drivers build config 00:02:27.798 baseband/null: not in enabled drivers build config 00:02:27.798 baseband/turbo_sw: not in enabled drivers build config 00:02:27.798 gpu/cuda: not in enabled drivers build config 00:02:27.798 00:02:27.798 00:02:27.798 Build targets in project: 313 00:02:27.798 00:02:27.798 DPDK 22.11.4 00:02:27.798 00:02:27.798 User defined options 00:02:27.798 libdir : lib 00:02:27.798 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.798 c_args : -fPIC -g -fcommon -Werror 00:02:27.798 c_link_args : 00:02:27.798 enable_docs : false 00:02:27.798 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.798 enable_kmods : false 00:02:27.798 machine : native 00:02:27.798 tests : false 00:02:27.798 00:02:27.798 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.798 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
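For anyone replaying this stage outside Jenkins, the DPDK configure-and-build step traced above boils down to the two commands below. This is a minimal sketch assembled only from flags visible in this log (the prefix, c_args, driver list, and -j10); the /home/vagrant/spdk_repo paths are specific to this VM, and the explicit `meson setup` spelling is substituted for the bare `meson` invocation that triggers the deprecation warning above.

    # Configure DPDK with the same prefix, c_args and driver set as this run.
    # (Flags copied from the meson command logged earlier in this stage.)
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror' -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    # Compile with the same parallelism autobuild uses below (10 jobs).
    ninja -C build-tmp -j10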
00:02:27.798 02:25:52 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.798 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:28.057 [1/740] Generating lib/rte_kvargs_def with a custom command 00:02:28.057 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:28.057 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:28.057 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:28.057 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.057 [6/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.057 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.057 [8/740] Linking static target lib/librte_kvargs.a 00:02:28.057 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.057 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.057 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.057 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.057 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.315 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.315 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.315 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.315 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.315 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.315 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.315 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:28.315 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.315 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.573 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.573 [24/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.573 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.573 [26/740] Linking target lib/librte_kvargs.so.23.0 00:02:28.573 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.573 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.573 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.573 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.573 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.573 [32/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.573 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.573 [34/740] Linking static target lib/librte_telemetry.a 00:02:28.573 [35/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:28.573 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.573 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.573 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.831 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.831 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.831 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.831 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.831 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.831 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.089 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.089 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.089 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.089 [48/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.089 [49/740] Linking target lib/librte_telemetry.so.23.0 00:02:29.089 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.089 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.089 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.089 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.089 [54/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:29.089 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.089 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.089 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.089 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.089 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.089 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.089 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.089 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.089 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.347 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.347 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:29.347 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.347 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.347 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.347 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.347 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.347 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.347 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.347 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.347 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.347 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.347 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.347 [77/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.347 [78/740] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.347 [79/740] Generating lib/rte_eal_def with a custom command 00:02:29.347 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:29.606 [81/740] Generating lib/rte_ring_def with a custom command 00:02:29.606 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:29.606 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:29.606 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:29.606 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.606 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.606 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:29.606 [88/740] Linking static target lib/librte_ring.a 00:02:29.606 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.606 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:29.864 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:29.864 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.864 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.864 [94/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.864 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:29.864 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.864 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.864 [98/740] Linking static target lib/librte_eal.a 00:02:30.123 [99/740] Generating lib/rte_mbuf_def with a custom command 00:02:30.123 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:30.123 [101/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.123 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.123 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.123 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.123 [105/740] Linking static target lib/librte_rcu.a 00:02:30.381 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.381 [107/740] Linking static target lib/librte_mempool.a 00:02:30.381 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.381 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.381 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:30.381 [111/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.381 [112/740] Generating lib/rte_net_def with a custom command 00:02:30.381 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:30.381 [114/740] Generating lib/rte_meter_def with a custom command 00:02:30.381 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.640 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:30.640 [117/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.640 [118/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.640 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.640 [120/740] Linking static target lib/librte_meter.a 00:02:30.640 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.640 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.898 [123/740] Linking static target lib/librte_net.a 00:02:30.898 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.898 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.898 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.898 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.898 [128/740] Linking static target lib/librte_mbuf.a 00:02:31.156 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.156 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.156 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.156 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.156 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.415 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.415 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.673 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.673 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:31.673 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:31.673 [139/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.673 [140/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.673 [141/740] Generating lib/rte_pci_def with a custom command 00:02:31.673 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:31.673 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.673 [144/740] Linking static target lib/librte_pci.a 00:02:31.673 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.673 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.673 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.933 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.933 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.933 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.933 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.933 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.933 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.933 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.933 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.933 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.933 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:31.933 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:32.192 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.192 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.192 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:32.192 [162/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.192 
[163/740] Generating lib/rte_metrics_mingw with a custom command 00:02:32.192 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.451 [165/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.451 [166/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:32.451 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.451 [168/740] Generating lib/rte_hash_def with a custom command 00:02:32.451 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:32.451 [170/740] Generating lib/rte_timer_def with a custom command 00:02:32.451 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:32.451 [172/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.451 [173/740] Linking static target lib/librte_cmdline.a 00:02:32.451 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.451 [175/740] Linking static target lib/librte_metrics.a 00:02:32.710 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.710 [177/740] Linking static target lib/librte_timer.a 00:02:32.970 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.970 [179/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.970 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.229 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.229 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.229 [183/740] Linking static target lib/librte_ethdev.a 00:02:33.229 [184/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:33.488 [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:33.488 [186/740] Generating lib/rte_acl_def with a custom command 00:02:33.488 [187/740] Generating lib/rte_acl_mingw with a custom command 00:02:33.488 [188/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.488 [189/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:33.488 [190/740] Generating lib/rte_bbdev_def with a custom command 00:02:33.488 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:33.488 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:33.488 [193/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:33.488 [194/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:34.057 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:34.057 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:34.057 [197/740] Linking static target lib/librte_bitratestats.a 00:02:34.057 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:34.057 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.057 [200/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:34.057 [201/740] Linking static target lib/librte_bbdev.a 00:02:34.346 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:34.611 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:34.611 [204/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.611 [205/740] Linking static target lib/librte_hash.a 00:02:34.611 [206/740] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:34.611 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:34.871 [208/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.871 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:34.871 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:34.871 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:34.871 [212/740] Generating lib/rte_cfgfile_def with a custom command 00:02:34.871 [213/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:35.130 [214/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:35.130 [215/740] Linking static target lib/acl/libavx512_tmp.a 00:02:35.130 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:35.130 [217/740] Linking static target lib/librte_cfgfile.a 00:02:35.130 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:35.389 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.389 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:35.389 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:35.389 [222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:35.389 [223/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.389 [224/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.647 [225/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.647 [226/740] Generating lib/rte_cryptodev_def with a custom command 00:02:35.647 [227/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:35.647 [228/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:35.647 [229/740] Linking static target lib/librte_acl.a 00:02:35.647 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.647 [231/740] Linking static target lib/librte_compressdev.a 00:02:35.906 [232/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:35.906 [233/740] Linking static target lib/librte_bpf.a 00:02:35.906 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.906 [235/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.906 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:35.906 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:35.906 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.165 [239/740] Generating lib/rte_efd_def with a custom command 00:02:36.165 [240/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:36.165 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:36.165 [242/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.165 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:36.423 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:36.423 [245/740] Linking static target lib/librte_distributor.a 00:02:36.423 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:36.423 [247/740] Generating lib/distributor.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:36.682 [248/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:36.682 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.941 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:36.941 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:36.941 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:37.200 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:37.200 [254/740] Linking static target lib/librte_efd.a 00:02:37.200 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:37.200 [256/740] Generating lib/rte_gpudev_def with a custom command 00:02:37.200 [257/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.200 [258/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:37.460 [259/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:37.460 [260/740] Linking static target lib/librte_gpudev.a 00:02:37.718 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:37.718 [262/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.718 [263/740] Linking static target lib/librte_cryptodev.a 00:02:37.718 [264/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:37.718 [265/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:37.977 [266/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:37.977 [267/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:37.977 [268/740] Generating lib/rte_gro_def with a custom command 00:02:37.977 [269/740] Generating lib/rte_gro_mingw with a custom command 00:02:37.977 [270/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.977 [271/740] Linking target lib/librte_eal.so.23.0 00:02:38.235 [272/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:38.235 [273/740] Linking target lib/librte_ring.so.23.0 00:02:38.235 [274/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:38.235 [275/740] Linking target lib/librte_meter.so.23.0 00:02:38.235 [276/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:38.235 [277/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:38.235 [278/740] Linking target lib/librte_pci.so.23.0 00:02:38.235 [279/740] Linking target lib/librte_rcu.so.23.0 00:02:38.235 [280/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:38.235 [281/740] Linking target lib/librte_mempool.so.23.0 00:02:38.235 [282/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:38.494 [283/740] Linking target lib/librte_timer.so.23.0 00:02:38.494 [284/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:38.494 [285/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:38.494 [286/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:38.494 [287/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:38.494 [288/740] Linking target lib/librte_cfgfile.so.23.0 00:02:38.494 [289/740] Linking static target lib/librte_gro.a 00:02:38.494 
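A note on the recurring "Generating symbol file …/librte_X.so.23.0.symbols" and "Generating lib/X.sym_chk with a custom command" steps interleaved here: these appear to be DPDK's per-library export checks, which compare the symbols a freshly linked shared object actually exports against its version map. To inspect such exports by hand, a command along these lines would work (illustrative only; the library path is inferred from the build directory used by the ninja invocation above):

    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.23.0 | head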
[290/740] Linking target lib/librte_acl.so.23.0 00:02:38.494 [291/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:38.494 [292/740] Linking target lib/librte_mbuf.so.23.0 00:02:38.494 [293/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:38.494 [294/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:38.494 [295/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.494 [296/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:38.494 [297/740] Generating lib/rte_gso_def with a custom command 00:02:38.494 [298/740] Linking target lib/librte_net.so.23.0 00:02:38.752 [299/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:38.752 [300/740] Linking target lib/librte_bbdev.so.23.0 00:02:38.752 [301/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.752 [302/740] Linking target lib/librte_compressdev.so.23.0 00:02:38.752 [303/740] Linking static target lib/librte_eventdev.a 00:02:38.752 [304/740] Linking target lib/librte_distributor.so.23.0 00:02:38.752 [305/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.752 [306/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:38.752 [307/740] Generating lib/rte_gso_mingw with a custom command 00:02:38.752 [308/740] Linking target lib/librte_gpudev.so.23.0 00:02:38.752 [309/740] Linking target lib/librte_cmdline.so.23.0 00:02:38.752 [310/740] Linking target lib/librte_hash.so.23.0 00:02:38.752 [311/740] Linking target lib/librte_ethdev.so.23.0 00:02:38.752 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:38.752 [313/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:38.752 [314/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:39.010 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:39.010 [316/740] Linking target lib/librte_efd.so.23.0 00:02:39.010 [317/740] Linking target lib/librte_metrics.so.23.0 00:02:39.010 [318/740] Linking target lib/librte_gro.so.23.0 00:02:39.010 [319/740] Linking target lib/librte_bpf.so.23.0 00:02:39.010 [320/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:39.010 [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:39.010 [322/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:39.010 [323/740] Linking static target lib/librte_gso.a 00:02:39.010 [324/740] Linking target lib/librte_bitratestats.so.23.0 00:02:39.010 [325/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:39.010 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:02:39.010 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:39.268 [328/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.268 [329/740] Linking target lib/librte_gso.so.23.0 00:02:39.268 [330/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:39.268 [331/740] Linking static target lib/librte_jobstats.a 00:02:39.268 [332/740] Generating lib/rte_jobstats_def with a custom command 00:02:39.268 [333/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:39.268 [334/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:39.268 [335/740] Generating lib/rte_latencystats_def with a custom command 00:02:39.268 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:39.268 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:39.268 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:39.527 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:39.527 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:39.527 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:39.527 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:39.527 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.527 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:39.527 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:39.527 [346/740] Linking static target lib/librte_ip_frag.a 00:02:39.786 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:39.786 [348/740] Linking static target lib/librte_latencystats.a 00:02:39.786 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:39.786 [350/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.786 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:39.786 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:39.786 [353/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:40.044 [354/740] Generating lib/rte_member_def with a custom command 00:02:40.044 [355/740] Generating lib/rte_member_mingw with a custom command 00:02:40.044 [356/740] Linking target lib/librte_ip_frag.so.23.0 00:02:40.044 [357/740] Generating lib/rte_pcapng_def with a custom command 00:02:40.044 [358/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.044 [359/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:40.044 [360/740] Linking target lib/librte_latencystats.so.23.0 00:02:40.044 [361/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:40.044 [362/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.044 [363/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.045 [364/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.303 [365/740] Linking target lib/librte_cryptodev.so.23.0 00:02:40.303 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:40.303 [367/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:40.303 [368/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:40.303 [369/740] Linking static target lib/librte_lpm.a 00:02:40.303 [370/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:40.303 [371/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.562 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:40.562 [373/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.562 [374/740] Generating lib/rte_power_def with a custom command 00:02:40.562 [375/740] 
Generating lib/rte_power_mingw with a custom command 00:02:40.562 [376/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.562 [377/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:40.562 [378/740] Linking static target lib/librte_pcapng.a 00:02:40.562 [379/740] Generating lib/rte_rawdev_def with a custom command 00:02:40.562 [380/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.562 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:40.820 [382/740] Linking target lib/librte_lpm.so.23.0 00:02:40.820 [383/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:40.820 [384/740] Generating lib/rte_regexdev_def with a custom command 00:02:40.820 [385/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:40.820 [386/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:40.820 [387/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.820 [388/740] Generating lib/rte_dmadev_def with a custom command 00:02:40.820 [389/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:40.820 [390/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:40.820 [391/740] Generating lib/rte_rib_def with a custom command 00:02:40.820 [392/740] Generating lib/rte_rib_mingw with a custom command 00:02:41.078 [393/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.078 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:41.078 [395/740] Linking static target lib/librte_rawdev.a 00:02:41.078 [396/740] Linking target lib/librte_pcapng.so.23.0 00:02:41.078 [397/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.078 [398/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:41.078 [399/740] Linking static target lib/librte_power.a 00:02:41.078 [400/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:41.078 [401/740] Linking target lib/librte_eventdev.so.23.0 00:02:41.078 [402/740] Generating lib/rte_reorder_def with a custom command 00:02:41.078 [403/740] Generating lib/rte_reorder_mingw with a custom command 00:02:41.078 [404/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.078 [405/740] Linking static target lib/librte_dmadev.a 00:02:41.336 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:41.336 [407/740] Linking static target lib/librte_regexdev.a 00:02:41.336 [408/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:41.336 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:41.336 [410/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:41.595 [411/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:41.595 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:41.595 [413/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.595 [414/740] Linking static target lib/librte_member.a 00:02:41.595 [415/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:41.595 [416/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.595 [417/740] Generating lib/rte_sched_def with a custom command 
00:02:41.595 [418/740] Linking static target lib/librte_reorder.a 00:02:41.595 [419/740] Linking target lib/librte_rawdev.so.23.0 00:02:41.595 [420/740] Generating lib/rte_sched_mingw with a custom command 00:02:41.595 [421/740] Generating lib/rte_security_def with a custom command 00:02:41.595 [422/740] Generating lib/rte_security_mingw with a custom command 00:02:41.595 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:41.595 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:41.595 [425/740] Generating lib/rte_stack_def with a custom command 00:02:41.853 [426/740] Generating lib/rte_stack_mingw with a custom command 00:02:41.853 [427/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:41.853 [428/740] Linking static target lib/librte_stack.a 00:02:41.853 [429/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.853 [430/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.853 [431/740] Linking target lib/librte_reorder.so.23.0 00:02:41.853 [432/740] Linking target lib/librte_dmadev.so.23.0 00:02:41.853 [433/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:41.853 [434/740] Linking static target lib/librte_rib.a 00:02:41.853 [435/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.853 [436/740] Linking target lib/librte_member.so.23.0 00:02:41.853 [437/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:41.853 [438/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:41.853 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.853 [440/740] Linking target lib/librte_stack.so.23.0 00:02:41.853 [441/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.112 [442/740] Linking target lib/librte_regexdev.so.23.0 00:02:42.112 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.112 [444/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.112 [445/740] Linking static target lib/librte_security.a 00:02:42.112 [446/740] Linking target lib/librte_power.so.23.0 00:02:42.370 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.370 [448/740] Linking target lib/librte_rib.so.23.0 00:02:42.370 [449/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.370 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.370 [451/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:42.370 [452/740] Generating lib/rte_vhost_def with a custom command 00:02:42.370 [453/740] Generating lib/rte_vhost_mingw with a custom command 00:02:42.629 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.629 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.629 [456/740] Linking target lib/librte_security.so.23.0 00:02:42.886 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:42.886 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:42.886 [459/740] Linking static target lib/librte_sched.a 00:02:42.886 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:43.144 
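Note that each library appears twice in the link steps, once as "Linking static target lib/librte_X.a" and once as "Linking target lib/librte_X.so.23.0": meson builds both the static and the shared flavor of every enabled DPDK library. Once the install step at the end of this log has populated the prefix, a consumer would typically resolve whichever flavor it needs through pkg-config; a minimal sketch, assuming the prefix and libdir from the configuration summary above:

    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk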
[461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:43.144 [462/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.144 [463/740] Generating lib/rte_ipsec_def with a custom command 00:02:43.144 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:43.402 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.402 [466/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:43.402 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:43.660 [468/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:43.660 [469/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:43.660 [470/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:43.660 [471/740] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:43.660 [472/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.660 [473/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:43.660 [474/740] Generating lib/rte_fib_def with a custom command 00:02:43.660 [475/740] Generating lib/rte_fib_mingw with a custom command 00:02:43.660 [476/740] Linking target lib/librte_sched.so.23.0 00:02:43.660 [477/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:43.919 [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:43.919 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:44.176 [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:44.176 [481/740] Linking static target lib/librte_ipsec.a 00:02:44.448 [482/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:44.448 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:44.448 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:44.448 [485/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:44.448 [486/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.449 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:44.449 [488/740] Linking target lib/librte_ipsec.so.23.0 00:02:44.449 [489/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:44.449 [490/740] Linking static target lib/librte_fib.a 00:02:44.706 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:44.706 [492/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.706 [493/740] Linking target lib/librte_fib.so.23.0 00:02:44.963 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:44.963 [495/740] Generating lib/rte_port_def with a custom command 00:02:44.963 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:45.221 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:45.221 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:45.221 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:45.221 [500/740] Generating lib/rte_pdump_def with a custom command 00:02:45.221 [501/740] Generating lib/rte_pdump_mingw with a custom command 00:02:45.221 [502/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:45.221 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:45.221 [504/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:45.479 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:45.479 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:45.479 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:45.479 [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:45.479 [509/740] Linking static target lib/librte_port.a 00:02:45.737 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:45.737 [511/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:45.737 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:45.994 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:45.994 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:45.994 [515/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:45.994 [516/740] Linking static target lib/librte_pdump.a 00:02:46.252 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.252 [518/740] Linking target lib/librte_pdump.so.23.0 00:02:46.252 [519/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.252 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:46.509 [521/740] Linking target lib/librte_port.so.23.0 00:02:46.509 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:46.509 [523/740] Generating lib/rte_table_def with a custom command 00:02:46.509 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:46.509 [525/740] Generating lib/rte_table_mingw with a custom command 00:02:46.509 [526/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:46.767 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:46.767 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:46.767 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:46.767 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:46.767 [531/740] Generating lib/rte_pipeline_def with a custom command 00:02:46.767 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:46.767 [533/740] Linking static target lib/librte_table.a 00:02:46.767 [534/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:47.025 [535/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:47.283 [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:47.283 [537/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:47.283 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:47.541 [539/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.541 [540/740] Linking target lib/librte_table.so.23.0 00:02:47.541 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:47.799 [542/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.799 [543/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:47.799 [544/740] Generating lib/rte_graph_def with a custom command 00:02:47.799 [545/740] Generating lib/rte_graph_mingw with a custom 
command 00:02:47.799 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:48.057 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:48.057 [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:48.057 [549/740] Linking static target lib/librte_graph.a 00:02:48.057 [550/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:48.057 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:48.314 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:48.314 [553/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:48.314 [554/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:48.572 [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:48.572 [556/740] Generating lib/rte_node_def with a custom command 00:02:48.572 [557/740] Generating lib/rte_node_mingw with a custom command 00:02:48.572 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:48.830 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.830 [560/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:48.830 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.830 [562/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:48.830 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.830 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:48.830 [565/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:48.830 [566/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.830 [567/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:49.088 [568/740] Linking target lib/librte_graph.so.23.0 00:02:49.088 [569/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.088 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:49.088 [571/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:49.088 [572/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:49.088 [573/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:49.088 [574/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.088 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:49.088 [576/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.088 [577/740] Linking static target lib/librte_node.a 00:02:49.088 [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.346 [579/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.346 [580/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.346 [581/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.346 [582/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.346 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.346 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.346 [585/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.346 [586/740] Linking static target drivers/librte_bus_vdev.a 00:02:49.346 [587/740] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.604 [588/740] Linking target lib/librte_node.so.23.0 00:02:49.604 [589/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.604 [590/740] Linking static target drivers/librte_bus_pci.a 00:02:49.604 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.604 [592/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.604 [593/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.604 [594/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:49.862 [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:49.862 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:49.862 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:49.862 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:49.862 [599/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.124 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.124 [601/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:50.124 [602/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.124 [603/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:50.124 [604/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.124 [605/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:50.124 [606/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.124 [607/740] Linking static target drivers/librte_mempool_ring.a 00:02:50.124 [608/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.384 [609/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:50.642 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:50.900 [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:51.157 [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:51.157 [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:51.157 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:51.723 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:51.723 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:51.723 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:51.723 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:51.980 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:52.238 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:52.238 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:52.238 [622/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:52.238 [623/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:52.810 [624/740] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:53.068 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:53.068 [626/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:53.326 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:53.326 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:53.326 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:53.326 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:53.326 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:53.584 [632/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:53.584 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:53.584 [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:54.150 [635/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:54.150 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:54.150 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:54.408 [638/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:54.408 [639/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:54.408 [640/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:54.667 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:54.667 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:54.667 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:54.667 [644/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:54.667 [645/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.667 [646/740] Linking static target drivers/librte_net_i40e.a 00:02:54.926 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:54.926 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:54.926 [649/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.926 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:55.185 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:55.443 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:55.443 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:55.443 [654/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.443 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:55.443 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:55.702 [657/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:55.702 [658/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:55.702 [659/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:55.702 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:55.961 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:55.961 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:55.961 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:55.961 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:56.219 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:56.219 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:56.477 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:56.752 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:57.018 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:57.018 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:57.276 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:57.276 [672/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.276 [673/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:57.535 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:57.535 [675/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:57.535 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.535 [677/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.535 [678/740] Linking static target lib/librte_vhost.a 00:02:57.793 [679/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.793 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:57.793 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:57.793 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:58.052 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:58.052 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:58.310 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:58.310 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:58.310 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:58.310 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:58.310 [689/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.569 [690/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:58.569 [691/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:58.826 [692/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:58.827 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:58.827 [694/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.084 [695/740] Linking target lib/librte_vhost.so.23.0 00:02:59.084 [696/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:59.084 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:59.649 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:59.649 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:59.649 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:59.649 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:59.906 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:00.164 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:00.164 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:00.164 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:00.421 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:00.421 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:00.421 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:00.679 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:00.937 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:01.195 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:01.195 [712/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:01.195 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:01.453 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:01.453 [715/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:01.453 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:01.453 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:01.453 [718/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:02.025 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:02.284 [720/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:02.851 [721/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:02.851 [722/740] Linking static target lib/librte_pipeline.a 00:03:03.417 [723/740] Linking target app/dpdk-test-acl 00:03:03.417 [724/740] Linking target app/dpdk-pdump 00:03:03.417 [725/740] Linking target app/dpdk-test-crypto-perf 00:03:03.417 [726/740] Linking target app/dpdk-test-cmdline 00:03:03.417 [727/740] Linking target app/dpdk-test-bbdev 00:03:03.417 [728/740] Linking target app/dpdk-proc-info 00:03:03.675 [729/740] Linking target app/dpdk-test-fib 00:03:03.675 [730/740] Linking target app/dpdk-test-compress-perf 00:03:03.675 [731/740] Linking target app/dpdk-test-eventdev 00:03:04.241 [732/740] Linking target app/dpdk-test-pipeline 00:03:04.241 [733/740] Linking target app/dpdk-test-gpudev 00:03:04.241 [734/740] Linking target app/dpdk-test-flow-perf 00:03:04.241 [735/740] Linking target app/dpdk-test-sad 00:03:04.241 [736/740] Linking target app/dpdk-test-regex 00:03:04.241 [737/740] Linking target app/dpdk-test-security-perf 00:03:04.241 [738/740] Linking target app/dpdk-testpmd 00:03:06.142 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.400 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:06.400 02:26:31 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:06.400 ninja: Entering 
directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:06.400 [0/1] Installing files. 00:03:06.661 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.661 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.662 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.663 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.924 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.925 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.925 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.925 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.184 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.185 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.185 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.185 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.185 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:07.185 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.185 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.446 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:07.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
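[Editor's note, not part of the log: the headers landing in build/include above (rte_eal.h, rte_ring.h, rte_errno.h, and the rest of the EAL/ring/mbuf set) are what downstream consumers such as SPDK compile against. A minimal sketch of that usage, assuming the libdpdk.pc file installed later in this run is on PKG_CONFIG_PATH (e.g. build/lib/pkgconfig), built with something like: cc ring_demo.c $(pkg-config --cflags --libs libdpdk). The file name ring_demo.c and the constants chosen are illustrative only.

    /* ring_demo.c - illustrative consumer of the headers installed above. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
        /* rte_eal.h: bring up the environment abstraction layer. */
        if (rte_eal_init(argc, argv) < 0)
            rte_panic("EAL init failed\n");

        /* rte_ring.h: single-producer/single-consumer ring, 1024 slots. */
        struct rte_ring *r = rte_ring_create("demo", 1024, rte_socket_id(),
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
            rte_panic("ring create failed: %s\n", rte_strerror(rte_errno));

        int value = 42;
        void *obj = NULL;
        if (rte_ring_enqueue(r, &value) == 0 &&
            rte_ring_dequeue(r, &obj) == 0)
            printf("dequeued %d\n", *(int *)obj);

        rte_ring_free(r);
        rte_eal_cleanup();
        return 0;
    }

End of editor's note; the install log continues below.]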
00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.448 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.449 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:07.450 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:07.450 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:07.450 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:07.450 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:07.450 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:07.450 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:07.450 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:07.450 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:07.450 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:07.450 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:07.450 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:07.450 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:07.450 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:07.450 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:07.450 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:07.450 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:07.450 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:07.450 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:07.450 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:07.451 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:07.451 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:07.451 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:07.451 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:07.451 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:07.451 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:07.451 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:07.451 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:07.451 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:07.451 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:07.451 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:07.451 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:07.451 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:07.451 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:07.451 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:07.451 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:07.451 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:07.451 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:07.451 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:07.451 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:07.451 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:07.451 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:07.451 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:07.451 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:07.451 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:07.451 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:07.451 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:07.451 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:07.451 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:07.451 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:07.451 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:07.451 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:07.451 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:07.451 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:07.451 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:07.451 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:07.451 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:07.451 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:07.451 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:07.451 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:07.451 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:07.451 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:07.451 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:07.451 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:07.451 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:07.451 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:07.451 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:07.451 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:07.451 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:07.451 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:07.451 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:07.451 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:07.451 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:07.451 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:07.451 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:07.451 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:07.451 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:07.451 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:07.451 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:07.451 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:07.451 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
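These "Installing symlink pointing to ..." entries lay down the standard ELF so-name chain for each DPDK library: the real file carries the full version (librte_foo.so.23.0), a first symlink adds the ABI name the runtime linker resolves (librte_foo.so.23), and a second adds the bare name used at link time (librte_foo.so); the './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' moves additionally relocate the driver libraries into the pmds-23.0 plugin directory. A minimal sketch of the chain for one library, with names taken from the log (the commands are illustrative, not the actual meson install step):

# Sketch only: reproduce the three-name layout for librte_kvargs.
LIBDIR=/home/vagrant/spdk_repo/dpdk/build/lib        # install dir from the log
cd "$LIBDIR"                                         # real file: librte_kvargs.so.23.0
ln -sf librte_kvargs.so.23.0 librte_kvargs.so.23     # ABI name the runtime linker loads
ln -sf librte_kvargs.so.23 librte_kvargs.so          # bare name used by 'cc ... -lrte_kvargs'
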
00:03:07.451 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:07.451 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:07.451 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:07.451 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:07.451 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:07.451 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:07.451 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:07.451 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:07.451 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:07.451 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:07.451 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:07.451 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:07.451 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:07.451 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:07.452 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:07.452 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:07.452 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:07.452 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:07.452 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:07.452 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:07.452 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:07.452 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:07.452 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:07.452 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:07.452 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:07.452 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:07.452 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:07.452 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:07.452 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:07.452 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:07.452 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:07.452 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:07.452 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:07.452 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:07.452 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:07.452 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:07.452 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:07.452 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:07.452 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:07.452 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:07.452 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:07.452 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:07.452 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:07.452 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:07.452 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:07.452 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:07.452 02:26:32 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:07.452 02:26:32 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:07.452 02:26:32 -- common/autobuild_common.sh@200 -- $ cat 00:03:07.452 02:26:32 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:07.452 00:03:07.452 real 0m46.730s 00:03:07.452 user 5m0.737s 00:03:07.452 sys 0m41.511s 00:03:07.452 02:26:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:07.452 02:26:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.452 ************************************ 00:03:07.452 END TEST build_native_dpdk 00:03:07.452 ************************************ 00:03:07.710 02:26:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:07.710 02:26:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:07.710 02:26:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:07.710 02:26:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:07.710 02:26:32 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:07.710 02:26:32 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:07.710 02:26:32 -- common/autobuild_common.sh@411 -- $ run_test unittest_build 
_unittest_build 00:03:07.710 02:26:32 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:03:07.710 02:26:32 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:07.710 02:26:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.710 ************************************ 00:03:07.710 START TEST unittest_build 00:03:07.710 ************************************ 00:03:07.710 02:26:32 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:03:07.710 02:26:32 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:03:07.710 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:07.711 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.711 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:07.711 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:08.277 Using 'verbs' RDMA provider 00:03:23.413 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:38.288 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:38.288 Creating mk/config.mk...done. 00:03:38.288 Creating mk/cc.flags.mk...done. 00:03:38.288 Type 'make' to build. 00:03:38.288 02:27:01 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:38.288 make[1]: Nothing to be done for 'all'. 00:03:38.854 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.420 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.420 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.420 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.420 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.678 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:39.941 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' 
[-w+other] 00:03:42.017 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:55.043 CC lib/ut/ut.o 00:03:55.043 CC lib/log/log.o 00:03:55.043 CC lib/ut_mock/mock.o 00:03:55.043 CC lib/log/log_flags.o 00:03:55.043 CC lib/log/log_deprecated.o 00:03:55.043 LIB libspdk_ut_mock.a 00:03:55.043 LIB libspdk_log.a 00:03:55.043 LIB libspdk_ut.a 00:03:55.043 CC lib/ioat/ioat.o
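The reg_sizes.asm warnings above come from the ISA-L assembly that make builds first: include/reg_sizes.asm declares a .note.gnu.property section with the 'note' attribute, and the NASM release on this builder predates that attribute, so it warns under [-w+other] and ignores it instead of failing. A hedged reproduction, assuming only that such an older nasm is on PATH (the file name and note payload are invented for illustration):

# Sketch: provoke the same warning with a minimal input file.
cat > note_demo.asm <<'EOF'
; 'note' is understood by newer NASM releases; older ones warn and ignore it
section .note.gnu.property note alloc align=8
dd 4, 16, 5 ; illustrative note header words, not the real GNU property payload
section .text
EOF
nasm -f elf64 note_demo.asm -o note_demo.o
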
00:03:55.043 CC lib/util/base64.o 00:03:55.043 CXX lib/trace_parser/trace.o 00:03:55.043 CC lib/util/bit_array.o 00:03:55.043 CC lib/util/cpuset.o 00:03:55.043 CC lib/util/crc16.o 00:03:55.043 CC lib/util/crc32.o 00:03:55.043 CC lib/util/crc32c.o 00:03:55.043 CC lib/dma/dma.o 00:03:55.301 CC lib/vfio_user/host/vfio_user_pci.o 00:03:55.301 CC lib/util/crc32_ieee.o 00:03:55.301 CC lib/vfio_user/host/vfio_user.o 00:03:55.301 CC lib/util/crc64.o 00:03:55.301 LIB libspdk_dma.a 00:03:55.301 CC lib/util/dif.o 00:03:55.301 CC lib/util/fd.o 00:03:55.301 CC lib/util/file.o 00:03:55.301 CC lib/util/hexlify.o 00:03:55.560 CC lib/util/iov.o 00:03:55.560 CC lib/util/math.o 00:03:55.560 LIB libspdk_ioat.a 00:03:55.560 CC lib/util/pipe.o 00:03:55.560 CC lib/util/strerror_tls.o 00:03:55.560 CC lib/util/string.o 00:03:55.560 CC lib/util/uuid.o 00:03:55.560 CC lib/util/fd_group.o 00:03:55.560 LIB libspdk_vfio_user.a 00:03:55.560 CC lib/util/xor.o 00:03:55.560 CC lib/util/zipf.o 00:03:56.127 LIB libspdk_util.a 00:03:56.385 CC lib/idxd/idxd.o 00:03:56.385 CC lib/idxd/idxd_user.o 00:03:56.385 CC lib/json/json_parse.o 00:03:56.385 CC lib/json/json_util.o 00:03:56.385 CC lib/rdma/common.o 00:03:56.385 CC lib/json/json_write.o 00:03:56.385 CC lib/vmd/vmd.o 00:03:56.385 CC lib/conf/conf.o 00:03:56.385 CC lib/env_dpdk/env.o 00:03:56.385 LIB libspdk_trace_parser.a 00:03:56.385 CC lib/env_dpdk/memory.o 00:03:56.644 LIB libspdk_conf.a 00:03:56.644 CC lib/env_dpdk/pci.o 00:03:56.644 CC lib/env_dpdk/init.o 00:03:56.644 CC lib/env_dpdk/threads.o 00:03:56.644 CC lib/rdma/rdma_verbs.o 00:03:56.644 CC lib/env_dpdk/pci_ioat.o 00:03:56.644 LIB libspdk_json.a 00:03:56.644 CC lib/env_dpdk/pci_virtio.o 00:03:56.644 CC lib/env_dpdk/pci_vmd.o 00:03:56.644 CC lib/env_dpdk/pci_idxd.o 00:03:56.903 CC lib/env_dpdk/pci_event.o 00:03:56.903 LIB libspdk_rdma.a 00:03:56.903 CC lib/jsonrpc/jsonrpc_server.o 00:03:56.903 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:56.903 CC lib/jsonrpc/jsonrpc_client.o 00:03:56.903 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:56.903 LIB libspdk_idxd.a 00:03:56.903 CC lib/env_dpdk/sigbus_handler.o 00:03:56.903 CC lib/vmd/led.o 00:03:56.903 CC lib/env_dpdk/pci_dpdk.o 00:03:56.903 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.162 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.162 LIB libspdk_vmd.a 00:03:57.162 LIB libspdk_jsonrpc.a 00:03:57.162 CC lib/rpc/rpc.o 00:03:57.421 LIB libspdk_rpc.a 00:03:57.680 CC lib/trace/trace.o 00:03:57.680 CC lib/trace/trace_flags.o 00:03:57.680 CC lib/trace/trace_rpc.o 00:03:57.680 CC lib/sock/sock.o 00:03:57.680 CC lib/sock/sock_rpc.o 00:03:57.680 CC lib/notify/notify_rpc.o 00:03:57.680 CC lib/notify/notify.o 00:03:57.680 LIB libspdk_notify.a 00:03:57.938 LIB libspdk_env_dpdk.a 00:03:57.938 LIB libspdk_trace.a 00:03:57.938 CC lib/thread/thread.o 00:03:57.938 CC lib/thread/iobuf.o 00:03:57.939 LIB libspdk_sock.a 00:03:58.197 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.197 CC lib/nvme/nvme_ctrlr.o 00:03:58.197 CC lib/nvme/nvme_fabric.o 00:03:58.197 CC lib/nvme/nvme_ns_cmd.o 00:03:58.197 CC lib/nvme/nvme_ns.o 00:03:58.197 CC lib/nvme/nvme_pcie_common.o 00:03:58.197 CC lib/nvme/nvme_pcie.o 00:03:58.197 CC lib/nvme/nvme_qpair.o 00:03:58.197 CC lib/nvme/nvme.o 00:03:58.764 CC lib/nvme/nvme_quirks.o 00:03:58.764 CC lib/nvme/nvme_transport.o 00:03:58.764 CC lib/nvme/nvme_discovery.o 00:03:58.764 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.764 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:59.022 CC lib/nvme/nvme_tcp.o 00:03:59.022 CC lib/nvme/nvme_opal.o 00:03:59.022 CC lib/nvme/nvme_io_msg.o 00:03:59.280 CC 
lib/nvme/nvme_poll_group.o 00:03:59.280 CC lib/nvme/nvme_zns.o 00:03:59.280 CC lib/nvme/nvme_cuse.o 00:03:59.280 CC lib/nvme/nvme_vfio_user.o 00:03:59.280 CC lib/nvme/nvme_rdma.o 00:03:59.845 LIB libspdk_thread.a 00:03:59.845 CC lib/init/subsystem.o 00:03:59.845 CC lib/accel/accel.o 00:03:59.845 CC lib/init/subsystem_rpc.o 00:03:59.845 CC lib/init/json_config.o 00:03:59.845 CC lib/blob/blobstore.o 00:03:59.845 CC lib/virtio/virtio.o 00:03:59.845 CC lib/init/rpc.o 00:04:00.104 CC lib/blob/request.o 00:04:00.104 CC lib/blob/zeroes.o 00:04:00.104 CC lib/blob/blob_bs_dev.o 00:04:00.104 LIB libspdk_init.a 00:04:00.104 CC lib/virtio/virtio_vhost_user.o 00:04:00.104 CC lib/virtio/virtio_vfio_user.o 00:04:00.104 CC lib/virtio/virtio_pci.o 00:04:00.362 CC lib/accel/accel_rpc.o 00:04:00.362 CC lib/event/app.o 00:04:00.362 CC lib/accel/accel_sw.o 00:04:00.362 CC lib/event/reactor.o 00:04:00.620 CC lib/event/log_rpc.o 00:04:00.620 CC lib/event/app_rpc.o 00:04:00.620 LIB libspdk_virtio.a 00:04:00.620 CC lib/event/scheduler_static.o 00:04:00.620 LIB libspdk_nvme.a 00:04:00.879 LIB libspdk_event.a 00:04:01.138 LIB libspdk_accel.a 00:04:01.138 CC lib/bdev/bdev.o 00:04:01.138 CC lib/bdev/bdev_rpc.o 00:04:01.138 CC lib/bdev/bdev_zone.o 00:04:01.138 CC lib/bdev/part.o 00:04:01.138 CC lib/bdev/scsi_nvme.o 00:04:03.673 LIB libspdk_blob.a 00:04:03.673 CC lib/lvol/lvol.o 00:04:03.673 CC lib/blobfs/tree.o 00:04:03.673 CC lib/blobfs/blobfs.o 00:04:04.610 LIB libspdk_bdev.a 00:04:04.610 CC lib/nbd/nbd.o 00:04:04.610 CC lib/nbd/nbd_rpc.o 00:04:04.610 CC lib/nvmf/ctrlr_bdev.o 00:04:04.610 CC lib/nvmf/ctrlr_discovery.o 00:04:04.610 CC lib/nvmf/ctrlr.o 00:04:04.610 CC lib/nvmf/subsystem.o 00:04:04.610 CC lib/scsi/dev.o 00:04:04.610 CC lib/ftl/ftl_core.o 00:04:04.610 LIB libspdk_blobfs.a 00:04:04.610 CC lib/ftl/ftl_init.o 00:04:04.610 LIB libspdk_lvol.a 00:04:04.610 CC lib/nvmf/nvmf.o 00:04:04.869 CC lib/nvmf/nvmf_rpc.o 00:04:04.869 CC lib/scsi/lun.o 00:04:04.869 CC lib/ftl/ftl_layout.o 00:04:04.869 LIB libspdk_nbd.a 00:04:04.869 CC lib/nvmf/transport.o 00:04:05.128 CC lib/nvmf/tcp.o 00:04:05.128 CC lib/nvmf/rdma.o 00:04:05.128 CC lib/scsi/port.o 00:04:05.386 CC lib/ftl/ftl_debug.o 00:04:05.387 CC lib/scsi/scsi.o 00:04:05.387 CC lib/ftl/ftl_io.o 00:04:05.645 CC lib/scsi/scsi_bdev.o 00:04:05.645 CC lib/ftl/ftl_sb.o 00:04:05.645 CC lib/ftl/ftl_l2p.o 00:04:05.645 CC lib/ftl/ftl_l2p_flat.o 00:04:05.645 CC lib/ftl/ftl_nv_cache.o 00:04:05.903 CC lib/scsi/scsi_pr.o 00:04:05.903 CC lib/ftl/ftl_band.o 00:04:05.903 CC lib/ftl/ftl_band_ops.o 00:04:05.903 CC lib/ftl/ftl_writer.o 00:04:05.903 CC lib/ftl/ftl_rq.o 00:04:06.161 CC lib/ftl/ftl_reloc.o 00:04:06.161 CC lib/ftl/ftl_l2p_cache.o 00:04:06.161 CC lib/scsi/scsi_rpc.o 00:04:06.161 CC lib/scsi/task.o 00:04:06.161 CC lib/ftl/ftl_p2l.o 00:04:06.419 CC lib/ftl/mngt/ftl_mngt.o 00:04:06.419 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:06.419 LIB libspdk_scsi.a 00:04:06.419 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:06.419 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:06.677 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:06.677 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:06.677 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:06.677 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:06.678 CC lib/vhost/vhost.o 00:04:06.678 CC lib/iscsi/conn.o 00:04:06.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:06.935 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:06.935 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:06.935 CC lib/iscsi/init_grp.o 00:04:06.935 CC lib/vhost/vhost_rpc.o 00:04:06.935 CC lib/vhost/vhost_scsi.o 00:04:06.935 CC lib/vhost/vhost_blk.o 
00:04:07.193 CC lib/vhost/rte_vhost_user.o 00:04:07.193 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.193 CC lib/iscsi/iscsi.o 00:04:07.193 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.193 CC lib/ftl/utils/ftl_conf.o 00:04:07.452 CC lib/ftl/utils/ftl_md.o 00:04:07.452 CC lib/ftl/utils/ftl_mempool.o 00:04:07.452 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.452 CC lib/ftl/utils/ftl_property.o 00:04:07.710 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.710 LIB libspdk_nvmf.a 00:04:07.710 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.710 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.710 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.710 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.710 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:07.968 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:07.968 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:07.968 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:07.968 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:07.968 CC lib/ftl/base/ftl_base_dev.o 00:04:07.968 CC lib/ftl/base/ftl_base_bdev.o 00:04:07.968 CC lib/ftl/ftl_trace.o 00:04:07.968 CC lib/iscsi/md5.o 00:04:07.968 LIB libspdk_vhost.a 00:04:07.968 CC lib/iscsi/param.o 00:04:08.225 CC lib/iscsi/portal_grp.o 00:04:08.225 CC lib/iscsi/tgt_node.o 00:04:08.225 CC lib/iscsi/iscsi_subsystem.o 00:04:08.225 CC lib/iscsi/iscsi_rpc.o 00:04:08.225 CC lib/iscsi/task.o 00:04:08.225 LIB libspdk_ftl.a 00:04:08.790 LIB libspdk_iscsi.a 00:04:09.047 CC module/env_dpdk/env_dpdk_rpc.o 00:04:09.047 CC module/scheduler/gscheduler/gscheduler.o 00:04:09.047 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:09.047 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:09.047 CC module/sock/posix/posix.o 00:04:09.047 CC module/accel/iaa/accel_iaa.o 00:04:09.047 CC module/accel/error/accel_error.o 00:04:09.047 CC module/accel/dsa/accel_dsa.o 00:04:09.047 CC module/blob/bdev/blob_bdev.o 00:04:09.047 CC module/accel/ioat/accel_ioat.o 00:04:09.047 LIB libspdk_env_dpdk_rpc.a 00:04:09.047 CC module/accel/ioat/accel_ioat_rpc.o 00:04:09.305 LIB libspdk_scheduler_gscheduler.a 00:04:09.305 LIB libspdk_scheduler_dpdk_governor.a 00:04:09.305 CC module/accel/iaa/accel_iaa_rpc.o 00:04:09.305 CC module/accel/error/accel_error_rpc.o 00:04:09.305 LIB libspdk_scheduler_dynamic.a 00:04:09.305 CC module/accel/dsa/accel_dsa_rpc.o 00:04:09.305 LIB libspdk_accel_ioat.a 00:04:09.305 LIB libspdk_accel_dsa.a 00:04:09.305 LIB libspdk_accel_iaa.a 00:04:09.305 LIB libspdk_blob_bdev.a 00:04:09.305 LIB libspdk_accel_error.a 00:04:09.564 CC module/bdev/nvme/bdev_nvme.o 00:04:09.564 CC module/bdev/malloc/bdev_malloc.o 00:04:09.564 CC module/bdev/delay/vbdev_delay.o 00:04:09.564 CC module/bdev/gpt/gpt.o 00:04:09.564 CC module/bdev/null/bdev_null.o 00:04:09.564 CC module/bdev/passthru/vbdev_passthru.o 00:04:09.564 CC module/bdev/error/vbdev_error.o 00:04:09.564 CC module/blobfs/bdev/blobfs_bdev.o 00:04:09.564 CC module/bdev/lvol/vbdev_lvol.o 00:04:09.822 CC module/bdev/gpt/vbdev_gpt.o 00:04:09.822 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:09.822 LIB libspdk_sock_posix.a 00:04:09.822 CC module/bdev/null/bdev_null_rpc.o 00:04:09.822 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:09.822 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:09.822 CC module/bdev/error/vbdev_error_rpc.o 00:04:10.080 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:10.080 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:10.080 LIB libspdk_blobfs_bdev.a 00:04:10.080 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:10.080 LIB libspdk_bdev_null.a 00:04:10.080 LIB libspdk_bdev_passthru.a 00:04:10.080 CC module/bdev/nvme/nvme_rpc.o 00:04:10.081 
LIB libspdk_bdev_gpt.a 00:04:10.081 LIB libspdk_bdev_error.a 00:04:10.081 LIB libspdk_bdev_delay.a 00:04:10.081 LIB libspdk_bdev_malloc.a 00:04:10.081 CC module/bdev/raid/bdev_raid.o 00:04:10.338 CC module/bdev/split/vbdev_split.o 00:04:10.338 LIB libspdk_bdev_lvol.a 00:04:10.338 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:10.338 CC module/bdev/aio/bdev_aio.o 00:04:10.338 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:10.338 CC module/bdev/ftl/bdev_ftl.o 00:04:10.338 CC module/bdev/iscsi/bdev_iscsi.o 00:04:10.338 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:10.338 CC module/bdev/split/vbdev_split_rpc.o 00:04:10.596 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:10.596 LIB libspdk_bdev_zone_block.a 00:04:10.596 CC module/bdev/aio/bdev_aio_rpc.o 00:04:10.596 CC module/bdev/nvme/bdev_mdns_client.o 00:04:10.596 CC module/bdev/nvme/vbdev_opal.o 00:04:10.596 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:10.596 LIB libspdk_bdev_split.a 00:04:10.596 LIB libspdk_bdev_ftl.a 00:04:10.596 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:10.853 CC module/bdev/raid/bdev_raid_rpc.o 00:04:10.853 LIB libspdk_bdev_aio.a 00:04:10.853 LIB libspdk_bdev_iscsi.a 00:04:10.853 CC module/bdev/raid/bdev_raid_sb.o 00:04:10.853 CC module/bdev/raid/raid0.o 00:04:10.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:10.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.853 CC module/bdev/raid/raid1.o 00:04:10.853 CC module/bdev/raid/concat.o 00:04:11.111 CC module/bdev/raid/raid5f.o 00:04:11.369 LIB libspdk_bdev_virtio.a 00:04:11.627 LIB libspdk_bdev_raid.a 00:04:12.193 LIB libspdk_bdev_nvme.a 00:04:12.464 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.464 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:12.464 CC module/event/subsystems/iobuf/iobuf.o 00:04:12.464 CC module/event/subsystems/vmd/vmd.o 00:04:12.464 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:12.464 CC module/event/subsystems/sock/sock.o 00:04:12.464 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:12.464 LIB libspdk_event_scheduler.a 00:04:12.464 LIB libspdk_event_sock.a 00:04:12.464 LIB libspdk_event_vhost_blk.a 00:04:12.464 LIB libspdk_event_vmd.a 00:04:12.464 LIB libspdk_event_iobuf.a 00:04:12.722 CC module/event/subsystems/accel/accel.o 00:04:12.722 LIB libspdk_event_accel.a 00:04:12.980 CC module/event/subsystems/bdev/bdev.o 00:04:12.980 LIB libspdk_event_bdev.a 00:04:13.239 CC module/event/subsystems/scsi/scsi.o 00:04:13.239 CC module/event/subsystems/nbd/nbd.o 00:04:13.239 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.239 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.239 LIB libspdk_event_nbd.a 00:04:13.497 LIB libspdk_event_scsi.a 00:04:13.497 LIB libspdk_event_nvmf.a 00:04:13.497 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.497 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.756 LIB libspdk_event_iscsi.a 00:04:13.756 LIB libspdk_event_vhost_scsi.a 00:04:13.756 CXX app/trace/trace.o 00:04:13.756 CC app/trace_record/trace_record.o 00:04:14.014 CC examples/nvme/hello_world/hello_world.o 00:04:14.014 CC examples/ioat/perf/perf.o 00:04:14.014 CC examples/accel/perf/accel_perf.o 00:04:14.014 CC examples/sock/hello_world/hello_sock.o 00:04:14.014 CC app/nvmf_tgt/nvmf_main.o 00:04:14.014 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.014 CC test/accel/dif/dif.o 00:04:14.014 CC examples/blob/hello_world/hello_blob.o 00:04:14.014 LINK spdk_trace_record 00:04:14.272 LINK nvmf_tgt 00:04:14.272 LINK hello_world 00:04:14.272 LINK ioat_perf 
00:04:14.272 LINK hello_bdev 00:04:14.272 LINK hello_blob 00:04:14.272 LINK hello_sock 00:04:14.531 LINK spdk_trace 00:04:14.531 LINK accel_perf 00:04:14.531 LINK dif 00:04:14.789 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.789 CC examples/ioat/verify/verify.o 00:04:15.047 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.047 LINK lsvmd 00:04:15.047 LINK verify 00:04:15.305 CC test/app/bdev_svc/bdev_svc.o 00:04:15.305 CC examples/nvme/reconnect/reconnect.o 00:04:15.305 LINK bdev_svc 00:04:15.305 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:15.563 LINK bdevperf 00:04:15.563 LINK reconnect 00:04:15.821 CC examples/blob/cli/blobcli.o 00:04:16.079 LINK nvme_manage 00:04:16.337 CC examples/vmd/led/led.o 00:04:16.337 LINK blobcli 00:04:16.337 LINK led 00:04:16.595 CC test/bdev/bdevio/bdevio.o 00:04:17.162 CC test/blobfs/mkfs/mkfs.o 00:04:17.162 CC examples/nvme/arbitration/arbitration.o 00:04:17.162 TEST_HEADER include/spdk/accel_module.h 00:04:17.162 TEST_HEADER include/spdk/bit_pool.h 00:04:17.162 TEST_HEADER include/spdk/ioat.h 00:04:17.162 TEST_HEADER include/spdk/blobfs.h 00:04:17.162 TEST_HEADER include/spdk/notify.h 00:04:17.162 TEST_HEADER include/spdk/pipe.h 00:04:17.162 TEST_HEADER include/spdk/accel.h 00:04:17.162 TEST_HEADER include/spdk/file.h 00:04:17.162 TEST_HEADER include/spdk/version.h 00:04:17.162 LINK bdevio 00:04:17.162 TEST_HEADER include/spdk/trace_parser.h 00:04:17.162 TEST_HEADER include/spdk/opal_spec.h 00:04:17.162 TEST_HEADER include/spdk/uuid.h 00:04:17.162 TEST_HEADER include/spdk/likely.h 00:04:17.162 TEST_HEADER include/spdk/dif.h 00:04:17.162 TEST_HEADER include/spdk/memory.h 00:04:17.162 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:17.162 TEST_HEADER include/spdk/dma.h 00:04:17.162 TEST_HEADER include/spdk/nbd.h 00:04:17.162 TEST_HEADER include/spdk/conf.h 00:04:17.162 LINK mkfs 00:04:17.162 TEST_HEADER include/spdk/env_dpdk.h 00:04:17.162 TEST_HEADER include/spdk/nvmf_spec.h 00:04:17.162 TEST_HEADER include/spdk/iscsi_spec.h 00:04:17.162 TEST_HEADER include/spdk/mmio.h 00:04:17.162 TEST_HEADER include/spdk/json.h 00:04:17.162 TEST_HEADER include/spdk/opal.h 00:04:17.162 CC examples/nvmf/nvmf/nvmf.o 00:04:17.162 TEST_HEADER include/spdk/bdev.h 00:04:17.162 TEST_HEADER include/spdk/base64.h 00:04:17.162 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:17.162 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:17.162 TEST_HEADER include/spdk/fd.h 00:04:17.162 TEST_HEADER include/spdk/barrier.h 00:04:17.162 TEST_HEADER include/spdk/scsi_spec.h 00:04:17.162 TEST_HEADER include/spdk/zipf.h 00:04:17.162 TEST_HEADER include/spdk/nvmf.h 00:04:17.162 TEST_HEADER include/spdk/queue.h 00:04:17.162 TEST_HEADER include/spdk/xor.h 00:04:17.162 TEST_HEADER include/spdk/cpuset.h 00:04:17.162 TEST_HEADER include/spdk/thread.h 00:04:17.162 TEST_HEADER include/spdk/bdev_zone.h 00:04:17.162 TEST_HEADER include/spdk/fd_group.h 00:04:17.162 TEST_HEADER include/spdk/tree.h 00:04:17.162 TEST_HEADER include/spdk/blob_bdev.h 00:04:17.162 TEST_HEADER include/spdk/crc64.h 00:04:17.162 TEST_HEADER include/spdk/assert.h 00:04:17.162 TEST_HEADER include/spdk/nvme_spec.h 00:04:17.421 TEST_HEADER include/spdk/endian.h 00:04:17.421 TEST_HEADER include/spdk/pci_ids.h 00:04:17.421 TEST_HEADER include/spdk/log.h 00:04:17.421 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:17.421 TEST_HEADER include/spdk/ftl.h 00:04:17.421 TEST_HEADER include/spdk/config.h 00:04:17.421 TEST_HEADER include/spdk/vhost.h 00:04:17.421 TEST_HEADER include/spdk/bdev_module.h 00:04:17.421 TEST_HEADER include/spdk/nvme_intel.h 
00:04:17.421 TEST_HEADER include/spdk/idxd_spec.h 00:04:17.421 TEST_HEADER include/spdk/crc16.h 00:04:17.421 TEST_HEADER include/spdk/nvme.h 00:04:17.421 TEST_HEADER include/spdk/stdinc.h 00:04:17.421 TEST_HEADER include/spdk/scsi.h 00:04:17.421 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:17.421 TEST_HEADER include/spdk/idxd.h 00:04:17.421 TEST_HEADER include/spdk/hexlify.h 00:04:17.421 TEST_HEADER include/spdk/reduce.h 00:04:17.421 TEST_HEADER include/spdk/crc32.h 00:04:17.421 TEST_HEADER include/spdk/init.h 00:04:17.421 TEST_HEADER include/spdk/nvmf_transport.h 00:04:17.421 TEST_HEADER include/spdk/nvme_zns.h 00:04:17.421 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:17.421 TEST_HEADER include/spdk/util.h 00:04:17.421 TEST_HEADER include/spdk/jsonrpc.h 00:04:17.421 TEST_HEADER include/spdk/env.h 00:04:17.421 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:17.421 TEST_HEADER include/spdk/lvol.h 00:04:17.421 TEST_HEADER include/spdk/histogram_data.h 00:04:17.421 TEST_HEADER include/spdk/event.h 00:04:17.421 TEST_HEADER include/spdk/trace.h 00:04:17.421 TEST_HEADER include/spdk/ioat_spec.h 00:04:17.421 TEST_HEADER include/spdk/string.h 00:04:17.421 TEST_HEADER include/spdk/ublk.h 00:04:17.421 TEST_HEADER include/spdk/bit_array.h 00:04:17.421 TEST_HEADER include/spdk/scheduler.h 00:04:17.421 TEST_HEADER include/spdk/blob.h 00:04:17.421 TEST_HEADER include/spdk/gpt_spec.h 00:04:17.421 TEST_HEADER include/spdk/sock.h 00:04:17.421 TEST_HEADER include/spdk/vmd.h 00:04:17.421 TEST_HEADER include/spdk/rpc.h 00:04:17.421 CXX test/cpp_headers/accel_module.o 00:04:17.421 CC examples/util/zipf/zipf.o 00:04:17.421 LINK arbitration 00:04:17.679 LINK nvmf 00:04:17.679 LINK zipf 00:04:17.679 CXX test/cpp_headers/bit_pool.o 00:04:17.938 CC app/iscsi_tgt/iscsi_tgt.o 00:04:17.938 CXX test/cpp_headers/ioat.o 00:04:18.196 CXX test/cpp_headers/blobfs.o 00:04:18.196 LINK iscsi_tgt 00:04:18.196 CXX test/cpp_headers/notify.o 00:04:18.196 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.196 CXX test/cpp_headers/pipe.o 00:04:18.455 CC test/dma/test_dma/test_dma.o 00:04:18.455 CXX test/cpp_headers/accel.o 00:04:18.455 CC examples/nvme/hotplug/hotplug.o 00:04:18.455 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.720 CXX test/cpp_headers/file.o 00:04:18.720 LINK nvme_fuzz 00:04:18.720 LINK hotplug 00:04:18.720 LINK mem_callbacks 00:04:18.720 LINK test_dma 00:04:18.977 CXX test/cpp_headers/version.o 00:04:18.977 CXX test/cpp_headers/trace_parser.o 00:04:18.977 CXX test/cpp_headers/opal_spec.o 00:04:19.235 CXX test/cpp_headers/uuid.o 00:04:19.235 CC test/env/vtophys/vtophys.o 00:04:19.493 CXX test/cpp_headers/likely.o 00:04:19.493 LINK vtophys 00:04:19.750 CXX test/cpp_headers/dif.o 00:04:19.750 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:19.750 CXX test/cpp_headers/memory.o 00:04:19.750 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.750 LINK cmb_copy 00:04:20.008 CXX test/cpp_headers/dma.o 00:04:20.008 CC examples/nvme/abort/abort.o 00:04:20.008 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.008 CC test/event/event_perf/event_perf.o 00:04:20.008 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.008 CXX test/cpp_headers/nbd.o 00:04:20.266 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:20.266 CXX test/cpp_headers/conf.o 00:04:20.266 LINK pmr_persistence 00:04:20.266 LINK event_perf 00:04:20.266 LINK env_dpdk_post_init 00:04:20.266 CXX test/cpp_headers/env_dpdk.o 00:04:20.524 LINK abort 00:04:20.524 CXX test/cpp_headers/nvmf_spec.o 00:04:20.782 CXX test/cpp_headers/iscsi_spec.o 00:04:20.782 CXX 
test/cpp_headers/mmio.o 00:04:20.782 CC test/event/reactor/reactor.o 00:04:21.040 CXX test/cpp_headers/json.o 00:04:21.040 LINK reactor 00:04:21.040 CC test/event/reactor_perf/reactor_perf.o 00:04:21.298 CXX test/cpp_headers/opal.o 00:04:21.557 CC test/env/memory/memory_ut.o 00:04:21.557 LINK reactor_perf 00:04:21.557 CC test/event/app_repeat/app_repeat.o 00:04:21.557 CXX test/cpp_headers/bdev.o 00:04:21.557 CC app/spdk_tgt/spdk_tgt.o 00:04:21.557 CC test/env/pci/pci_ut.o 00:04:21.557 LINK app_repeat 00:04:21.816 CXX test/cpp_headers/base64.o 00:04:21.816 LINK spdk_tgt 00:04:22.074 CC examples/thread/thread/thread_ex.o 00:04:22.074 CXX test/cpp_headers/blobfs_bdev.o 00:04:22.074 CC test/lvol/esnap/esnap.o 00:04:22.074 LINK memory_ut 00:04:22.074 LINK pci_ut 00:04:22.074 CC test/nvme/aer/aer.o 00:04:22.074 CC test/nvme/reset/reset.o 00:04:22.332 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.332 LINK iscsi_fuzz 00:04:22.332 LINK thread 00:04:22.332 CC test/event/scheduler/scheduler.o 00:04:22.332 LINK reset 00:04:22.591 CXX test/cpp_headers/fd.o 00:04:22.591 LINK aer 00:04:22.591 CXX test/cpp_headers/barrier.o 00:04:22.591 CC test/rpc_client/rpc_client_test.o 00:04:22.591 LINK scheduler 00:04:22.850 CXX test/cpp_headers/scsi_spec.o 00:04:22.850 CXX test/cpp_headers/zipf.o 00:04:22.850 LINK rpc_client_test 00:04:22.850 CC test/nvme/sgl/sgl.o 00:04:23.183 CXX test/cpp_headers/nvmf.o 00:04:23.183 CXX test/cpp_headers/queue.o 00:04:23.183 LINK sgl 00:04:23.183 CXX test/cpp_headers/xor.o 00:04:23.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:23.442 CXX test/cpp_headers/cpuset.o 00:04:23.442 CXX test/cpp_headers/thread.o 00:04:23.442 CXX test/cpp_headers/bdev_zone.o 00:04:23.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:23.701 CC examples/idxd/perf/perf.o 00:04:23.701 CXX test/cpp_headers/fd_group.o 00:04:23.701 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:23.701 CC test/thread/poller_perf/poller_perf.o 00:04:23.959 CXX test/cpp_headers/tree.o 00:04:23.959 LINK interrupt_tgt 00:04:23.959 CXX test/cpp_headers/blob_bdev.o 00:04:23.959 LINK poller_perf 00:04:23.959 LINK vhost_fuzz 00:04:23.959 LINK idxd_perf 00:04:24.218 CXX test/cpp_headers/crc64.o 00:04:24.218 CXX test/cpp_headers/assert.o 00:04:24.476 CXX test/cpp_headers/nvme_spec.o 00:04:24.476 CC test/nvme/e2edp/nvme_dp.o 00:04:24.735 CXX test/cpp_headers/endian.o 00:04:24.735 CC test/thread/lock/spdk_lock.o 00:04:24.735 CXX test/cpp_headers/pci_ids.o 00:04:24.735 CC test/app/histogram_perf/histogram_perf.o 00:04:24.735 LINK nvme_dp 00:04:24.993 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:24.993 CXX test/cpp_headers/log.o 00:04:24.993 LINK histogram_perf 00:04:24.993 CC test/app/jsoncat/jsoncat.o 00:04:24.993 LINK histogram_ut 00:04:25.252 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:25.252 CC app/spdk_lspci/spdk_lspci.o 00:04:25.252 LINK jsoncat 00:04:25.252 LINK spdk_lspci 00:04:25.252 CXX test/cpp_headers/ftl.o 00:04:25.510 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:25.510 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:25.510 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:25.510 CXX test/cpp_headers/config.o 00:04:25.510 CXX test/cpp_headers/vhost.o 00:04:25.510 CXX test/cpp_headers/bdev_module.o 00:04:25.769 CC app/spdk_nvme_perf/perf.o 00:04:25.769 CXX test/cpp_headers/nvme_intel.o 00:04:25.769 CC test/nvme/overhead/overhead.o 00:04:26.027 CC test/app/stub/stub.o 00:04:26.027 CXX test/cpp_headers/idxd_spec.o 00:04:26.027 LINK blob_bdev_ut 00:04:26.027 LINK stub 00:04:26.285 CXX 
test/cpp_headers/crc16.o 00:04:26.285 LINK overhead 00:04:26.285 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:26.285 CXX test/cpp_headers/nvme.o 00:04:26.285 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:26.544 CXX test/cpp_headers/stdinc.o 00:04:26.544 LINK tree_ut 00:04:26.544 LINK spdk_nvme_perf 00:04:26.802 CXX test/cpp_headers/scsi.o 00:04:26.802 LINK spdk_lock 00:04:26.802 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:27.060 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.319 CC test/nvme/err_injection/err_injection.o 00:04:27.319 CXX test/cpp_headers/idxd.o 00:04:27.319 CXX test/cpp_headers/hexlify.o 00:04:27.577 LINK err_injection 00:04:27.577 CC app/spdk_nvme_discover/discovery_aer.o 00:04:27.577 CC app/spdk_nvme_identify/identify.o 00:04:27.577 CXX test/cpp_headers/reduce.o 00:04:27.577 CXX test/cpp_headers/crc32.o 00:04:27.837 LINK esnap 00:04:27.837 LINK spdk_nvme_discover 00:04:27.837 CXX test/cpp_headers/init.o 00:04:27.837 CC app/spdk_top/spdk_top.o 00:04:27.837 LINK accel_ut 00:04:28.097 CXX test/cpp_headers/nvmf_transport.o 00:04:28.356 CXX test/cpp_headers/nvme_zns.o 00:04:28.356 CC app/vhost/vhost.o 00:04:28.356 CC app/spdk_dd/spdk_dd.o 00:04:28.356 LINK spdk_nvme_identify 00:04:28.618 CC test/nvme/startup/startup.o 00:04:28.618 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.618 LINK blobfs_async_ut 00:04:28.618 LINK vhost 00:04:28.618 LINK startup 00:04:28.618 CXX test/cpp_headers/util.o 00:04:28.879 LINK spdk_dd 00:04:28.879 CXX test/cpp_headers/jsonrpc.o 00:04:28.879 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:28.879 LINK spdk_top 00:04:29.137 CXX test/cpp_headers/env.o 00:04:29.137 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:29.137 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.396 LINK dma_ut 00:04:29.396 CXX test/cpp_headers/lvol.o 00:04:29.655 CC test/unit/lib/event/app.c/app_ut.o 00:04:29.655 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:29.655 CXX test/cpp_headers/histogram_data.o 00:04:29.655 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:29.655 CC test/nvme/reserve/reserve.o 00:04:29.913 CXX test/cpp_headers/event.o 00:04:29.913 LINK reserve 00:04:29.913 CXX test/cpp_headers/trace.o 00:04:30.172 LINK ioat_ut 00:04:30.172 CXX test/cpp_headers/ioat_spec.o 00:04:30.172 CXX test/cpp_headers/string.o 00:04:30.172 LINK app_ut 00:04:30.431 CXX test/cpp_headers/ublk.o 00:04:30.431 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:30.431 LINK blobfs_sync_ut 00:04:30.431 CXX test/cpp_headers/bit_array.o 00:04:30.688 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:30.688 CXX test/cpp_headers/scheduler.o 00:04:30.948 LINK conn_ut 00:04:30.948 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:30.948 CC test/nvme/simple_copy/simple_copy.o 00:04:30.948 CXX test/cpp_headers/blob.o 00:04:31.207 LINK blobfs_bdev_ut 00:04:31.207 LINK bdev_ut 00:04:31.207 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:31.207 CXX test/cpp_headers/gpt_spec.o 00:04:31.207 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:31.207 LINK simple_copy 00:04:31.466 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:31.466 CXX test/cpp_headers/sock.o 00:04:31.466 CC app/fio/nvme/fio_plugin.o 00:04:31.466 LINK reactor_ut 00:04:31.466 CXX test/cpp_headers/vmd.o 00:04:31.466 LINK init_grp_ut 00:04:31.466 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:31.724 CXX test/cpp_headers/rpc.o 00:04:31.724 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:31.724 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:31.984 LINK param_ut 00:04:31.984 CC 
test/unit/lib/json/json_write.c/json_write_ut.o 00:04:31.984 LINK scsi_nvme_ut 00:04:31.984 LINK spdk_nvme 00:04:32.242 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:32.242 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:32.242 CC test/nvme/connect_stress/connect_stress.o 00:04:32.501 LINK json_util_ut 00:04:32.501 LINK connect_stress 00:04:32.501 LINK jsonrpc_server_ut 00:04:32.501 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:32.759 LINK json_write_ut 00:04:32.759 CC app/fio/bdev/fio_plugin.o 00:04:32.759 LINK portal_grp_ut 00:04:33.017 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:33.017 LINK json_parse_ut 00:04:33.017 CC test/unit/lib/log/log.c/log_ut.o 00:04:33.276 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:33.276 LINK spdk_bdev 00:04:33.276 LINK gpt_ut 00:04:33.534 LINK tgt_node_ut 00:04:33.534 LINK log_ut 00:04:33.534 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:33.534 CC test/nvme/boot_partition/boot_partition.o 00:04:33.792 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:33.792 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:33.792 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:33.792 LINK boot_partition 00:04:33.792 LINK iscsi_ut 00:04:34.050 LINK notify_ut 00:04:34.308 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:34.308 LINK blob_ut 00:04:34.873 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:34.873 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:34.873 CC test/nvme/compliance/nvme_compliance.o 00:04:34.873 LINK vbdev_lvol_ut 00:04:35.131 LINK nvme_ut 00:04:35.131 LINK lvol_ut 00:04:35.389 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:35.389 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:35.389 LINK part_ut 00:04:35.389 LINK nvme_compliance 00:04:35.389 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:35.647 LINK nvme_ctrlr_cmd_ut 00:04:35.647 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:35.905 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:36.162 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:36.420 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.420 LINK ctrlr_bdev_ut 00:04:36.420 LINK bdev_raid_sb_ut 00:04:36.678 LINK fused_ordering 00:04:36.678 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.678 CC test/nvme/fdp/fdp.o 00:04:36.937 LINK nvme_ctrlr_ut 00:04:36.937 LINK doorbell_aers 00:04:37.194 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:37.194 LINK fdp 00:04:37.194 LINK ctrlr_discovery_ut 00:04:37.452 LINK subsystem_ut 00:04:37.452 CC test/nvme/cuse/cuse.o 00:04:37.452 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:37.709 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:37.974 LINK bdev_raid_ut 00:04:37.974 LINK ctrlr_ut 00:04:37.974 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:37.974 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:37.974 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:38.233 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:38.233 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:38.233 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:38.233 LINK concat_ut 00:04:38.490 LINK cuse 00:04:38.490 LINK bdev_zone_ut 00:04:38.490 LINK raid1_ut 00:04:38.490 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:38.490 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:38.748 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:38.748 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:38.748 LINK nvmf_ut 00:04:38.748 
LINK tcp_ut 00:04:39.007 LINK nvme_ns_ut 00:04:39.007 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:39.007 LINK raid5f_ut 00:04:39.265 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:39.265 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:39.265 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:39.523 LINK nvme_poll_group_ut 00:04:39.523 LINK bdev_ut 00:04:39.523 LINK nvme_quirks_ut 00:04:39.523 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:39.799 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:39.799 LINK nvme_qpair_ut 00:04:40.080 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:40.080 LINK nvme_ns_ocssd_cmd_ut 00:04:40.080 LINK nvme_ns_cmd_ut 00:04:40.080 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:40.080 LINK nvme_pcie_ut 00:04:40.338 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:40.338 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:40.338 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:40.596 LINK nvme_transport_ut 00:04:40.596 LINK nvme_io_msg_ut 00:04:40.596 LINK vbdev_zone_block_ut 00:04:40.855 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:40.855 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:40.855 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:41.113 LINK nvme_fabric_ut 00:04:41.113 LINK nvme_opal_ut 00:04:41.113 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:41.372 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:41.372 LINK nvme_pcie_common_ut 00:04:41.372 LINK scsi_ut 00:04:41.372 LINK dev_ut 00:04:41.631 LINK nvme_tcp_ut 00:04:41.631 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:41.631 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:41.631 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:41.889 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:41.889 LINK lun_ut 00:04:42.148 LINK nvme_cuse_ut 00:04:42.406 LINK transport_ut 00:04:42.406 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:42.406 LINK nvme_rdma_ut 00:04:42.406 LINK scsi_pr_ut 00:04:42.406 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:42.664 LINK rdma_ut 00:04:42.664 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:42.664 LINK scsi_bdev_ut 00:04:42.664 LINK posix_ut 00:04:42.664 LINK base64_ut 00:04:42.664 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:42.664 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:42.922 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:42.922 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:42.922 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:42.922 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:42.922 LINK pci_event_ut 00:04:43.179 LINK sock_ut 00:04:43.179 LINK bit_array_ut 00:04:43.179 LINK cpuset_ut 00:04:43.179 LINK subsystem_ut 00:04:43.179 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:43.179 LINK rpc_ut 00:04:43.438 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:43.438 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:43.438 LINK crc16_ut 00:04:43.438 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:43.438 LINK idxd_user_ut 00:04:43.438 LINK crc32_ieee_ut 00:04:43.438 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:43.438 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:43.697 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:43.697 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:43.697 LINK crc32c_ut 00:04:43.697 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:43.697 LINK iobuf_ut 00:04:43.697 LINK crc64_ut 00:04:43.956 LINK ftl_l2p_ut 00:04:43.956 LINK common_ut 00:04:43.956 CC 
test/unit/lib/util/dif.c/dif_ut.o 00:04:43.956 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:43.956 CC test/unit/lib/util/math.c/math_ut.o 00:04:43.956 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:44.215 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:44.215 LINK math_ut 00:04:44.215 LINK iov_ut 00:04:44.215 LINK ftl_bitmap_ut 00:04:44.215 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:44.474 LINK idxd_ut 00:04:44.474 CC test/unit/lib/util/string.c/string_ut.o 00:04:44.474 LINK thread_ut 00:04:44.474 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:44.733 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:44.733 LINK ftl_io_ut 00:04:44.733 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:44.733 LINK ftl_band_ut 00:04:44.733 LINK pipe_ut 00:04:44.733 LINK string_ut 00:04:44.992 LINK ftl_mempool_ut 00:04:44.992 LINK bdev_nvme_ut 00:04:44.992 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:44.992 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:44.992 LINK dif_ut 00:04:44.992 LINK xor_ut 00:04:45.251 LINK vhost_ut 00:04:45.251 LINK ftl_mngt_ut 00:04:46.188 LINK ftl_layout_upgrade_ut 00:04:46.188 LINK ftl_sb_ut 00:04:46.447 ************************************ 00:04:46.447 END TEST unittest_build 00:04:46.447 ************************************ 00:04:46.447 00:04:46.447 real 1m38.699s 00:04:46.447 user 8m19.757s 00:04:46.447 sys 1m29.049s 00:04:46.447 02:28:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:46.447 02:28:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.447 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:04:46.447 02:28:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.447 02:28:11 -- nvmf/common.sh@7 -- # uname -s 00:04:46.447 02:28:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.447 02:28:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.447 02:28:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.447 02:28:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.447 02:28:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.447 02:28:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.447 02:28:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.447 02:28:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.447 02:28:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.447 02:28:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.447 02:28:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9308b2dd-9d23-4896-beac-0cb0b86dea6d 00:04:46.447 02:28:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=9308b2dd-9d23-4896-beac-0cb0b86dea6d 00:04:46.447 02:28:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.447 02:28:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.447 02:28:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.447 02:28:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.447 02:28:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.447 02:28:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.447 02:28:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.447 02:28:11 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:46.447 02:28:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:46.447 02:28:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:46.447 02:28:11 -- paths/export.sh@5 -- # export PATH 00:04:46.447 02:28:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:46.447 02:28:11 -- nvmf/common.sh@46 -- # : 0 00:04:46.447 02:28:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:46.447 02:28:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:46.447 02:28:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:46.447 02:28:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.447 02:28:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.447 02:28:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:46.447 02:28:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:46.447 02:28:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:46.447 02:28:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:46.447 02:28:11 -- spdk/autotest.sh@32 -- # uname -s 00:04:46.447 02:28:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:46.447 02:28:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:46.447 02:28:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.447 02:28:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:46.447 02:28:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.447 02:28:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:47.015 02:28:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:47.015 02:28:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:47.015 02:28:11 -- spdk/autotest.sh@48 -- # udevadm_pid=105499 00:04:47.015 02:28:11 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:47.015 02:28:11 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:47.015 02:28:12 -- spdk/autotest.sh@54 -- # echo 105541 00:04:47.015 02:28:12 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:47.015 02:28:12 -- spdk/autotest.sh@56 -- # echo 105562 00:04:47.015 02:28:12 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:47.015 02:28:12 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:47.015 02:28:12 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' 
SIGINT SIGTERM EXIT 00:04:47.015 02:28:12 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:47.015 02:28:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:47.015 02:28:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.015 02:28:12 -- spdk/autotest.sh@70 -- # create_test_list 00:04:47.015 02:28:12 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:47.015 02:28:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.015 02:28:12 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:47.015 02:28:12 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:47.015 02:28:12 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:47.015 02:28:12 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:47.015 02:28:12 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:47.015 02:28:12 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:47.015 02:28:12 -- common/autotest_common.sh@1440 -- # uname 00:04:47.015 02:28:12 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:47.015 02:28:12 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:47.015 02:28:12 -- common/autotest_common.sh@1460 -- # uname 00:04:47.015 02:28:12 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:47.015 02:28:12 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:47.015 02:28:12 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:47.015 02:28:12 -- spdk/autotest.sh@83 -- # hash lcov 00:04:47.015 02:28:12 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:47.015 02:28:12 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:47.015 --rc lcov_branch_coverage=1 00:04:47.015 --rc lcov_function_coverage=1 00:04:47.015 --rc genhtml_branch_coverage=1 00:04:47.015 --rc genhtml_function_coverage=1 00:04:47.015 --rc genhtml_legend=1 00:04:47.015 --rc geninfo_all_blocks=1 00:04:47.015 ' 00:04:47.015 02:28:12 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:47.015 --rc lcov_branch_coverage=1 00:04:47.015 --rc lcov_function_coverage=1 00:04:47.015 --rc genhtml_branch_coverage=1 00:04:47.015 --rc genhtml_function_coverage=1 00:04:47.015 --rc genhtml_legend=1 00:04:47.015 --rc geninfo_all_blocks=1 00:04:47.015 ' 00:04:47.015 02:28:12 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:47.015 --rc lcov_branch_coverage=1 00:04:47.015 --rc lcov_function_coverage=1 00:04:47.015 --rc genhtml_branch_coverage=1 00:04:47.015 --rc genhtml_function_coverage=1 00:04:47.015 --rc genhtml_legend=1 00:04:47.015 --rc geninfo_all_blocks=1 00:04:47.015 --no-external' 00:04:47.015 02:28:12 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:47.015 --rc lcov_branch_coverage=1 00:04:47.015 --rc lcov_function_coverage=1 00:04:47.015 --rc genhtml_branch_coverage=1 00:04:47.015 --rc genhtml_function_coverage=1 00:04:47.015 --rc genhtml_legend=1 00:04:47.015 --rc geninfo_all_blocks=1 00:04:47.015 --no-external' 00:04:47.015 02:28:12 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:47.273 lcov: LCOV version 1.15 00:04:47.273 02:28:12 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o 
/home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:49.176 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:49.176 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:49.176 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:49.177 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:49.177 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:49.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:49.435 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:49.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:49.435 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:49.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:49.435 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:49.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:49.435 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:49.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:49.436 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:49.436 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:49.436 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:36.099 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:36.099 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:36.099 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:36.099 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:36.099 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:36.099 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:36.099 02:28:58 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:36.099 02:28:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.099 02:28:58 -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 02:28:58 -- spdk/autotest.sh@102 -- # rm -f 00:05:36.099 02:28:58 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:36.099 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:36.099 02:28:58 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:36.099 02:28:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:36.099 02:28:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:36.099 02:28:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:36.099 02:28:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:36.099 02:28:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:36.099 02:28:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:36.099 02:28:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:36.099 02:28:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:36.099 02:28:58 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:36.099 02:28:58 -- spdk/autotest.sh@121 -- # grep -v p 00:05:36.099 02:28:58 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:36.099 02:28:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:36.099 02:28:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:36.099 02:28:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:36.099 02:28:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:36.099 02:28:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:36.099 No valid GPT data, bailing 00:05:36.099 02:28:58 -- scripts/common.sh@393 -- # blkid -s 
PTTYPE -o value /dev/nvme0n1 00:05:36.099 02:28:58 -- scripts/common.sh@393 -- # pt= 00:05:36.099 02:28:58 -- scripts/common.sh@394 -- # return 1 00:05:36.099 02:28:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:36.099 1+0 records in 00:05:36.099 1+0 records out 00:05:36.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177798 s, 59.0 MB/s 00:05:36.099 02:28:58 -- spdk/autotest.sh@129 -- # sync 00:05:36.099 02:28:58 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:36.099 02:28:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:36.099 02:28:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.099 02:29:00 -- spdk/autotest.sh@135 -- # uname -s 00:05:36.099 02:29:00 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:36.099 02:29:00 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:36.099 02:29:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.099 02:29:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.099 02:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 ************************************ 00:05:36.099 START TEST setup.sh 00:05:36.099 ************************************ 00:05:36.099 02:29:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:36.099 * Looking for test storage... 00:05:36.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.099 02:29:00 -- setup/test-setup.sh@10 -- # uname -s 00:05:36.099 02:29:00 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:36.099 02:29:00 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:36.099 02:29:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.099 02:29:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.099 02:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 ************************************ 00:05:36.099 START TEST acl 00:05:36.099 ************************************ 00:05:36.099 02:29:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:36.099 * Looking for test storage... 
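For readers following the trace: the pre-cleanup above only wipes a device after both spdk-gpt.py and blkid fail to find a partition table ("No valid GPT data, bailing"), and the wipe itself is just a 1 MiB zero-fill plus sync. A minimal sketch of that probe-then-wipe flow, built from the exact commands visible in the trace; the wrapper name wipe_if_unpartitioned is hypothetical, added here only for illustration:

    # Hypothetical condensation of the block_in_use check + dd wipe traced above.
    wipe_if_unpartitioned() {
        local dev=$1 pt
        # blkid prints a partition-table type (e.g. gpt, dos) only when one
        # exists; otherwise it prints nothing and exits non-zero.
        pt=$(blkid -s PTTYPE -o value "$dev") || true
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # zero stale metadata in the first MiB (needs root)
            sync                                      # flush before the next test stage
        fi
    }
    wipe_if_unpartitioned /dev/nvme0n1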
00:05:36.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.099 02:29:00 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:36.099 02:29:00 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:36.099 02:29:00 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:36.099 02:29:00 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:36.099 02:29:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:36.099 02:29:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:36.099 02:29:00 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:36.099 02:29:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:36.099 02:29:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:36.099 02:29:00 -- setup/acl.sh@12 -- # devs=() 00:05:36.099 02:29:00 -- setup/acl.sh@12 -- # declare -a devs 00:05:36.099 02:29:00 -- setup/acl.sh@13 -- # drivers=() 00:05:36.099 02:29:00 -- setup/acl.sh@13 -- # declare -A drivers 00:05:36.099 02:29:00 -- setup/acl.sh@51 -- # setup reset 00:05:36.099 02:29:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.099 02:29:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.099 02:29:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:36.099 02:29:00 -- setup/acl.sh@16 -- # local dev driver 00:05:36.099 02:29:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.099 02:29:00 -- setup/acl.sh@15 -- # setup output status 00:05:36.099 02:29:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.099 02:29:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:36.099 Hugepages 00:05:36.099 node hugesize free / total 00:05:36.099 02:29:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.099 02:29:00 -- setup/acl.sh@19 -- # continue 00:05:36.099 02:29:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.099 00:05:36.099 02:29:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.099 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.099 02:29:00 -- setup/acl.sh@19 -- # continue 00:05:36.099 02:29:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.099 02:29:00 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:36.099 02:29:00 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:36.099 02:29:00 -- setup/acl.sh@20 -- # continue 00:05:36.099 02:29:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.099 02:29:01 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:36.099 02:29:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:36.099 02:29:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:36.099 02:29:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:36.099 02:29:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:36.099 02:29:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.099 02:29:01 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:36.099 02:29:01 -- setup/acl.sh@54 -- # run_test denied denied 00:05:36.099 02:29:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.099 02:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.099 02:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 ************************************ 00:05:36.099 START TEST denied 00:05:36.099 ************************************ 00:05:36.099 02:29:01 -- common/autotest_common.sh@1104 -- # denied 00:05:36.099 02:29:01 -- 
setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:36.099 02:29:01 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:36.099 02:29:01 -- setup/acl.sh@38 -- # setup output config 00:05:36.099 02:29:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.099 02:29:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.475 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:37.475 02:29:02 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:37.475 02:29:02 -- setup/acl.sh@28 -- # local dev driver 00:05:37.475 02:29:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:37.475 02:29:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:37.475 02:29:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:37.475 02:29:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:37.475 02:29:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:37.475 02:29:02 -- setup/acl.sh@41 -- # setup reset 00:05:37.475 02:29:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.475 02:29:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.042 ************************************ 00:05:38.042 END TEST denied 00:05:38.042 ************************************ 00:05:38.042 00:05:38.042 real 0m1.837s 00:05:38.042 user 0m0.511s 00:05:38.042 sys 0m1.372s 00:05:38.042 02:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.042 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.042 02:29:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:38.042 02:29:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.042 02:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.042 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.042 ************************************ 00:05:38.042 START TEST allowed 00:05:38.042 ************************************ 00:05:38.042 02:29:02 -- common/autotest_common.sh@1104 -- # allowed 00:05:38.042 02:29:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:38.042 02:29:02 -- setup/acl.sh@45 -- # setup output config 00:05:38.042 02:29:02 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:38.042 02:29:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.042 02:29:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:39.432 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.432 02:29:04 -- setup/acl.sh@47 -- # verify 00:05:39.432 02:29:04 -- setup/acl.sh@28 -- # local dev driver 00:05:39.432 02:29:04 -- setup/acl.sh@48 -- # setup reset 00:05:39.432 02:29:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.432 02:29:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.000 00:05:40.000 real 0m1.975s 00:05:40.000 user 0m0.500s 00:05:40.000 sys 0m1.432s 00:05:40.000 02:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.000 02:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.000 ************************************ 00:05:40.000 END TEST allowed 00:05:40.000 ************************************ 00:05:40.000 00:05:40.000 real 0m4.726s 00:05:40.000 user 0m1.538s 00:05:40.000 sys 0m3.226s 00:05:40.000 ************************************ 00:05:40.000 END TEST acl 00:05:40.000 ************************************ 00:05:40.000 02:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.000 
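The denied and allowed tests above drive scripts/setup.sh entirely through its PCI filter environment variables and then grep the config output. A condensed sketch of that pattern, using the BDF and grep patterns exactly as they appear in this trace (paths shortened; this mirrors the test's assertions rather than reproducing acl.sh verbatim):

    # denied: a blocked controller must be reported as skipped.
    PCI_BLOCKED=' 0000:00:06.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:06.0'

    # allowed: when only this controller is allowed, setup.sh should rebind it
    # away from the kernel nvme driver (to uio_pci_generic on this VM).
    PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config \
        | grep -E '0000:00:06.0 .*: nvme -> .*'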
02:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.000 02:29:04 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:40.000 02:29:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.000 02:29:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.000 02:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.000 ************************************ 00:05:40.000 START TEST hugepages 00:05:40.000 ************************************ 00:05:40.000 02:29:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:40.000 * Looking for test storage... 00:05:40.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:40.001 02:29:05 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:40.001 02:29:05 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:40.001 02:29:05 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:40.001 02:29:05 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:40.001 02:29:05 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:40.001 02:29:05 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:40.001 02:29:05 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:40.001 02:29:05 -- setup/common.sh@18 -- # local node= 00:05:40.001 02:29:05 -- setup/common.sh@19 -- # local var val 00:05:40.001 02:29:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.001 02:29:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.001 02:29:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.001 02:29:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.001 02:29:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.001 02:29:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.001 02:29:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.001 02:29:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.001 02:29:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 1602296 kB' 'MemAvailable: 7411772 kB' 'Buffers: 42092 kB' 'Cached: 5836296 kB' 'SwapCached: 0 kB' 'Active: 1894172 kB' 'Inactive: 4107204 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 1796 kB' 'Active(file): 1762160 kB' 'Inactive(file): 4105408 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 141404 kB' 'Mapped: 73328 kB' 'Shmem: 2620 kB' 'KReclaimable: 263980 kB' 'Slab: 355148 kB' 'SReclaimable: 263980 kB' 'SUnreclaim: 91168 kB' 'KernelStack: 4660 kB' 'PageTables: 3924 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028392 kB' 'Committed_AS: 624364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:40.001 02:29:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:40.001 02:29:05 -- setup/common.sh@32 -- # continue 00:05:40.001 02:29:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.001 02:29:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.001 02:29:05 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:40.001 02:29:05 -- setup/common.sh@32 -- # continue [xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue sequence repeats for every remaining /proc/meminfo field listed in the printf above] 00:05:40.261 02:29:05 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:40.261 02:29:05 -- setup/common.sh@33 -- # echo 2048 00:05:40.261 02:29:05 -- setup/common.sh@33 -- # return 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:40.261 02:29:05 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:40.261 02:29:05 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:40.261 02:29:05 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:40.261 02:29:05 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:40.261 02:29:05 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:40.261 02:29:05 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:40.261 02:29:05 -- setup/hugepages.sh@207 -- # get_nodes 00:05:40.261 02:29:05 -- setup/hugepages.sh@27 -- # local node 00:05:40.261 02:29:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.261 02:29:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:40.261 02:29:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.261 02:29:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
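The get_meminfo scan condensed above and the get_nodes walk reduce to two small loops: read /proc/meminfo until the requested key matches (here Hugepagesize -> 2048), and count one entry per /sys NUMA node directory. A self-contained approximation, assuming a direct read of /proc/meminfo instead of the mapfile indirection the trace shows:

    # Approximation of setup/common.sh's get_meminfo: print one field's value.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching field is skipped -- the long [[ ... ]]/continue
            # run in the trace is exactly this loop doing nothing.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # Approximation of get_nodes: one hugepage pool per NUMA node directory.
    no_nodes=0
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] && no_nodes=$((no_nodes + 1))
    done
    echo "nodes=$no_nodes hugepagesize_kB=$(get_meminfo Hugepagesize)"   # -> nodes=1 hugepagesize_kB=2048 on this VM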
00:05:40.261 02:29:05 -- setup/hugepages.sh@208 -- # clear_hp 00:05:40.261 02:29:05 -- setup/hugepages.sh@37 -- # local node hp 00:05:40.261 02:29:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:40.261 02:29:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.261 02:29:05 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.261 02:29:05 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:40.261 02:29:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:40.261 02:29:05 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:40.261 02:29:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.261 02:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.261 02:29:05 -- common/autotest_common.sh@10 -- # set +x 00:05:40.261 ************************************ 00:05:40.261 START TEST default_setup 00:05:40.261 ************************************ 00:05:40.261 02:29:05 -- common/autotest_common.sh@1104 -- # default_setup 00:05:40.261 02:29:05 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:40.261 02:29:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:40.261 02:29:05 -- setup/hugepages.sh@51 -- # shift 00:05:40.261 02:29:05 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:40.261 02:29:05 -- setup/hugepages.sh@52 -- # local node_ids 00:05:40.261 02:29:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:40.261 02:29:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:40.261 02:29:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:40.261 02:29:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.261 02:29:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:40.261 02:29:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:40.261 02:29:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.261 02:29:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.261 02:29:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:40.261 02:29:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:40.261 02:29:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:40.261 02:29:05 -- setup/hugepages.sh@73 -- # return 0 00:05:40.261 02:29:05 -- setup/hugepages.sh@137 -- # setup output 00:05:40.261 02:29:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.261 02:29:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:40.520 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.461 02:29:06 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:41.461 02:29:06 -- setup/hugepages.sh@89 -- # local node 00:05:41.461 02:29:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:41.461 02:29:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:41.461 02:29:06 -- setup/hugepages.sh@92 -- # local surp 00:05:41.461 02:29:06 -- setup/hugepages.sh@93 -- # local resv 00:05:41.461 02:29:06 -- setup/hugepages.sh@94 -- # local anon 00:05:41.461 02:29:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:05:41.461 02:29:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:41.461 02:29:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:41.461 02:29:06 -- setup/common.sh@18 -- # local node= 00:05:41.461 02:29:06 -- setup/common.sh@19 -- # local var val 00:05:41.461 02:29:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.461 02:29:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.461 02:29:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.461 02:29:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.461 02:29:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.461 02:29:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3696168 kB' 'MemAvailable: 9505708 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1900812 kB' 'Inactive: 4107044 kB' 'Active(anon): 138436 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762376 kB' 'Inactive(file): 4105252 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147548 kB' 'Mapped: 73340 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354956 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90972 kB' 'KernelStack: 4560 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 633760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.461 02:29:06 -- setup/common.sh@32 -- # continue [xtrace condensed: the per-field scan repeats, skipping every /proc/meminfo field until AnonHugePages matches] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.462 02:29:06 -- setup/common.sh@33 -- # echo 0 00:05:41.462 02:29:06 -- setup/common.sh@33 -- # return 0 00:05:41.462 02:29:06 -- setup/hugepages.sh@97 -- # anon=0 00:05:41.462 02:29:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.462 02:29:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.462 02:29:06 -- setup/common.sh@18 -- # local node= 00:05:41.462 02:29:06 -- setup/common.sh@19 -- # local var val 00:05:41.462 02:29:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.462 02:29:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.462 02:29:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.462 02:29:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.462 02:29:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.462 02:29:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3696428 kB' 'MemAvailable: 9505968 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901072 kB' 'Inactive: 4107044 kB' 'Active(anon): 138696 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762376 kB' 'Inactive(file): 4105252 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147808 kB' 'Mapped: 73340 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354956 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90972 kB' 'KernelStack: 4560 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 633760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 02:29:06 -- setup/common.sh@32 -- # continue 00:05:41.462 02:29:06 -- setup/common.sh@31 
-- # IFS=': '
00:05:41.462 02:29:06 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 emits one "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pair, plus the @31 IFS/read records, for every remaining /proc/meminfo field until the requested one is reached]
00:05:41.463 02:29:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.463 02:29:06 -- setup/common.sh@33 -- # echo 0
00:05:41.463 02:29:06 -- setup/common.sh@33 -- # return 0
00:05:41.463 02:29:06 -- setup/hugepages.sh@99 -- # surp=0
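The records above are setup/common.sh's get_meminfo pulling the requested field (HugePages_Surp) out of /proc/meminfo: IFS=': ' splits each snapshot line into a key and a value, and every non-matching key trips the "continue" traced once per field. A minimal standalone sketch of that lookup pattern, assuming a plain read loop rather than the mapfile/printf round-trip the script actually traces:

# get_meminfo_value KEY [FILE] -- print the value recorded for KEY in a
# meminfo-style file, mirroring the [[ key == target ]] / continue scan above.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field
        echo "$val"
        return 0
    done < "$mem_f"
    return 1                               # field not present
}

get_meminfo_value HugePages_Surp   # prints 0 on the machine traced here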
00:05:41.463 02:29:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.463 02:29:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.463 02:29:06 -- setup/common.sh@18 -- # local node=
00:05:41.463 02:29:06 -- setup/common.sh@19 -- # local var val
00:05:41.463 02:29:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.463 02:29:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.463 02:29:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.463 02:29:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.463 02:29:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.463 02:29:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.463 02:29:06 -- setup/common.sh@31 -- # IFS=': '
00:05:41.463 02:29:06 -- setup/common.sh@31 -- # read -r var val _
00:05:41.463 02:29:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3696916 kB' 'MemAvailable: 9506456 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1900644 kB' 'Inactive: 4107044 kB' 'Active(anon): 138268 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762376 kB' 'Inactive(file): 4105252 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147592 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354964 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90980 kB' 'KernelStack: 4560 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 633760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" scan over every field of the snapshot above]
00:05:41.464 02:29:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.464 02:29:06 -- setup/common.sh@33 -- # echo 0
00:05:41.464 02:29:06 -- setup/common.sh@33 -- # return 0
00:05:41.464 02:29:06 -- setup/hugepages.sh@100 -- # resv=0
00:05:41.464 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
02:29:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
02:29:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
02:29:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
02:29:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
02:29:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
02:29:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
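With surp and resv both 0 collected, hugepages.sh checks the pool's arithmetic before relying on it: the (( 1024 == nr_hugepages + surp + resv )) records assert that the kernel's counters add up to the 1024 pages the test requested. A hedged sketch of that invariant, reusing the hypothetical get_meminfo_value helper from the earlier sketch (not the project's exact verify logic):

# Fail if the kernel's hugepage counters do not account for the requested pool.
check_hugepage_accounting() {
    local nr_hugepages=$1 total surp resv
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) ||
        { echo "mismatch: total=$total surp=$surp resv=$resv" >&2; return 1; }
}

check_hugepage_accounting 1024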
02:29:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
02:29:06 -- setup/common.sh@17 -- # local get=HugePages_Total
[xtrace elided: same get_meminfo prologue as above -- locals, mem_f=/proc/meminfo, mapfile, node-prefix strip]
00:05:41.465 02:29:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3696916 kB' 'MemAvailable: 9506460 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1900952 kB' 'Inactive: 4107048 kB' 'Active(anon): 138576 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762376 kB' 'Inactive(file): 4105256 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147896 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354964 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90980 kB' 'KernelStack: 4612 kB' 'PageTables: 3700 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 638676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" scan over every field of the snapshot above]
00:05:41.466 02:29:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.466 02:29:06 -- setup/common.sh@33 -- # echo 1024
00:05:41.466 02:29:06 -- setup/common.sh@33 -- # return 0
00:05:41.466 02:29:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.466 02:29:06 -- setup/hugepages.sh@112 -- # get_nodes
00:05:41.466 02:29:06 -- setup/hugepages.sh@27 -- # local node
00:05:41.466 02:29:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.466 02:29:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:41.466 02:29:06 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:41.466 02:29:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
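get_nodes found a single NUMA node, so the per-node pass re-runs get_meminfo with node=0, which is what flips mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the [[ -e ... ]] / mem_f= records below). Per-node meminfo lines carry a "Node 0 " prefix, which the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that source selection under the same extglob assumption:

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

# Print KEY's value globally, or for one NUMA node when NODE is given.
get_node_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix per-node files add
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_node_meminfo HugePages_Surp 0   # node-0 surplus; 0 in this run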
00:05:41.466 02:29:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:41.466 02:29:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.466 02:29:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:41.466 02:29:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.466 02:29:06 -- setup/common.sh@18 -- # local node=0
00:05:41.466 02:29:06 -- setup/common.sh@19 -- # local var val
00:05:41.466 02:29:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.466 02:29:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.466 02:29:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.466 02:29:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.466 02:29:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.466 02:29:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.466 02:29:06 -- setup/common.sh@31 -- # IFS=': '
00:05:41.466 02:29:06 -- setup/common.sh@31 -- # read -r var val _
00:05:41.466 02:29:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3696948 kB' 'MemUsed: 8554140 kB' 'Active: 1900652 kB' 'Inactive: 4107048 kB' 'Active(anon): 138276 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762376 kB' 'Inactive(file): 4105256 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5878448 kB' 'Mapped: 73068 kB' 'AnonPages: 147796 kB' 'Shmem: 2616 kB' 'KernelStack: 4548 kB' 'PageTables: 3604 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 354964 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" scan over every field of the node-0 snapshot above]
00:05:41.467 02:29:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.467 02:29:06 -- setup/common.sh@33 -- # echo 0
00:05:41.467 02:29:06 -- setup/common.sh@33 -- # return 0
00:05:41.467 02:29:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
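Node 0 reports no surplus, so nodes_test[0] keeps the full 1024 pages and the loop that follows only has to fold each node's count into the sorted_t/sorted_s sets before the "node0=1024 expecting 1024" assertion. A compact sketch of that per-node tally under this log's single-node layout, reusing the hypothetical get_node_meminfo helper above:

shopt -s extglob
declare -A nodes_test=()

# Collect each node's hugepage total, then check it against the expectation.
for node_dir in /sys/devices/system/node/node+([0-9]); do
    n=${node_dir##*node}
    nodes_test[$n]=$(get_node_meminfo HugePages_Total "$n")
done

expected=1024   # what default_setup asked for on this one-node VM
for n in "${!nodes_test[@]}"; do
    echo "node$n=${nodes_test[$n]} expecting $expected"
    [[ ${nodes_test[$n]} == "$expected" ]] || exit 1
done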
00:05:41.467 02:29:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.467 02:29:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.467 02:29:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.467 node0=1024 expecting 1024
00:05:41.467 02:29:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:41.467 02:29:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.467
00:05:41.467 real 0m1.344s
00:05:41.467 user 0m0.289s
00:05:41.467 sys 0m1.020s
00:05:41.467 02:29:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.467 02:29:06 -- common/autotest_common.sh@10 -- # set +x
00:05:41.467 ************************************
00:05:41.467 END TEST default_setup
00:05:41.467 ************************************
00:05:41.467 02:29:06 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:41.467 02:29:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:41.467 02:29:06 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:41.467 02:29:06 -- common/autotest_common.sh@10 -- # set +x
00:05:41.467 ************************************
00:05:41.467 START TEST per_node_1G_alloc
00:05:41.467 ************************************
00:05:41.467 02:29:06 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:41.467 02:29:06 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:41.467 02:29:06 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:41.467 02:29:06 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:41.467 02:29:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:41.467 02:29:06 -- setup/hugepages.sh@51 -- # shift
00:05:41.467 02:29:06 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:05:41.467 02:29:06 -- setup/hugepages.sh@52 -- # local node_ids
00:05:41.467 02:29:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.467 02:29:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:41.467 02:29:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:41.467 02:29:06 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:41.467 02:29:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.467 02:29:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:41.467 02:29:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:41.467 02:29:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.467 02:29:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.467 02:29:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:41.467 02:29:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:41.467 02:29:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:41.467 02:29:06 -- setup/hugepages.sh@73 -- # return 0
00:05:41.467 02:29:06 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:41.467 02:29:06 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:41.467 02:29:06 -- setup/hugepages.sh@146 -- # setup output
00:05:41.467 02:29:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.467 02:29:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:41.727 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
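per_node_1G_alloc converts its 1 GiB request into 512 two-megabyte pages pinned to node 0 (NRHUGE=512, HUGENODE=0) and hands off to scripts/setup.sh, whose device-binding output appears above. The kernel interface such a per-node reservation ultimately goes through is the node-local nr_hugepages file; a hedged sketch of the equivalent manual request (setup.sh's own internals are not shown in this log):

# Reserve 512 x 2 MiB hugepages on NUMA node 0 via sysfs (root required).
NRHUGE=512 HUGENODE=0
sysfs_nr=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages
echo "$NRHUGE" > "$sysfs_nr"
cat "$sysfs_nr"   # re-read to confirm the kernel granted the full count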
00:05:42.300 02:29:07 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:42.300 02:29:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:42.300 02:29:07 -- setup/hugepages.sh@89 -- # local node
00:05:42.300 02:29:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:42.300 02:29:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:42.300 02:29:07 -- setup/hugepages.sh@92 -- # local surp
00:05:42.300 02:29:07 -- setup/hugepages.sh@93 -- # local resv
00:05:42.300 02:29:07 -- setup/hugepages.sh@94 -- # local anon
00:05:42.300 02:29:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:42.300 02:29:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace elided: same get_meminfo prologue as above, reading /proc/meminfo]
00:05:42.300 02:29:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4744096 kB' 'MemAvailable: 10553640 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901212 kB' 'Inactive: 4107032 kB' 'Active(anon): 138820 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147588 kB' 'Mapped: 73576 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355072 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91088 kB' 'KernelStack: 4544 kB' 'PageTables: 3320 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 641432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" scan over every field of the snapshot above]
00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:42.301 02:29:07 -- setup/common.sh@33 -- # echo 0
00:05:42.301 02:29:07 -- setup/common.sh@33 -- # return 0
00:05:42.301 02:29:07 -- setup/hugepages.sh@97 -- # anon=0
648 kB' 'Writeback: 0 kB' 'AnonPages: 147792 kB' 'Mapped: 73424 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355080 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91096 kB' 'KernelStack: 4540 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 641432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 
02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.301 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.301 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # 
continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.302 02:29:07 -- setup/common.sh@33 -- # echo 0 00:05:42.302 02:29:07 
-- setup/common.sh@33 -- # return 0 00:05:42.302 02:29:07 -- setup/hugepages.sh@99 -- # surp=0 00:05:42.302 02:29:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.302 02:29:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.302 02:29:07 -- setup/common.sh@18 -- # local node= 00:05:42.302 02:29:07 -- setup/common.sh@19 -- # local var val 00:05:42.302 02:29:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.302 02:29:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.302 02:29:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.302 02:29:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.302 02:29:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.302 02:29:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4744128 kB' 'MemAvailable: 10553672 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901380 kB' 'Inactive: 4107032 kB' 'Active(anon): 138988 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 148272 kB' 'Mapped: 73068 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354912 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90928 kB' 'KernelStack: 4536 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 641432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.302 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.302 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 
02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.303 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.303 02:29:07 -- setup/common.sh@33 -- # echo 0 00:05:42.303 02:29:07 -- setup/common.sh@33 -- # return 0 00:05:42.303 02:29:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:42.303 nr_hugepages=512 00:05:42.303 resv_hugepages=0 00:05:42.303 02:29:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:42.303 02:29:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:42.303 surplus_hugepages=0 00:05:42.303 02:29:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:42.303 anon_hugepages=0 00:05:42.303 02:29:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:42.303 02:29:07 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:42.303 02:29:07 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:42.303 02:29:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:42.303 02:29:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:42.303 02:29:07 -- setup/common.sh@18 -- # local node= 00:05:42.303 02:29:07 -- setup/common.sh@19 -- # local var val 00:05:42.303 02:29:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.303 02:29:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.303 02:29:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.303 02:29:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.303 02:29:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.303 02:29:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.303 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4744608 kB' 'MemAvailable: 10554152 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1900852 kB' 'Inactive: 4107032 kB' 'Active(anon): 138460 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147740 kB' 'Mapped: 73068 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354968 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90984 kB' 'KernelStack: 4580 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 641684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 
02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 
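The trace above and below is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time: mapfile reads the file into an array, any "Node N " prefix is stripped so per-node meminfo files parse the same way, and each line is split on IFS=': ' into a key and a value until the requested key matches, at which point the value is echoed (or 0 if nothing matches). A minimal standalone sketch of that parsing pattern follows; the function and variable names mirror the traced script, but the condensed body is an assumption, not the verbatim setup/common.sh code.

shopt -s extglob                                  # needed for the +([0-9]) pattern below
get_meminfo() {                                   # e.g. get_meminfo HugePages_Surp 0
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local mem var val _ line
    # With a node argument, prefer the per-node meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # drop "Node 0 " prefixes
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # split "Key:   value kB"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                      # value only, units dropped
            return 0
        fi
    done
    echo 0                                        # key absent -> 0, as traced
}
get_meminfo HugePages_Total                       # prints 512 on this run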
00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.304 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.304 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- 
# continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.305 02:29:07 -- setup/common.sh@33 -- # echo 512 00:05:42.305 02:29:07 -- setup/common.sh@33 -- # return 0 00:05:42.305 02:29:07 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:42.305 02:29:07 -- setup/hugepages.sh@112 -- # get_nodes 00:05:42.305 02:29:07 -- setup/hugepages.sh@27 -- # local node 00:05:42.305 02:29:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.305 02:29:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:42.305 02:29:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.305 02:29:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.305 02:29:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:42.305 02:29:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:42.305 02:29:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:42.305 02:29:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.305 02:29:07 -- setup/common.sh@18 -- # local node=0 00:05:42.305 02:29:07 -- setup/common.sh@19 -- # local var val 00:05:42.305 02:29:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.305 02:29:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.305 02:29:07 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:42.305 02:29:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:42.305 02:29:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.305 02:29:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4744868 kB' 'MemUsed: 7506220 kB' 'Active: 1901112 kB' 'Inactive: 4107032 kB' 'Active(anon): 138720 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5878448 kB' 'Mapped: 73068 kB' 'AnonPages: 148000 kB' 'Shmem: 2616 kB' 'KernelStack: 4580 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 354968 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.305 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.305 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # continue 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.306 02:29:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.306 02:29:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.306 02:29:07 -- 
setup/common.sh@33 -- # echo 0 00:05:42.306 02:29:07 -- setup/common.sh@33 -- # return 0 00:05:42.306 02:29:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:42.306 02:29:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:42.306 02:29:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:42.306 node0=512 expecting 512 00:05:42.306 02:29:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:42.306 02:29:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:42.306 00:05:42.306 real 0m0.644s 00:05:42.306 user 0m0.240s 00:05:42.306 sys 0m0.437s 00:05:42.306 02:29:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.306 02:29:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 ************************************ 00:05:42.306 END TEST per_node_1G_alloc 00:05:42.306 ************************************ 00:05:42.306 02:29:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:42.306 02:29:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.306 02:29:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.306 02:29:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 ************************************ 00:05:42.306 START TEST even_2G_alloc 00:05:42.306 ************************************ 00:05:42.306 02:29:07 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:42.306 02:29:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:42.306 02:29:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:42.306 02:29:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:42.306 02:29:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:42.306 02:29:07 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:42.306 02:29:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.306 02:29:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:42.306 02:29:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.306 02:29:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.306 02:29:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.306 02:29:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:42.306 02:29:07 -- setup/hugepages.sh@83 -- # : 0 00:05:42.306 02:29:07 -- setup/hugepages.sh@84 -- # : 0 00:05:42.306 02:29:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.306 02:29:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:42.306 02:29:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:42.306 02:29:07 -- setup/hugepages.sh@153 -- # setup output 00:05:42.306 02:29:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.306 02:29:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:42.565 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.504 02:29:08 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:43.504 02:29:08 -- setup/hugepages.sh@89 -- # local node 
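[editor note] The get_test_nr_hugepages trace above turns the requested 2097152 into nr_hugepages=1024; the meminfo dumps that follow are consistent with that if the size is in kB (2 GiB) divided by the 2048 kB default hugepage size, since they report 'Hugetlb: 2097152 kB' = 1024 x 2048 kB. A minimal sketch of that arithmetic, with the kB unit as an assumption (the trace itself does not show units):

    size_kb=2097152            # requested even-allocation size (assumed kB, i.e. 2 GiB)
    default_hugepage_kb=2048   # matches 'Hugepagesize: 2048 kB' in the dumps below
    nr_hugepages=$(( size_kb / default_hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024, as traced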
00:05:43.504 02:29:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.504 02:29:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.504 02:29:08 -- setup/hugepages.sh@92 -- # local surp 00:05:43.504 02:29:08 -- setup/hugepages.sh@93 -- # local resv 00:05:43.504 02:29:08 -- setup/hugepages.sh@94 -- # local anon 00:05:43.504 02:29:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.504 02:29:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.504 02:29:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.504 02:29:08 -- setup/common.sh@18 -- # local node= 00:05:43.504 02:29:08 -- setup/common.sh@19 -- # local var val 00:05:43.504 02:29:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.504 02:29:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.504 02:29:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.504 02:29:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.504 02:29:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.504 02:29:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3694564 kB' 'MemAvailable: 9504108 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901296 kB' 'Inactive: 4107032 kB' 'Active(anon): 138904 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 148304 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355296 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91312 kB' 'KernelStack: 4532 kB' 'PageTables: 3956 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 628060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.504 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.504 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # 
continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.505 02:29:08 -- setup/common.sh@33 -- # echo 0 00:05:43.505 02:29:08 -- setup/common.sh@33 -- # return 0 00:05:43.505 02:29:08 -- setup/hugepages.sh@97 -- # anon=0 00:05:43.505 02:29:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.505 02:29:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.505 02:29:08 -- setup/common.sh@18 -- # local node= 00:05:43.505 02:29:08 -- setup/common.sh@19 -- # local var val 00:05:43.505 02:29:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.505 02:29:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.505 02:29:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.505 02:29:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.505 02:29:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.505 02:29:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3694564 kB' 'MemAvailable: 9504108 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901468 kB' 'Inactive: 4107032 kB' 'Active(anon): 139076 kB' 
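[editor note] Every get_meminfo call traced in this section follows the same pattern: snapshot the meminfo file, split each line on ': ', and return the value of one key, with 0 as the default when no line matches (which is why each scan is a long run of 'continue' steps followed by 'echo 0'). A self-contained sketch of that pattern under those assumptions; get_meminfo_sketch is a hypothetical name, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Scan a meminfo file for one key and print its value, defaulting to 0.
    # With a node argument, read the per-node file and strip its "Node N " prefix.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0
    }
    get_meminfo_sketch HugePages_Total     # -> 1024 on this box
    get_meminfo_sketch HugePages_Surp 0    # -> 0 (node 0 counter)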
'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 148180 kB' 'Mapped: 73108 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355296 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91312 kB' 'KernelStack: 4500 kB' 'PageTables: 3900 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 628060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.505 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.505 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- 
setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.506 02:29:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:43.506 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.506 02:29:08 -- setup/common.sh@33 -- # echo 0 00:05:43.506 02:29:08 -- setup/common.sh@33 -- # return 0 00:05:43.506 02:29:08 -- setup/hugepages.sh@99 -- # surp=0 00:05:43.506 02:29:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.506 02:29:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.506 02:29:08 -- setup/common.sh@18 -- # local node= 00:05:43.506 02:29:08 -- setup/common.sh@19 -- # local var val 00:05:43.506 02:29:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.506 02:29:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.506 02:29:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.506 02:29:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.506 02:29:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.507 02:29:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3694848 kB' 'MemAvailable: 9504392 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901376 kB' 'Inactive: 4107032 kB' 'Active(anon): 138984 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 148224 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355304 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91320 kB' 'KernelStack: 4504 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 633088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Cached == 
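[editor note] For context on the counters this pass reads (general /proc/meminfo semantics, not anything this run configures): HugePages_Surp counts pages allocated beyond the persistent pool, which can only be non-zero when hugepage overcommit is enabled, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in, so the 0/0 readings here are the expected quiet-system values. A quick way to inspect the same state on a live machine:

    cat /proc/sys/vm/nr_hugepages              # persistent pool size (1024 here)
    cat /proc/sys/vm/nr_overcommit_hugepages   # 0 unless surplus pages are allowed
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo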
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # 
continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.507 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.507 02:29:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 
02:29:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.508 02:29:08 -- setup/common.sh@33 -- # echo 0 00:05:43.508 02:29:08 -- setup/common.sh@33 -- # return 0 00:05:43.508 02:29:08 -- setup/hugepages.sh@100 -- # resv=0 00:05:43.508 nr_hugepages=1024 00:05:43.508 resv_hugepages=0 00:05:43.508 02:29:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.508 02:29:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.508 surplus_hugepages=0 00:05:43.508 anon_hugepages=0 00:05:43.508 02:29:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.508 02:29:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.508 02:29:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.508 02:29:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.508 02:29:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.508 02:29:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.508 02:29:08 -- setup/common.sh@18 -- # local node= 00:05:43.508 02:29:08 -- setup/common.sh@19 -- # local var val 00:05:43.508 02:29:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.508 02:29:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.508 02:29:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.508 02:29:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.508 02:29:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.508 02:29:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3695304 kB' 'MemAvailable: 9504848 kB' 'Buffers: 42092 kB' 'Cached: 5836356 kB' 'SwapCached: 0 kB' 'Active: 1901000 kB' 'Inactive: 4107032 kB' 'Active(anon): 138608 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 147628 kB' 'Mapped: 73156 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 355368 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91384 kB' 'KernelStack: 4560 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
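[editor note] With anon=0, surp=0 and resv=0 collected, the two '(( 1024 == nr_hugepages + surp + resv ))' checks traced above reduce to comparing the configured page count against the kernel's HugePages_Total. A condensation of that identity, reusing the hypothetical get_meminfo_sketch from the earlier note:

    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool verified"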
'CommitLimit: 5076968 kB' 'Committed_AS: 632664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.508 02:29:08 -- setup/common.sh@32 -- # continue 00:05:43.508 02:29:08 -- setup/common.sh@31 -- 
# IFS=': '
00:05:43.508 02:29:08 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 tests each remaining /proc/meminfo field, Inactive(file) through CmaFree, against HugePages_Total; every one falls through via continue]
00:05:43.509 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:43.509 02:29:08 -- setup/common.sh@33 -- # echo 1024
00:05:43.509 02:29:08 -- setup/common.sh@33 -- # return 0
00:05:43.509 02:29:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:43.509 02:29:08 -- setup/hugepages.sh@112 -- # get_nodes
00:05:43.509 02:29:08 -- setup/hugepages.sh@27 -- # local node
00:05:43.509 02:29:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:43.509 02:29:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:43.509 02:29:08 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:43.509 02:29:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:43.509 02:29:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:43.509 02:29:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
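The meminfo lookups traced above and below all follow one pattern: pick /proc/meminfo or a node's sysfs meminfo file, strip the "Node N " prefix the sysfs variant adds, and echo the value of the first field whose name matches. A minimal reconstruction of that helper, with names taken from the trace (the shipped setup/common.sh may differ in detail):

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {                        # usage: get_meminfo <field> [<node>]
        local get=$1 node=$2 line var val _ mem_f mem
        mem_f=/proc/meminfo
        # with a node id, prefer that node's own meminfo under sysfs
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs prefixes every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }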
00:05:43.509 02:29:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:43.509 02:29:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.509 02:29:08 -- setup/common.sh@18 -- # local node=0
00:05:43.509 02:29:08 -- setup/common.sh@19 -- # local var val
00:05:43.509 02:29:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:43.509 02:29:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.509 02:29:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:43.509 02:29:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:43.509 02:29:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.509 02:29:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.509 02:29:08 -- setup/common.sh@31 -- # IFS=': '
00:05:43.509 02:29:08 -- setup/common.sh@31 -- # read -r var val _
00:05:43.509 02:29:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3695304 kB' 'MemUsed: 8555784 kB' 'Active: 1901260 kB' 'Inactive: 4107032 kB' 'Active(anon): 138868 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5878448 kB' 'Mapped: 73156 kB' 'AnonPages: 147888 kB' 'Shmem: 2616 kB' 'KernelStack: 4628 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 355368 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 91384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: node0 fields MemTotal through HugePages_Free each fail the HugePages_Surp match and continue]
00:05:43.510 02:29:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.510 02:29:08 -- setup/common.sh@33 -- # echo 0
00:05:43.510 02:29:08 -- setup/common.sh@33 -- # return 0
00:05:43.510 02:29:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:43.510 02:29:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:43.510 02:29:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:43.510 node0=1024 expecting 1024
00:05:43.510 02:29:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:43.510 02:29:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:43.510 real 0m1.109s
00:05:43.510 user 0m0.272s
00:05:43.510 sys 0m0.871s
00:05:43.510 02:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:43.510 ************************************
00:05:43.510 END TEST even_2G_alloc
00:05:43.510 ************************************
00:05:43.510 02:29:08 -- common/autotest_common.sh@10 -- # set +x
00:05:43.510 02:29:08 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:43.510 02:29:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:43.510 02:29:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:43.510 02:29:08 -- common/autotest_common.sh@10 -- # set +x
00:05:43.510 ************************************
00:05:43.510 START TEST odd_alloc
00:05:43.510 ************************************
00:05:43.510 02:29:08 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:43.510 02:29:08 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:43.510 02:29:08 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:43.510 02:29:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:43.510 02:29:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:43.510 02:29:08 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:43.510 02:29:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:43.510 02:29:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:43.510 02:29:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:43.510 02:29:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:43.510 02:29:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:43.510 02:29:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:43.510 02:29:08 -- setup/hugepages.sh@83 -- # : 0
00:05:43.510 02:29:08 -- setup/hugepages.sh@84 -- # : 0
00:05:43.510 02:29:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:43.510 02:29:08 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:43.510 02:29:08 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:43.510 02:29:08 -- setup/hugepages.sh@160 -- # setup output
00:05:43.510 02:29:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.510 02:29:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:43.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:43.770 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
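The odd_alloc prologue above turns HUGEMEM=2049 (MB) into the deliberately odd page count nr_hugepages=1025. A sketch of that step, under the assumption (inferred from the traced values, not read from the script) that the kB size is divided by the 2048 kB default hugepage size and rounded up: 2098176 / 2048 = 1024.5, which lands on 1025.

    get_test_nr_hugepages() {
        local size=$1                    # requested size in kB, e.g. 2098176
        local default_hugepages=2048     # Hugepagesize: 2048 kB per /proc/meminfo
        (( size >= default_hugepages )) || return 1
        # integer ceiling division; the rounding direction is an inference
        nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    }

    get_test_nr_hugepages 2098176 && echo "$nr_hugepages"   # -> 1025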
00:05:44.340 02:29:09 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:44.340 02:29:09 -- setup/hugepages.sh@89 -- # local node
00:05:44.340 02:29:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.340 02:29:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.340 02:29:09 -- setup/hugepages.sh@92 -- # local surp
00:05:44.340 02:29:09 -- setup/hugepages.sh@93 -- # local resv
00:05:44.340 02:29:09 -- setup/hugepages.sh@94 -- # local anon
00:05:44.340 02:29:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.340 02:29:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.340 02:29:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.340 02:29:09 -- setup/common.sh@18 -- # local node=
00:05:44.340 02:29:09 -- setup/common.sh@19 -- # local var val
00:05:44.340 02:29:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.340 02:29:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.340 02:29:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.340 02:29:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.340 02:29:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.340 02:29:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.341 02:29:09 -- setup/common.sh@31 -- # IFS=': '
00:05:44.341 02:29:09 -- setup/common.sh@31 -- # read -r var val _
00:05:44.341 02:29:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3706656 kB' 'MemAvailable: 9516204 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888600 kB' 'Inactive: 4107036 kB' 'Active(anon): 126208 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105244 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135440 kB' 'Mapped: 72620 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354800 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90816 kB' 'KernelStack: 4400 kB' 'PageTables: 2952 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 604060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14036 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: fields MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue]
00:05:44.341 02:29:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.341 02:29:09 -- setup/common.sh@33 -- # echo 0
00:05:44.341 02:29:09 -- setup/common.sh@33 -- # return 0
00:05:44.341 02:29:09 -- setup/hugepages.sh@97 -- # anon=0
00:05:44.341 02:29:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:44.342 02:29:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.342 02:29:09 -- setup/common.sh@18 -- # local node=
00:05:44.342 02:29:09 -- setup/common.sh@19 -- # local var val
00:05:44.342 02:29:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.342 02:29:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.342 02:29:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.342 02:29:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.342 02:29:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.342 02:29:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.342 02:29:09 -- setup/common.sh@31 -- # IFS=': '
00:05:44.342 02:29:09 -- setup/common.sh@31 -- # read -r var val _
00:05:44.342 02:29:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3706656 kB' 'MemAvailable: 9516204 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888272 kB' 'Inactive: 4107036 kB' 'Active(anon): 125880 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105244 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135528 kB' 'Mapped: 72572 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354800 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90816 kB' 'KernelStack: 4384 kB' 'PageTables: 2940 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 609432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14052 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: fields MemTotal through HugePages_Rsvd each fail the HugePages_Surp match and continue]
00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.343 02:29:09 -- setup/common.sh@33 -- # echo 0
00:05:44.343 02:29:09 -- setup/common.sh@33 -- # return 0
00:05:44.343 02:29:09 -- setup/hugepages.sh@99 -- # surp=0
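Condensed, the verification pass running here (including the HugePages_Rsvd and HugePages_Total lookups that follow below) amounts to four lookups and one identity. A sketch reusing the get_meminfo helper reconstructed earlier; the function name comes from the trace, the control flow is inferred from the echoed values, and nr_hugepages is the global set by get_test_nr_hugepages:

    verify_nr_hugepages() {
        local anon=0 surp resv nr
        # AnonHugePages is only consulted when transparent hugepages are not [never]
        [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]] &&
            anon=$(get_meminfo AnonHugePages)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        nr=$(get_meminfo HugePages_Total)
        printf '%s\n' "nr_hugepages=$nr" "resv_hugepages=$resv" \
            "surplus_hugepages=$surp" "anon_hugepages=$anon"
        # the identity the test asserts: pages present == pages requested (+surplus+reserved)
        (( nr == nr_hugepages + surp + resv ))
    }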
setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.343 02:29:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.343 02:29:09 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the IFS=': ' read / compare loop repeats for Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, HugePages_Total, HugePages_Free; none match HugePages_Rsvd, each pass ends in continue ...]
00:05:44.344 02:29:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:44.344 02:29:09 -- setup/common.sh@33 -- # echo 0
00:05:44.344 02:29:09 -- setup/common.sh@33 -- # return 0
00:05:44.344 02:29:09 -- setup/hugepages.sh@100 -- # resv=0
00:05:44.344 nr_hugepages=1025
00:05:44.344 02:29:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:44.344 resv_hugepages=0
00:05:44.344 02:29:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:44.344 surplus_hugepages=0
00:05:44.344 02:29:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:44.344 anon_hugepages=0
00:05:44.344 02:29:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
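The run above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo source with mapfile, strips any "Node N " prefix, then walks the lines with IFS=': ' read -r var val _, continuing past every key until the requested one matches and echoing its value. A minimal standalone sketch of that pattern, inferred from the xtrace rather than copied from the repo:

    get_meminfo() { # usage: get_meminfo <key> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # a node argument switches to the node-local counters
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # node files prefix each line with "Node N "; drop it like common.sh@29 does
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on this box, matching the trace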
00:05:44.344 02:29:09 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:44.344 02:29:09 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:44.344 02:29:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:44.344 02:29:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:44.344 02:29:09 -- setup/common.sh@18 -- # local node=
00:05:44.344 02:29:09 -- setup/common.sh@19 -- # local var val
00:05:44.344 02:29:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.344 02:29:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.344 02:29:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.344 02:29:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.344 02:29:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.344 02:29:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.344 02:29:09 -- setup/common.sh@31 -- # IFS=': '
00:05:44.344 02:29:09 -- setup/common.sh@31 -- # read -r var val _
00:05:44.344 02:29:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3707192 kB' 'MemAvailable: 9516740 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888276 kB' 'Inactive: 4107036 kB' 'Active(anon): 125884 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762392 kB' 'Inactive(file): 4105244 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135104 kB' 'Mapped: 72572 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354800 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90816 kB' 'KernelStack: 4352 kB' 'PageTables: 2888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 608932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[... setup/common.sh@31-32: the read / compare loop repeats for MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree; none match HugePages_Total, each pass ends in continue ...]
00:05:44.345 02:29:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:44.345 02:29:09 -- setup/common.sh@33 -- # echo 1025
00:05:44.345 02:29:09 -- setup/common.sh@33 -- # return 0
00:05:44.345 02:29:09 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:44.345 02:29:09 -- setup/hugepages.sh@112 -- # get_nodes
00:05:44.345 02:29:09 -- setup/hugepages.sh@27 -- # local node
00:05:44.345 02:29:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:44.345 02:29:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:44.345 02:29:09 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:44.345 02:29:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:44.345 02:29:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:44.345 02:29:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:44.345 02:29:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
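With the pool-wide invariant confirmed at hugepages.sh@110 (1025 == nr_hugepages + surp + resv), the test turns to per-node accounting: get_nodes records what each node under /sys/devices/system/node holds, and the @115-@117 loop adds the reserved and surplus pages to each node's expected count before the node0=1025 comparison that follows. A hedged sketch of that loop, reusing the get_meminfo sketch above (the array names follow the trace; the exact bookkeeping in hugepages.sh may differ):

    declare -a nodes_test=([0]=1025) nodes_sys=([0]=1025)  # state set up earlier in the run
    resv=$(get_meminfo HugePages_Rsvd)   # node meminfo exposes no Rsvd line, so use the global value
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # hugepages.sh@116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # hugepages.sh@117
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done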
00:05:44.345 02:29:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.346 02:29:09 -- setup/common.sh@18 -- # local node=0
00:05:44.346 02:29:09 -- setup/common.sh@19 -- # local var val
00:05:44.346 02:29:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.346 02:29:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.346 02:29:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:44.346 02:29:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:44.346 02:29:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.346 02:29:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.346 02:29:09 -- setup/common.sh@31 -- # IFS=': '
00:05:44.346 02:29:09 -- setup/common.sh@31 -- # read -r var val _
00:05:44.346 02:29:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3706964 kB' 'MemUsed: 8544124 kB' 'Active: 1887956 kB' 'Inactive: 4107032 kB' 'Active(anon): 125560 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762396 kB' 'Inactive(file): 4105240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5878452 kB' 'Mapped: 72524 kB' 'AnonPages: 134572 kB' 'Shmem: 2616 kB' 'KernelStack: 4388 kB' 'PageTables: 2840 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 354800 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: the read / compare loop repeats over the node0 keys (MemTotal through HugePages_Free); none match HugePages_Surp, each pass ends in continue ...]
00:05:44.346 02:29:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.346 02:29:09 -- setup/common.sh@33 -- # echo 0
00:05:44.346 02:29:09 -- setup/common.sh@33 -- # return 0
00:05:44.346 02:29:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:44.346 02:29:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:44.346 02:29:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:44.346 02:29:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:44.346 node0=1025 expecting 1025
00:05:44.346 02:29:09 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:44.346 02:29:09 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:44.346
00:05:44.346 real 0m0.868s
00:05:44.346 user 0m0.255s
00:05:44.346 sys 0m0.642s
00:05:44.346 02:29:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:44.346 ************************************
00:05:44.346 END TEST odd_alloc
00:05:44.346 ************************************
00:05:44.346 02:29:09 -- common/autotest_common.sh@10 -- # set +x
00:05:44.346 02:29:09 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:44.346 02:29:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:44.346 02:29:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:44.346 02:29:09 -- common/autotest_common.sh@10 -- # set +x
00:05:44.347 ************************************
00:05:44.347 START TEST custom_alloc
00:05:44.347 ************************************
00:05:44.347 02:29:09 -- common/autotest_common.sh@1104 -- # custom_alloc
00:05:44.347 02:29:09 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:44.347 02:29:09 -- setup/hugepages.sh@169 -- # local node
00:05:44.347 02:29:09 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:44.347 02:29:09 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:44.347 02:29:09 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:44.347 02:29:09 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:44.347 02:29:09 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:44.347 02:29:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:44.347 02:29:09 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:44.347 02:29:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:44.347 02:29:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:44.347 02:29:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:44.347 02:29:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:44.347 02:29:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@83 -- # : 0
00:05:44.347 02:29:09 -- setup/hugepages.sh@84 -- # : 0
00:05:44.347 02:29:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
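custom_alloc's first step is plain arithmetic: get_test_nr_hugepages 1048576 converts a kB budget into a page count by dividing by the default hugepage size (2048 kB, per the Hugepagesize line in every snapshot above), which is where the nr_hugepages=512 at hugepages.sh@57 comes from. A sketch of that conversion, with units in kB as inferred from the trace:

    declare -a nodes_test
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) on this machine
    size=1048576                                    # requested pool in kB, i.e. 1 GiB
    (( size >= default_hugepages )) || exit 1       # the hugepages.sh@55 sanity check
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512 pages
    nodes_test[0]=$nr_hugepages                     # single-node box: all pages on node 0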
00:05:44.347 02:29:09 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:44.347 02:29:09 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:44.347 02:29:09 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:44.347 02:29:09 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:44.347 02:29:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:44.347 02:29:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:44.347 02:29:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:44.347 02:29:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:44.347 02:29:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:44.347 02:29:09 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:44.347 02:29:09 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:44.347 02:29:09 -- setup/hugepages.sh@78 -- # return 0
00:05:44.347 02:29:09 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:44.347 02:29:09 -- setup/hugepages.sh@187 -- # setup output
00:05:44.347 02:29:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:44.347 02:29:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:44.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:44.606 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:45.176 02:29:09 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:45.176 02:29:09 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:45.176 02:29:09 -- setup/hugepages.sh@89 -- # local node
00:05:45.176 02:29:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:45.176 02:29:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:45.176 02:29:09 -- setup/hugepages.sh@92 -- # local surp
00:05:45.176 02:29:09 -- setup/hugepages.sh@93 -- # local resv
00:05:45.176 02:29:09 -- setup/hugepages.sh@94 -- # local anon
00:05:45.176 02:29:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:45.176 02:29:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:45.176 02:29:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:45.176 02:29:09 -- setup/common.sh@18 -- # local node=
00:05:45.176 02:29:09 -- setup/common.sh@19 -- # local var val
00:05:45.176 02:29:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.176 02:29:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.176 02:29:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.176 02:29:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.176 02:29:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.176 02:29:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.176 02:29:09 -- setup/common.sh@31 -- # IFS=': '
00:05:45.176 02:29:09 -- setup/common.sh@31 -- # read -r var val _
00:05:45.176 02:29:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4757952 kB' 'MemAvailable: 10567500 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888736 kB' 'Inactive: 4107024 kB' 'Active(anon): 126332 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762404 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135736 kB' 'Mapped: 72488 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354676 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90692 kB' 'KernelStack: 4372 kB' 'PageTables: 3504 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 599716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[... setup/common.sh@31-32: the read / compare loop repeats over the same keys as above (MemTotal through HardwareCorrupted); none match AnonHugePages, each pass ends in continue ...]
00:05:45.177 02:29:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:45.177 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:45.177 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:45.177 02:29:10 -- setup/hugepages.sh@97 -- # anon=0
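verify_nr_hugepages only counts anonymous hugepages when transparent hugepages can actually be handed out: the hugepages.sh@96 test above checks that the active THP policy, the bracketed entry in sysfs ([madvise] on this host), is not [never] before asking get_meminfo for AnonHugePages. A sketch of that guard, assuming the standard sysfs path:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0                              # THP disabled: nothing to count
    fi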
00:05:45.177 02:29:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:45.177 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.177 02:29:10 -- setup/common.sh@18 -- # local node=
00:05:45.177 02:29:10 -- setup/common.sh@19 -- # local var val
00:05:45.177 02:29:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.177 02:29:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.177 02:29:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.177 02:29:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.177 02:29:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.177 02:29:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.177 02:29:10 -- setup/common.sh@31 -- # IFS=': '
00:05:45.177 02:29:10 -- setup/common.sh@31 -- # read -r var val _
00:05:45.177 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4758208 kB' 'MemAvailable: 10567756 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888780 kB' 'Inactive: 4107024 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762404 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135076 kB' 'Mapped: 72488 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354476 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90492 kB' 'KernelStack: 4364 kB' 'PageTables: 3276 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 605088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[... setup/common.sh@31-32: the read / compare loop repeats over the same keys as above (MemTotal through HugePages_Rsvd); none match HugePages_Surp, each pass ends in continue ...]
00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.179 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:45.179 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:45.179 02:29:10 -- setup/hugepages.sh@99 -- # surp=0
kB' 'Mapped: 72560 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354324 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90340 kB' 'KernelStack: 4340 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 605088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB' 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.179 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.179 02:29:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # continue 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.180 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.180 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.180 02:29:10 -- setup/common.sh@33 -- # echo 0 00:05:45.180 02:29:10 -- setup/common.sh@33 -- # return 0 00:05:45.180 02:29:10 -- setup/hugepages.sh@100 -- # resv=0 00:05:45.180 nr_hugepages=512 00:05:45.180 02:29:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:45.180 02:29:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.180 resv_hugepages=0 00:05:45.180 
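Every lookup in this trace runs the same setup/common.sh helper, so the repeated xtrace is easier to follow with the helper reconstructed once. The sketch below is pieced together from the traced lines (local get/node, the mem_f fallback at @22-@24, the mapfile at @28, the extglob prefix strip at @29, and the IFS=': ' read loop); it is an inference from the log, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob  # the "Node +([0-9]) " strip below needs extended globbing

    # Sketch of get_meminfo as reconstructed from the xtrace: print the value
    # of one meminfo field, reading the per-node sysfs copy when a node id is
    # given, else /proc/meminfo.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip that off.
        mem=("${mem[@]#Node +([0-9]) }")
        # The field-by-field scan that dominates the trace above: IFS splits
        # "HugePages_Rsvd: 0" into var=HugePages_Rsvd, val=0.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd  # prints 0 against the snapshot captured above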
00:05:45.180 surplus_hugepages=0
00:05:45.180 02:29:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:45.180 02:29:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:45.180 anon_hugepages=0
00:05:45.180 02:29:10 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:45.180 02:29:10 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:45.180 02:29:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:45.180 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:45.180 02:29:10 -- setup/common.sh@18 -- # local node=
[... get_meminfo prologue as above elided: mem_f=/proc/meminfo, mapfile -t mem, "Node <N> " prefix strip, IFS=': ' read loop ...]
00:05:45.180 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4758880 kB' 'MemAvailable: 10568428 kB' 'Buffers: 42092 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888592 kB' 'Inactive: 4107024 kB' 'Active(anon): 126188 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762404 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135284 kB' 'Mapped: 72560 kB' 'Shmem: 2616 kB' 'KReclaimable: 263984 kB' 'Slab: 354324 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90340 kB' 'KernelStack: 4408 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 610224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
[... per-field scan elided: every field is checked against HugePages_Total and skipped with continue until it matches ...]
00:05:45.181 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:45.181 02:29:10 -- setup/common.sh@33 -- # echo 512
00:05:45.181 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:45.181 02:29:10 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:45.181 02:29:10 -- setup/hugepages.sh@112 -- # get_nodes
00:05:45.181 02:29:10 -- setup/hugepages.sh@27 -- # local node
00:05:45.181 02:29:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:45.181 02:29:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:45.181 02:29:10 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:45.181 02:29:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:45.181 02:29:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:45.181 02:29:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:45.181 02:29:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:45.181 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.181 02:29:10 -- setup/common.sh@18 -- # local node=0
00:05:45.181 02:29:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.181 02:29:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:45.181 02:29:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:45.181 02:29:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.181 02:29:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.181 02:29:10 -- setup/common.sh@31 -- # IFS=': '
00:05:45.181 02:29:10 -- setup/common.sh@31 -- # read -r var val _
00:05:45.181 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 4759400 kB' 'MemUsed: 7491688 kB' 'Active: 1888592 kB' 'Inactive: 4107024 kB' 'Active(anon): 126188 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762404 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5878452 kB' 'Mapped: 72820 kB' 'AnonPages: 135160 kB' 'Shmem: 2616 kB' 'KernelStack: 4408 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 354324 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 90340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
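The call just traced, get_meminfo HugePages_Surp 0, shows the node-aware branch of the helper: with a node id present, mem_f is reassigned from /proc/meminfo to the per-node sysfs file, whose lines carry the "Node 0 " prefix that the @29 expansion strips. A hedged usage sketch against the reconstruction above (variable names are ours; the values in the comments are the ones this run printed):

    # Global vs. per-node lookups with the get_meminfo sketch from earlier:
    surp_global=$(get_meminfo HugePages_Surp)     # scans /proc/meminfo                        -> 0
    surp_node0=$(get_meminfo HugePages_Surp 0)    # scans /sys/devices/system/node/node0/meminfo -> 0
    total_node0=$(get_meminfo HugePages_Total 0)  # -> 512 huge pages resident on node 0
    echo "node0 surplus=$surp_node0 total=$total_node0"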
[... per-field scan of the node0 snapshot elided: every field is checked against HugePages_Surp and skipped with continue ...]
00:05:45.182 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.182 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:45.182 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:45.182 02:29:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:45.182 02:29:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:45.182 02:29:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:45.182 02:29:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:45.182 node0=512 expecting 512
00:05:45.182 02:29:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:45.182 02:29:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:45.182
00:05:45.182 real 0m0.785s
00:05:45.182 user 0m0.283s
00:05:45.182 sys  0m0.526s
00:05:45.182 02:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:45.182 02:29:10 -- common/autotest_common.sh@10 -- # set +x
00:05:45.182 ************************************
00:05:45.182 END TEST custom_alloc
00:05:45.182 ************************************
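Read back from the trace, the check custom_alloc just passed is a simple accounting identity: HugePages_Total reported by the kernel must equal the pages the test requested plus surplus plus reserved, and node 0 must hold all of them. A minimal sketch with this run's values (variable names follow the traced script):

    # Values read out of this run's trace:
    nr_hugepages=512  # pages the test requested
    surp=0            # HugePages_Surp from /proc/meminfo
    resv=0            # HugePages_Rsvd from /proc/meminfo
    total=512         # HugePages_Total from /proc/meminfo

    # The identity mirrored from hugepages.sh@107/@109/@130:
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "node0=$total expecting $nr_hugepages"  # the log line printed above
    fi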
00:05:45.182 02:29:10 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:45.182 02:29:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:45.182 02:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:45.182 02:29:10 -- common/autotest_common.sh@10 -- # set +x
00:05:45.182 ************************************
00:05:45.182 START TEST no_shrink_alloc
00:05:45.182 ************************************
00:05:45.182 02:29:10 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:45.182 02:29:10 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:45.182 02:29:10 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:45.182 02:29:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:45.182 02:29:10 -- setup/hugepages.sh@51 -- # shift
00:05:45.182 02:29:10 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:05:45.182 02:29:10 -- setup/hugepages.sh@52 -- # local node_ids
00:05:45.182 02:29:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:45.182 02:29:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:45.182 02:29:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:45.182 02:29:10 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:45.182 02:29:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:45.182 02:29:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:45.182 02:29:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:45.182 02:29:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:45.182 02:29:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:45.182 02:29:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:45.182 02:29:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:45.182 02:29:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:45.182 02:29:10 -- setup/hugepages.sh@73 -- # return 0
00:05:45.182 02:29:10 -- setup/hugepages.sh@198 -- # setup output
00:05:45.182 02:29:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.182 02:29:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:45.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:45.442 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.020 02:29:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:46.020 02:29:10 -- setup/hugepages.sh@89 -- # local node
00:05:46.020 02:29:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.020 02:29:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.020 02:29:10 -- setup/hugepages.sh@92 -- # local surp
00:05:46.020 02:29:10 -- setup/hugepages.sh@93 -- # local resv
00:05:46.020 02:29:10 -- setup/hugepages.sh@94 -- # local anon
00:05:46.020 02:29:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.020 02:29:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.020 02:29:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.020 02:29:10 -- setup/common.sh@18 -- # local node=
[... get_meminfo prologue as above elided: mem_f=/proc/meminfo, mapfile -t mem, "Node <N> " prefix strip, IFS=': ' read loop ...]
00:05:46.020 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709988 kB' 'MemAvailable: 9519560 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888212 kB' 'Inactive: 4107024 kB' 'Active(anon): 125800 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762412 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135480 kB' 'Mapped: 72548 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354472 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90472 kB' 'KernelStack: 4372 kB' 'PageTables: 3072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 598424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14052 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
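Two details of the no_shrink_alloc prologue above are worth a gloss: the sizing arithmetic behind nr_hugepages=1024, and the transparent-hugepage gate at hugepages.sh@96. The sketch below restates both, reusing the get_meminfo reconstruction from earlier; the sysfs path in step 2 is an assumption based on the "always [madvise] never" string, since the trace does not show which file was read:

    # 1) Sizing: get_test_nr_hugepages 2097152 0 requests 2097152 kB on node 0;
    #    with the 2048 kB Hugepagesize in the snapshots, that works out to:
    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))  # -> 1024, the nr_hugepages set above
    # (consistent with "Hugetlb: 2097152 kB" once the 1024 pages are allocated)

    # 2) THP gate: AnonHugePages is only collected when transparent hugepages
    #    are not pinned to "[never]". Path is assumed, as noted above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        get_meminfo AnonHugePages  # -> 0 on this run, hence anon=0 below
    fi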
setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.020 02:29:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.020 02:29:10 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.020 02:29:10 -- setup/common.sh@32 -- # continue
[... SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail the AnonHugePages match and continue ...]
00:05:46.021 02:29:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.021 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:46.021 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:46.021 02:29:10 -- setup/hugepages.sh@97 -- # anon=0
00:05:46.021 02:29:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.021 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.021 02:29:10 -- setup/common.sh@18 -- # local node=
00:05:46.021 02:29:10 -- setup/common.sh@19 -- # local var val
00:05:46.021 02:29:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.021 02:29:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.021 02:29:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.021 02:29:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.021 02:29:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.021 02:29:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.021 02:29:10 -- setup/common.sh@31 -- # IFS=': '
00:05:46.021 02:29:10 -- setup/common.sh@31 -- # read -r var val _
00:05:46.021 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3710248 kB' 'MemAvailable: 9519820 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888472 kB' 'Inactive: 4107024 kB' 'Active(anon): 126060 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762412 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135352 kB' 'Mapped: 72548 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354472 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90472 kB' 'KernelStack: 4372 kB' 'PageTables: 3072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 597208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14052 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
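What the trace above is doing, stripped of the xtrace noise: get_meminfo snapshots the meminfo file once with mapfile, then splits each 'Key: value' line on the ': ' IFS with read -r var val _ and echoes the value of the first key that matches the requested one. A minimal standalone sketch of that lookup, assuming Linux and bash (the name get_meminfo_sketch is illustrative, not the SPDK helper itself):

    # Sketch of the /proc/meminfo key lookup traced above (illustrative,
    # not the SPDK original; the real helper also handles per-node files).
    get_meminfo_sketch() {
        local get=$1 var val _
        # Split each "Key:   value kB" line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1    # key not found
    }

    # Usage: get_meminfo_sketch HugePages_Surp   -> prints e.g. 0

The backslash-escaped pattern in the trace ([[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]) is just xtrace's rendering of a comparison where every character is escaped to force a literal match; quoting "$get" in the sketch disables globbing the same way.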
00:05:46.021 02:29:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.021 02:29:10 -- setup/common.sh@32 -- # continue
[... MemFree through HugePages_Rsvd each fail the HugePages_Surp match and continue ...]
00:05:46.022 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.022 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:46.022 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:46.022 02:29:10 -- setup/hugepages.sh@99 -- # surp=0
00:05:46.022 02:29:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:46.022 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:46.022 02:29:10 -- setup/common.sh@18 -- # local node=
00:05:46.022 02:29:10 -- setup/common.sh@19 -- # local var val
00:05:46.022 02:29:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.022 02:29:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.022 02:29:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.022 02:29:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.022 02:29:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.022 02:29:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.022 02:29:10 -- setup/common.sh@31 -- # IFS=': '
00:05:46.022 02:29:10 -- setup/common.sh@31 -- # read -r var val _
00:05:46.022 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3710248 kB' 'MemAvailable: 9519820 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888732 kB' 'Inactive: 4107024 kB' 'Active(anon): 126320 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762412 kB' 'Inactive(file): 4105232 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135352 kB' 'Mapped: 72548 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354472 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90472 kB' 'KernelStack: 4372 kB' 'PageTables: 3072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 597208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
00:05:46.022 02:29:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.022 02:29:10 -- setup/common.sh@32 -- # continue
[... MemFree through HugePages_Free each fail the HugePages_Rsvd match and continue ...]
00:05:46.022 02:29:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.022 02:29:10 -- setup/common.sh@33 -- # echo 0
00:05:46.022 02:29:10 -- setup/common.sh@33 -- # return 0
00:05:46.022 02:29:10 -- setup/hugepages.sh@100 -- # resv=0
00:05:46.022 02:29:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:46.022 nr_hugepages=1024
00:05:46.022 02:29:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.022 resv_hugepages=0
00:05:46.023 02:29:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.023 surplus_hugepages=0
00:05:46.023 02:29:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.023 anon_hugepages=0
00:05:46.023 02:29:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.023 02:29:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:46.023 02:29:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.023 02:29:10 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.023 02:29:10 -- setup/common.sh@18 -- # local node=
00:05:46.023 02:29:10 -- setup/common.sh@19 -- # local var val
00:05:46.023 02:29:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.023 02:29:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.023 02:29:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.023 02:29:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.023 02:29:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.023 02:29:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.023 02:29:10 -- setup/common.sh@31 -- # IFS=': '
00:05:46.023 02:29:10 -- setup/common.sh@31 -- # read -r var val _
00:05:46.023 02:29:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3710264 kB' 'MemAvailable: 9519836 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888276 kB' 'Inactive: 4107020 kB' 'Active(anon): 125860 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762416 kB' 'Inactive(file): 4105228 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135436 kB' 'Mapped: 72288 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354472 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90472 kB' 'KernelStack: 4340 kB' 'PageTables: 3020 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 602476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
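The arithmetic guards at hugepages.sh@107-109 are the actual assertion of this step: the configured page count must equal the kernel's reported total once surplus and reserved pages are folded in. A sketch of the same consistency check against /proc/meminfo, assuming the 1024-page target from this run (the variable names are illustrative, not the script's):

    # Sketch of the accounting check, under the assumption expected=1024.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    if (( expected == total + surp + rsvd )) && (( expected == total )); then
        echo "hugepage accounting consistent: total=$total rsvd=$rsvd surp=$surp"
    else
        echo "unexpected hugepage state: total=$total rsvd=$rsvd surp=$surp" >&2
    fi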
00:05:46.023 02:29:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.023 02:29:10 -- setup/common.sh@32 -- # continue
[... MemFree through CmaFree each fail the HugePages_Total match and continue ...]
00:05:46.023 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.023 02:29:11 -- setup/common.sh@33 -- # echo 1024
00:05:46.023 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.023 02:29:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.023 02:29:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.023 02:29:11 -- setup/hugepages.sh@27 -- # local node
00:05:46.023 02:29:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.023 02:29:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:46.023 02:29:11 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:46.023 02:29:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:46.023 02:29:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:46.023 02:29:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:46.023 02:29:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:46.023 02:29:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.023 02:29:11 -- setup/common.sh@18 -- # local node=0
00:05:46.023 02:29:11 -- setup/common.sh@19 -- # local var val
00:05:46.023 02:29:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.023 02:29:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.023 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:46.023 02:29:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:46.023 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.023 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.023 02:29:11 -- setup/common.sh@31 -- # IFS=': '
00:05:46.023 02:29:11 -- setup/common.sh@31 -- # read -r var val _
00:05:46.023 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3710256 kB' 'MemUsed: 8540832 kB' 'Active: 1888436 kB' 'Inactive: 4107020 kB' 'Active(anon): 126020 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762416 kB' 'Inactive(file): 4105228 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'FilePages: 5878460 kB' 'Mapped: 72288 kB' 'AnonPages: 135208 kB' 'Shmem: 2616 kB' 'KernelStack: 4408 kB' 'PageTables: 3020 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 264000 kB' 'Slab: 354472 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
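From here the same key scan runs per NUMA node: given a node argument, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the script strips with the extglob expansion ${mem[@]#Node +([0-9]) } visible in the trace. A sketch of that per-node variant, assuming a single-node layout like this VM's (the loop body is illustrative):

    # Sketch of the per-node lookup: same scan, but against the node-local
    # meminfo copy, whose lines are prefixed with "Node <N>".
    shopt -s extglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        mapfile -t mem < "$node_dir/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == HugePages_Total ]] && echo "node$node: $val huge pages"
        done
    done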
00:05:46.023 02:29:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.023 02:29:11 -- setup/common.sh@32 -- # continue
[... MemFree through HugePages_Free each fail the HugePages_Surp match and continue ...]
00:05:46.024 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.024 02:29:11 -- setup/common.sh@33 -- # echo 0
00:05:46.024 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.024 02:29:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:46.024 02:29:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:46.024 02:29:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:46.024 02:29:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:46.024 02:29:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:46.024 node0=1024 expecting 1024
00:05:46.024 02:29:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:46.024 02:29:11 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:46.024 02:29:11 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:46.024 02:29:11 -- setup/hugepages.sh@202 -- # setup output
00:05:46.024 02:29:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:46.024 02:29:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:46.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:46.282 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.282 INFO: Requested 512 hugepages but 1024 already allocated on node0
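The INFO line above is the idempotent path in scripts/setup.sh: 512 pages were requested with CLEAR_HUGE=no, node0 already holds 1024, so the allocation is left untouched. A sketch of that grow-only policy against the standard sysfs knob (this illustrates the observed behavior under stated assumptions; it is not the setup.sh source):

    # Sketch: with CLEAR_HUGE=no, an existing allocation >= NRHUGE is kept.
    NRHUGE=${NRHUGE:-512}
    nr_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$nr_path")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr_path"    # needs root
    fi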
00:05:46.282 02:29:11 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:46.282 02:29:11 -- setup/hugepages.sh@89 -- # local node
00:05:46.282 02:29:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.282 02:29:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.282 02:29:11 -- setup/hugepages.sh@92 -- # local surp
00:05:46.282 02:29:11 -- setup/hugepages.sh@93 -- # local resv
00:05:46.282 02:29:11 -- setup/hugepages.sh@94 -- # local anon
00:05:46.282 02:29:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.282 02:29:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.282 02:29:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.282 02:29:11 -- setup/common.sh@18 -- # local node=
00:05:46.282 02:29:11 -- setup/common.sh@19 -- # local var val
00:05:46.282 02:29:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.282 02:29:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.282 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.282 02:29:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.282 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.282 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.282 02:29:11 -- setup/common.sh@31 -- # IFS=': '
00:05:46.282 02:29:11 -- setup/common.sh@31 -- # read -r var val _
00:05:46.282 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709044 kB' 'MemAvailable: 9518616 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888460 kB' 'Inactive: 4107008 kB' 'Active(anon): 126032 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762428 kB' 'Inactive(file): 4105216 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135772 kB' 'Mapped: 72336 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354400 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90400 kB' 'KernelStack: 4372 kB' 'PageTables: 3136 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 598788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
00:05:46.282 02:29:11 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits continue)
00:05:46.545 02:29:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.545 02:29:11 -- setup/common.sh@33 -- # echo 0
00:05:46.545 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.545 02:29:11 -- setup/hugepages.sh@97 -- # anon=0
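Every get_meminfo call traced here has the same shape: read "Key: value" pairs with IFS=': ', skip non-matching keys with continue, and echo the value of the requested key. A standalone sketch of that pattern, assuming only bash plus the stock /proc and sysfs meminfo files (a simplification; the real common.sh builds a mem array with mapfile and an extglob prefix strip):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above.
get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every key with "Node N " -- strip it first.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                # e.g. 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}
get_meminfo HugePages_Total            # global view
get_meminfo HugePages_Surp 0           # node-0 view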
00:05:46.545 02:29:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.545 02:29:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.545 02:29:11 -- setup/common.sh@18 -- # local node=
00:05:46.545 02:29:11 -- setup/common.sh@19 -- # local var val
00:05:46.545 02:29:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.545 02:29:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.545 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.545 02:29:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.545 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.545 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.545 02:29:11 -- setup/common.sh@31 -- # IFS=': '
00:05:46.545 02:29:11 -- setup/common.sh@31 -- # read -r var val _
00:05:46.545 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709068 kB' 'MemAvailable: 9518640 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888340 kB' 'Inactive: 4107008 kB' 'Active(anon): 125912 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762428 kB' 'Inactive(file): 4105216 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135440 kB' 'Mapped: 72240 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354432 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90432 kB' 'KernelStack: 4280 kB' 'PageTables: 2860 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 598788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
00:05:46.545 02:29:11 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue)
00:05:46.547 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.547 02:29:11 -- setup/common.sh@33 -- # echo 0
00:05:46.547 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.547 02:29:11 -- setup/hugepages.sh@99 -- # surp=0
00:05:46.547 02:29:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:46.547 02:29:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:46.547 02:29:11 -- setup/common.sh@18 -- # local node=
00:05:46.547 02:29:11 -- setup/common.sh@19 -- # local var val
00:05:46.547 02:29:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.547 02:29:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.547 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.547 02:29:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.547 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.547 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.547 02:29:11 -- setup/common.sh@31 -- # IFS=': '
00:05:46.547 02:29:11 -- setup/common.sh@31 -- # read -r var val _
00:05:46.547 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709068 kB' 'MemAvailable: 9518640 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888320 kB' 'Inactive: 4107008 kB' 'Active(anon): 125892 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762428 kB' 'Inactive(file): 4105216 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135344 kB' 'Mapped: 72288 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354456 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90456 kB' 'KernelStack: 4216 kB' 'PageTables: 2764 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 608364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
00:05:46.547 02:29:11 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue)
00:05:46.548 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.548 02:29:11 -- setup/common.sh@33 -- # echo 0
00:05:46.548 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.548 02:29:11 -- setup/hugepages.sh@100 -- # resv=0
00:05:46.548 nr_hugepages=1024
00:05:46.548 02:29:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:46.548 resv_hugepages=0
00:05:46.548 surplus_hugepages=0
00:05:46.548 02:29:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.548 02:29:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.548 anon_hugepages=0
00:05:46.548 02:29:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.548 02:29:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.548 02:29:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
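The checks at hugepages.sh@107 and @109 assert that the system-wide figures are consistent: HugePages_Total must equal the expected pool (nr_hugepages) plus surplus pages (allocated beyond the static pool under overcommit) plus reserved pages (promised to mappings but not yet faulted in). A sketch of the same accounting, reusing the get_meminfo helper sketched earlier (hypothetical names, not the test's own code):

#!/usr/bin/env bash
# Sketch of the @107/@109 consistency check; assumes the get_meminfo
# function from the earlier sketch is in scope.
nr_hugepages=1024                      # the expected static pool
surp=$(get_meminfo HugePages_Surp)     # surplus pages beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)     # reserved for mappings, not yet faulted in
total=$(get_meminfo HugePages_Total)
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting mismatch: total=$total" >&2
    exit 1
}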
00:05:46.548 02:29:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.548 02:29:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.548 02:29:11 -- setup/common.sh@18 -- # local node=
00:05:46.548 02:29:11 -- setup/common.sh@19 -- # local var val
00:05:46.548 02:29:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.548 02:29:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.548 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.548 02:29:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.548 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.548 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.548 02:29:11 -- setup/common.sh@31 -- # IFS=': '
00:05:46.548 02:29:11 -- setup/common.sh@31 -- # read -r var val _
00:05:46.549 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709068 kB' 'MemAvailable: 9518640 kB' 'Buffers: 42100 kB' 'Cached: 5836360 kB' 'SwapCached: 0 kB' 'Active: 1888320 kB' 'Inactive: 4107008 kB' 'Active(anon): 125892 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762428 kB' 'Inactive(file): 4105216 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 135216 kB' 'Mapped: 72288 kB' 'Shmem: 2616 kB' 'KReclaimable: 264000 kB' 'Slab: 354456 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90456 kB' 'KernelStack: 4284 kB' 'PageTables: 2764 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 607916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 2979840 kB' 'DirectMap1G: 11534336 kB'
00:05:46.549 02:29:11 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through CmaFree fails the HugePages_Total match and hits continue)
00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.550 02:29:11 -- setup/common.sh@33 -- # echo 1024
00:05:46.550 02:29:11 -- setup/common.sh@33 -- # return 0
00:05:46.550 02:29:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.550 02:29:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.550 02:29:11 -- setup/hugepages.sh@27 -- # local node
00:05:46.550 02:29:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.550 02:29:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:46.550 02:29:11 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:46.550 02:29:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
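get_nodes found a single NUMA node, so the per-node pass that follows reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; the per-node hugepage totals should sum to the global figure. A quick cross-check sketch under the same sysfs-layout assumption (my addition, not part of the test):

#!/usr/bin/env bash
# Sketch: per-node HugePages_Total entries should sum to the global total.
global=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node*/meminfo; do
    n=$(awk '/HugePages_Total:/ {print $NF}' "$f")
    sum=$((sum + n))
done
echo "global=$global per-node sum=$sum"
(( sum == global )) || { echo "per-node hugepage totals disagree" >&2; exit 1; }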
# mem_f=/proc/meminfo 00:05:46.550 02:29:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.550 02:29:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.550 02:29:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.550 02:29:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.550 02:29:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251088 kB' 'MemFree: 3709352 kB' 'MemUsed: 8541736 kB' 'Active: 1888216 kB' 'Inactive: 4107008 kB' 'Active(anon): 125788 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1762428 kB' 'Inactive(file): 4105216 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'FilePages: 5878460 kB' 'Mapped: 72288 kB' 'AnonPages: 134756 kB' 'Shmem: 2616 kB' 'KernelStack: 4320 kB' 'PageTables: 2720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 264000 kB' 'Slab: 354456 kB' 'SReclaimable: 264000 kB' 'SUnreclaim: 90456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.550 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.550 02:29:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # continue 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.551 02:29:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.551 02:29:11 -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.551 02:29:11 -- setup/common.sh@33 -- # echo 0 00:05:46.551 02:29:11 -- setup/common.sh@33 -- # return 0 00:05:46.551 02:29:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.551 02:29:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.551 02:29:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.551 02:29:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.551 02:29:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:46.551 node0=1024 expecting 1024 00:05:46.551 02:29:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:46.551 00:05:46.551 real 0m1.312s 00:05:46.551 user 0m0.514s 00:05:46.551 sys 0m0.854s 00:05:46.552 02:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.552 02:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.552 ************************************ 00:05:46.552 END TEST no_shrink_alloc 00:05:46.552 ************************************ 00:05:46.552 02:29:11 -- setup/hugepages.sh@217 -- # clear_hp 00:05:46.552 02:29:11 -- setup/hugepages.sh@37 -- # local node hp 00:05:46.552 02:29:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:46.552 02:29:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:46.552 02:29:11 -- setup/hugepages.sh@41 -- # echo 0 00:05:46.552 02:29:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:46.552 02:29:11 -- setup/hugepages.sh@41 -- # echo 0 00:05:46.552 02:29:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:46.552 02:29:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:46.552 00:05:46.552 real 0m6.484s 00:05:46.552 user 0m2.069s 00:05:46.552 sys 0m4.541s 00:05:46.552 02:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.552 02:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.552 ************************************ 00:05:46.552 END TEST hugepages 00:05:46.552 ************************************ 00:05:46.552 02:29:11 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:46.552 02:29:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.552 02:29:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.552 02:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.552 ************************************ 00:05:46.552 START TEST driver 00:05:46.552 ************************************ 00:05:46.552 02:29:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:46.552 * Looking for test storage... 
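The long run of pattern checks and "continue" steps above is get_meminfo scanning /proc/meminfo (or a per-node meminfo file) one "Key: value" pair at a time until it reaches the requested key, then echoing that key's value. A minimal standalone sketch of that lookup — the function name is illustrative and the body is simplified (the real helper uses mapfile), but the IFS=': ' / read -r idiom is the one visible in the trace:

# Sketch of the get_meminfo lookup traced above (name and shape are mine).
# Per-node queries read that node's meminfo and strip the "Node N " prefix.
get_meminfo_sketch() {
  local get=$1 node=${2:-} var val _
  local mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}
get_meminfo_sketch HugePages_Total    # prints 1024 on this box
get_meminfo_sketch HugePages_Surp 0   # prints 0, as echoed above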
00:05:46.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:46.552 02:29:11 -- setup/driver.sh@68 -- # setup reset 00:05:46.552 02:29:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:46.552 02:29:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.119 02:29:12 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:47.119 02:29:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.119 02:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.119 02:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:47.119 ************************************ 00:05:47.119 START TEST guess_driver 00:05:47.119 ************************************ 00:05:47.119 02:29:12 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:47.119 02:29:12 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:47.119 02:29:12 -- setup/driver.sh@47 -- # local fail=0 00:05:47.119 02:29:12 -- setup/driver.sh@49 -- # pick_driver 00:05:47.119 02:29:12 -- setup/driver.sh@36 -- # vfio 00:05:47.119 02:29:12 -- setup/driver.sh@21 -- # local iommu_grups 00:05:47.119 02:29:12 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:47.119 02:29:12 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:47.119 02:29:12 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:47.120 02:29:12 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:47.120 02:29:12 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:47.120 02:29:12 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:47.120 02:29:12 -- setup/driver.sh@32 -- # return 1 00:05:47.120 02:29:12 -- setup/driver.sh@38 -- # uio 00:05:47.120 02:29:12 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:05:47.120 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:47.120 02:29:12 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:47.120 Looking for driver=uio_pci_generic 00:05:47.120 02:29:12 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:47.120 02:29:12 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:47.120 02:29:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.120 02:29:12 -- setup/driver.sh@45 -- # setup output config 00:05:47.120 02:29:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.120 02:29:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.378 02:29:12 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:47.378 02:29:12 -- setup/driver.sh@58 -- # continue 00:05:47.378 02:29:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.636 02:29:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.637 02:29:12 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:47.637 02:29:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.570 02:29:13 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:48.570 02:29:13 -- setup/driver.sh@65 -- # setup reset 
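The pick_driver sequence above is a two-step fallback: vfio-pci is eligible only when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise uio_pci_generic is accepted if modprobe can resolve its module chain. A condensed sketch of that decision, using the same sysfs paths as the trace:

# Sketch of the vfio -> uio_pci_generic fallback traced above.
shopt -s nullglob   # empty glob -> empty array, matching the (( 0 > 0 )) check
pick_driver_sketch() {
  local unsafe_vfio=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  local iommu_groups=(/sys/kernel/iommu_groups/*)
  if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
    echo vfio-pci; return
  fi
  # modprobe --show-depends succeeds (and lists .ko paths) iff the module resolves.
  if modprobe --show-depends uio_pci_generic &>/dev/null; then
    echo uio_pci_generic; return
  fi
  echo 'No valid driver found'
}
pick_driver_sketch   # -> uio_pci_generic on this VM (no IOMMU groups)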
00:05:48.570 02:29:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:48.571 02:29:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.137 00:05:49.137 real 0m1.900s 00:05:49.137 user 0m0.458s 00:05:49.137 sys 0m1.401s 00:05:49.137 02:29:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.137 02:29:13 -- common/autotest_common.sh@10 -- # set +x 00:05:49.137 ************************************ 00:05:49.137 END TEST guess_driver 00:05:49.137 ************************************ 00:05:49.137 00:05:49.137 real 0m2.447s 00:05:49.137 user 0m0.748s 00:05:49.137 sys 0m1.674s 00:05:49.137 02:29:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.137 02:29:13 -- common/autotest_common.sh@10 -- # set +x 00:05:49.137 ************************************ 00:05:49.137 END TEST driver 00:05:49.137 ************************************ 00:05:49.137 02:29:14 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.137 02:29:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.137 02:29:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.137 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.137 ************************************ 00:05:49.137 START TEST devices 00:05:49.137 ************************************ 00:05:49.137 02:29:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.137 * Looking for test storage... 00:05:49.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:49.137 02:29:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:49.137 02:29:14 -- setup/devices.sh@192 -- # setup reset 00:05:49.137 02:29:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.137 02:29:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.703 02:29:14 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:49.703 02:29:14 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:49.703 02:29:14 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:49.703 02:29:14 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:49.703 02:29:14 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:49.703 02:29:14 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:49.703 02:29:14 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:49.703 02:29:14 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.703 02:29:14 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:49.703 02:29:14 -- setup/devices.sh@196 -- # blocks=() 00:05:49.703 02:29:14 -- setup/devices.sh@196 -- # declare -a blocks 00:05:49.703 02:29:14 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:49.703 02:29:14 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:49.703 02:29:14 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:49.703 02:29:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:49.703 02:29:14 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:49.703 02:29:14 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:49.703 02:29:14 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:49.703 02:29:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:49.703 02:29:14 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:49.703 02:29:14 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:49.703 02:29:14 -- 
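get_zoned_devs above screens out zoned namespaces before the capacity and GPT checks that follow: any block device whose queue/zoned attribute is not "none" is excluded from the test-disk candidates. Roughly, under the same sysfs layout (simplified to a flat list rather than the script's associative array):

# Sketch of the zoned-namespace filter traced above.
list_usable_nvme() {
  local nvme zoned
  for nvme in /sys/block/nvme*; do
    zoned=none
    [[ -e $nvme/queue/zoned ]] && zoned=$(<"$nvme/queue/zoned")
    [[ $zoned != none ]] && continue   # host-aware/host-managed: skip
    echo "/dev/${nvme##*/}"
  done
}
list_usable_nvme   # -> /dev/nvme0n1 here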
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:49.703 No valid GPT data, bailing 00:05:49.703 02:29:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:49.703 02:29:14 -- scripts/common.sh@393 -- # pt= 00:05:49.703 02:29:14 -- scripts/common.sh@394 -- # return 1 00:05:49.703 02:29:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:49.703 02:29:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:49.703 02:29:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:49.703 02:29:14 -- setup/common.sh@80 -- # echo 5368709120 00:05:49.703 02:29:14 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:49.703 02:29:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:49.703 02:29:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:49.703 02:29:14 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:49.703 02:29:14 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:49.703 02:29:14 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:49.703 02:29:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.703 02:29:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.703 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.703 ************************************ 00:05:49.703 START TEST nvme_mount 00:05:49.703 ************************************ 00:05:49.703 02:29:14 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:49.703 02:29:14 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:49.703 02:29:14 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:49.703 02:29:14 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.703 02:29:14 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.703 02:29:14 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:49.703 02:29:14 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.703 02:29:14 -- setup/common.sh@40 -- # local part_no=1 00:05:49.703 02:29:14 -- setup/common.sh@41 -- # local size=1073741824 00:05:49.703 02:29:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.703 02:29:14 -- setup/common.sh@44 -- # parts=() 00:05:49.703 02:29:14 -- setup/common.sh@44 -- # local parts 00:05:49.703 02:29:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.703 02:29:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.703 02:29:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.703 02:29:14 -- setup/common.sh@46 -- # (( part++ )) 00:05:49.703 02:29:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.703 02:29:14 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:49.703 02:29:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.703 02:29:14 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:50.742 Creating new GPT entries in memory. 00:05:50.742 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:50.742 other utilities. 00:05:50.742 02:29:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.742 02:29:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.742 02:29:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:50.742 02:29:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.742 02:29:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.119 Creating new GPT entries in memory. 00:05:52.119 The operation has completed successfully. 00:05:52.119 02:29:16 -- setup/common.sh@57 -- # (( part++ )) 00:05:52.119 02:29:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.119 02:29:16 -- setup/common.sh@62 -- # wait 109941 00:05:52.119 02:29:16 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.119 02:29:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:52.119 02:29:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.119 02:29:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:52.119 02:29:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:52.119 02:29:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.119 02:29:16 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.119 02:29:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:52.119 02:29:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:52.119 02:29:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.119 02:29:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.119 02:29:16 -- setup/devices.sh@53 -- # local found=0 00:05:52.119 02:29:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:52.119 02:29:16 -- setup/devices.sh@56 -- # : 00:05:52.119 02:29:16 -- setup/devices.sh@59 -- # local pci status 00:05:52.119 02:29:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.119 02:29:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:52.120 02:29:16 -- setup/devices.sh@47 -- # setup output config 00:05:52.120 02:29:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.120 02:29:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.120 02:29:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.120 02:29:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:52.120 02:29:17 -- setup/devices.sh@63 -- # found=1 00:05:52.120 02:29:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.120 02:29:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.120 02:29:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.120 02:29:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.120 02:29:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.497 02:29:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.497 02:29:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:53.497 02:29:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/devices.sh@73 -- # 
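The sequence above — sgdisk zap, one small partition, mkfs.ext4 -qF, mount, then a dummy test file — is the whole nvme_mount setup. Condensed into plain commands, with device and mount paths taken from the trace:

# Condensed nvme_mount setup, as traced above.
disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all            # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191  # one small test partition
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"               # -q quiet, -F force (no prompt)
mount "$part" "$mnt"
touch "$mnt/test_nvme"              # marker file that the verify step looks for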
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.497 02:29:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.497 02:29:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:53.497 02:29:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.497 02:29:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.497 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.497 02:29:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.497 02:29:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:53.497 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:53.497 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:53.497 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:53.497 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:53.497 02:29:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:53.497 02:29:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:53.497 02:29:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:53.497 02:29:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:53.497 02:29:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.497 02:29:18 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:53.497 02:29:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:53.497 02:29:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.497 02:29:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.497 02:29:18 -- setup/devices.sh@53 -- # local found=0 00:05:53.497 02:29:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.497 02:29:18 -- setup/devices.sh@56 -- # : 00:05:53.497 02:29:18 -- setup/devices.sh@59 -- # local pci status 00:05:53.497 02:29:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:53.497 02:29:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.497 02:29:18 -- setup/devices.sh@47 -- # setup output config 00:05:53.497 02:29:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.497 02:29:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.497 02:29:18 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.497 02:29:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:53.497 02:29:18 -- setup/devices.sh@63 -- # found=1 00:05:53.497 02:29:18 -- setup/devices.sh@60 -- # read -r pci 
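cleanup_nvme, whose wipefs output appears above, is deliberately thorough: unmount first, wipe the partition's filesystem signature (the ext4 magic at offset 0x438), then wipe the whole disk's GPT headers and protective MBR so the next test starts from a blank device. In short:

# Sketch of the cleanup_nvme teardown traced above.
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # ext4 signature
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # GPT + PMBR; kernel re-reads the table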
_ _ status 00:05:53.497 02:29:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.497 02:29:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.756 02:29:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.756 02:29:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.689 02:29:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.689 02:29:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:54.689 02:29:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.689 02:29:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:54.689 02:29:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:54.689 02:29:19 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.689 02:29:19 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:54.689 02:29:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:54.689 02:29:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:54.689 02:29:19 -- setup/devices.sh@50 -- # local mount_point= 00:05:54.689 02:29:19 -- setup/devices.sh@51 -- # local test_file= 00:05:54.689 02:29:19 -- setup/devices.sh@53 -- # local found=0 00:05:54.689 02:29:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.689 02:29:19 -- setup/devices.sh@59 -- # local pci status 00:05:54.689 02:29:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.689 02:29:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:54.689 02:29:19 -- setup/devices.sh@47 -- # setup output config 00:05:54.689 02:29:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.689 02:29:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.947 02:29:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.947 02:29:19 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:54.947 02:29:19 -- setup/devices.sh@63 -- # found=1 00:05:54.947 02:29:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.947 02:29:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.947 02:29:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.947 02:29:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.947 02:29:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.860 02:29:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.860 02:29:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:56.860 02:29:21 -- setup/devices.sh@68 -- # return 0 00:05:56.860 02:29:21 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:56.860 02:29:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.860 02:29:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.860 02:29:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:56.860 02:29:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:56.860 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:56.860 00:05:56.860 real 0m6.899s 00:05:56.860 user 0m0.748s 00:05:56.860 sys 0m4.003s 00:05:56.860 ************************************ 00:05:56.860 END TEST nvme_mount 00:05:56.860 
************************************ 00:05:56.860 02:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.860 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.860 02:29:21 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:56.860 02:29:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.860 02:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.860 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.860 ************************************ 00:05:56.860 START TEST dm_mount 00:05:56.860 ************************************ 00:05:56.860 02:29:21 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:56.860 02:29:21 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:56.860 02:29:21 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:56.860 02:29:21 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:56.860 02:29:21 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:56.860 02:29:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:56.860 02:29:21 -- setup/common.sh@40 -- # local part_no=2 00:05:56.860 02:29:21 -- setup/common.sh@41 -- # local size=1073741824 00:05:56.860 02:29:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:56.860 02:29:21 -- setup/common.sh@44 -- # parts=() 00:05:56.860 02:29:21 -- setup/common.sh@44 -- # local parts 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.860 02:29:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part++ )) 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.860 02:29:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part++ )) 00:05:56.860 02:29:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.860 02:29:21 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:56.860 02:29:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:56.860 02:29:21 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:57.794 Creating new GPT entries in memory. 00:05:57.794 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:57.794 other utilities. 00:05:57.794 02:29:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:57.794 02:29:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.794 02:29:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:57.794 02:29:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:57.794 02:29:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:58.729 Creating new GPT entries in memory. 00:05:58.729 The operation has completed successfully. 00:05:58.729 02:29:23 -- setup/common.sh@57 -- # (( part++ )) 00:05:58.729 02:29:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:58.729 02:29:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:58.729 02:29:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:58.729 02:29:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:00.104 The operation has completed successfully. 
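With two equal partitions in place, the trace that follows creates a device-mapper target named nvme_dm_test and waits for /dev/mapper/nvme_dm_test to resolve to dm-0. The table itself is piped in and never echoed, so the snippet below is a hypothetical reconstruction — a linear concatenation of the two partitions is the plausible shape — not a copy of what the script actually feeds dmsetup:

# Hypothetical dm table: concatenate the two test partitions into one device.
# Lengths are in 512-byte sectors; adjust if the disk uses 4K logical sectors.
dmsetup create nvme_dm_test <<'EOF'
0      262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF
# Each row: <logical start> <length> linear <backing device> <backing offset>.
# The result appears as /dev/mapper/nvme_dm_test, a symlink to /dev/dm-0.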
00:06:00.104 02:29:24 -- setup/common.sh@57 -- # (( part++ )) 00:06:00.104 02:29:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:00.104 02:29:24 -- setup/common.sh@62 -- # wait 110436 00:06:00.104 02:29:24 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:00.104 02:29:24 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.104 02:29:24 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:00.104 02:29:24 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:00.104 02:29:24 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:00.104 02:29:24 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.104 02:29:24 -- setup/devices.sh@161 -- # break 00:06:00.104 02:29:24 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.104 02:29:24 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:00.104 02:29:24 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:00.104 02:29:24 -- setup/devices.sh@166 -- # dm=dm-0 00:06:00.104 02:29:24 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:00.104 02:29:24 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:00.104 02:29:24 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.104 02:29:24 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:00.104 02:29:24 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.104 02:29:24 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.104 02:29:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:00.104 02:29:24 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.104 02:29:24 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:00.104 02:29:24 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:00.104 02:29:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:00.104 02:29:24 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.104 02:29:24 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:00.104 02:29:24 -- setup/devices.sh@53 -- # local found=0 00:06:00.104 02:29:24 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:00.104 02:29:24 -- setup/devices.sh@56 -- # : 00:06:00.104 02:29:24 -- setup/devices.sh@59 -- # local pci status 00:06:00.105 02:29:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.105 02:29:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:00.105 02:29:24 -- setup/devices.sh@47 -- # setup output config 00:06:00.105 02:29:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.105 02:29:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.105 02:29:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.105 02:29:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:00.105 02:29:25 -- setup/devices.sh@63 -- # found=1 00:06:00.105 02:29:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.105 02:29:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.105 02:29:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.105 02:29:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.105 02:29:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.505 02:29:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:01.505 02:29:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:01.505 02:29:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.505 02:29:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:01.505 02:29:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:01.505 02:29:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.505 02:29:26 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:01.505 02:29:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:01.505 02:29:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:01.505 02:29:26 -- setup/devices.sh@50 -- # local mount_point= 00:06:01.505 02:29:26 -- setup/devices.sh@51 -- # local test_file= 00:06:01.505 02:29:26 -- setup/devices.sh@53 -- # local found=0 00:06:01.505 02:29:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:01.505 02:29:26 -- setup/devices.sh@59 -- # local pci status 00:06:01.505 02:29:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.505 02:29:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:01.505 02:29:26 -- setup/devices.sh@47 -- # setup output config 00:06:01.505 02:29:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.505 02:29:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:01.505 02:29:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.505 02:29:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:01.505 02:29:26 -- setup/devices.sh@63 -- # found=1 00:06:01.505 02:29:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.505 02:29:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.505 02:29:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.505 02:29:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.505 02:29:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.905 02:29:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:02.905 02:29:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:02.905 02:29:27 -- setup/devices.sh@68 -- # return 0 00:06:02.905 02:29:27 -- setup/devices.sh@187 -- # cleanup_dm 00:06:02.905 02:29:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:02.905 02:29:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:02.905 02:29:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:02.905 02:29:27 -- 
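The cleanup_dm pass that starts above and finishes below inverts the setup order: drop the mapper target first (nothing may still hold the partitions), then wipe each backing partition. Equivalent commands:

# Sketch of cleanup_dm, matching the trace around this point.
mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2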
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:02.905 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:02.905 02:29:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:02.905 00:06:02.905 real 0m6.285s 00:06:02.905 user 0m0.451s 00:06:02.905 sys 0m2.585s 00:06:02.905 02:29:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.905 02:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:02.905 ************************************ 00:06:02.905 END TEST dm_mount 00:06:02.905 ************************************ 00:06:02.905 02:29:27 -- setup/devices.sh@1 -- # cleanup 00:06:02.905 02:29:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:02.905 02:29:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.905 02:29:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:02.905 02:29:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:02.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:02.905 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:02.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:02.905 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:02.905 02:29:27 -- setup/devices.sh@12 -- # cleanup_dm 00:06:02.905 02:29:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:02.905 02:29:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:02.905 02:29:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.905 02:29:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:03.164 00:06:03.164 real 0m13.988s 00:06:03.164 user 0m1.630s 00:06:03.164 sys 0m6.893s 00:06:03.164 02:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.164 02:29:28 -- common/autotest_common.sh@10 -- # set +x 00:06:03.164 ************************************ 00:06:03.164 END TEST devices 00:06:03.164 ************************************ 00:06:03.164 00:06:03.164 real 0m27.920s 00:06:03.164 user 0m6.143s 00:06:03.164 sys 0m16.443s 00:06:03.164 02:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.164 02:29:28 -- common/autotest_common.sh@10 -- # set +x 00:06:03.164 ************************************ 00:06:03.164 END TEST setup.sh 00:06:03.164 ************************************ 00:06:03.164 02:29:28 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:03.164 Hugepages 00:06:03.164 node hugesize free / total 00:06:03.164 node0 1048576kB 0 / 0 00:06:03.164 node0 2048kB 2048 / 2048 00:06:03.164 00:06:03.164 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:03.422 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:03.422 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:03.422 02:29:28 -- spdk/autotest.sh@141 -- # uname -s 00:06:03.422 02:29:28 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:06:03.422 02:29:28 -- spdk/autotest.sh@143 -- # 
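The "node hugesize free / total" table printed above comes straight from per-node sysfs counters; a loop like the one below reproduces it, assuming the standard hugepages sysfs layout:

# Rebuild the hugepages status table from sysfs.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
    printf '%s %s %s / %s\n' "${node##*/}" "$size" \
      "$(<"$hp"/free_hugepages)" "$(<"$hp"/nr_hugepages)"
  done
done
# On this VM: node0 1048576kB 0 / 0 and node0 2048kB 2048 / 2048.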
nvme_namespace_revert 00:06:03.422 02:29:28 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:03.940 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.876 02:29:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:06.252 02:29:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:06.252 02:29:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:06.252 02:29:30 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:06.252 02:29:30 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:06.252 02:29:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:06.252 02:29:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:06.252 02:29:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.252 02:29:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.252 02:29:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:06.252 02:29:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:06.252 02:29:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:06.252 02:29:30 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:06.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:06.252 Waiting for block devices as requested 00:06:06.511 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.511 02:29:31 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:06.511 02:29:31 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:06:06.511 02:29:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:06.511 02:29:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:06.511 02:29:31 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:06.511 02:29:31 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:06.511 02:29:31 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:06:06.511 02:29:31 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:06.511 02:29:31 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:06.511 02:29:31 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:06.511 02:29:31 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:06.511 02:29:31 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:06.511 02:29:31 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:06.511 02:29:31 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:06.511 02:29:31 -- common/autotest_common.sh@1542 -- # continue 00:06:06.511 02:29:31 
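The oacs/unvmcap probing above gates the namespace-revert step: OACS is the Optional Admin Command Support bitmask from nvme id-ctrl, bit 3 (mask 0x8) indicates Namespace Management support (0x12a has it set), and an unvmcap of 0 means there is no unallocated capacity left to reclaim. The same check in isolation, assuming nvme-cli is installed:

# Does this controller support Namespace Management? (OACS bit 3, mask 0x8)
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)      # ' 0x12a' here
(( oacs & 0x8 )) && echo "namespace management: supported"
# Unallocated NVM capacity; ' 0' means nothing needs reverting.
nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2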
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:06.511 02:29:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:06.511 02:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:06.511 02:29:31 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:06.511 02:29:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:06.511 02:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:06.511 02:29:31 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:07.029 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.964 02:29:33 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:07.964 02:29:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:07.964 02:29:33 -- common/autotest_common.sh@10 -- # set +x 00:06:08.224 02:29:33 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:08.224 02:29:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:08.224 02:29:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:08.224 02:29:33 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:08.224 02:29:33 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:08.224 02:29:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:08.224 02:29:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:08.224 02:29:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:08.224 02:29:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:08.224 02:29:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:08.224 02:29:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:08.224 02:29:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:08.224 02:29:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:08.224 02:29:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:08.224 02:29:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:08.224 02:29:33 -- common/autotest_common.sh@1565 -- # device=0x0010 00:06:08.224 02:29:33 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.224 02:29:33 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:06:08.224 02:29:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:08.224 02:29:33 -- common/autotest_common.sh@1578 -- # return 0 00:06:08.224 02:29:33 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:06:08.224 02:29:33 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:08.224 02:29:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.224 02:29:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.224 02:29:33 -- common/autotest_common.sh@10 -- # set +x 00:06:08.224 ************************************ 00:06:08.224 START TEST unittest 00:06:08.224 ************************************ 00:06:08.224 02:29:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:08.224 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:08.224 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:08.224 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:08.224 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:08.224 ++ readlink -f 
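opal_revert_cleanup above only acts on controllers whose PCI device ID is 0x0a54; the emulated controller here reports 0x0010, so the bdf list stays empty and the revert is skipped. The filter reduces to the loop below, with the jq query over gen_nvme.sh output taken from the trace:

# Keep only NVMe bdfs whose PCI device ID matches the target (0x0a54).
want=0x0a54
for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh |
             jq -r '.config[].params.traddr'); do
  [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
done
# Here the lone controller reports 0x0010, so nothing is printed.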
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:08.224 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:08.224 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:08.224 ++ rpc_py=rpc_cmd 00:06:08.224 ++ set -e 00:06:08.224 ++ shopt -s nullglob 00:06:08.224 ++ shopt -s extglob 00:06:08.224 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:08.224 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:08.224 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.224 +++ CONFIG_FIO_PLUGIN=y 00:06:08.224 +++ CONFIG_NVME_CUSE=y 00:06:08.224 +++ CONFIG_RAID5F=y 00:06:08.224 +++ CONFIG_LTO=n 00:06:08.224 +++ CONFIG_SMA=n 00:06:08.224 +++ CONFIG_ISAL=y 00:06:08.224 +++ CONFIG_OPENSSL_PATH= 00:06:08.224 +++ CONFIG_IDXD_KERNEL=n 00:06:08.224 +++ CONFIG_URING_PATH= 00:06:08.224 +++ CONFIG_DAOS=n 00:06:08.224 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:08.224 +++ CONFIG_OCF=n 00:06:08.224 +++ CONFIG_EXAMPLES=y 00:06:08.224 +++ CONFIG_RDMA_PROV=verbs 00:06:08.224 +++ CONFIG_ISCSI_INITIATOR=y 00:06:08.224 +++ CONFIG_VTUNE=n 00:06:08.224 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:08.224 +++ CONFIG_CET=n 00:06:08.224 +++ CONFIG_TESTS=y 00:06:08.224 +++ CONFIG_APPS=y 00:06:08.224 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.224 +++ CONFIG_DAOS_DIR= 00:06:08.224 +++ CONFIG_CRYPTO_MLX5=n 00:06:08.224 +++ CONFIG_XNVME=n 00:06:08.224 +++ CONFIG_UNIT_TESTS=y 00:06:08.224 +++ CONFIG_FUSE=n 00:06:08.224 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.224 +++ CONFIG_OCF_PATH= 00:06:08.224 +++ CONFIG_WPDK_DIR= 00:06:08.224 +++ CONFIG_VFIO_USER=n 00:06:08.224 +++ CONFIG_MAX_LCORES= 00:06:08.224 +++ CONFIG_ARCH=native 00:06:08.224 +++ CONFIG_TSAN=n 00:06:08.224 +++ CONFIG_VIRTIO=y 00:06:08.224 +++ CONFIG_IPSEC_MB=n 00:06:08.224 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.224 +++ CONFIG_ASAN=y 00:06:08.224 +++ CONFIG_SHARED=n 00:06:08.224 +++ CONFIG_VTUNE_DIR= 00:06:08.224 +++ CONFIG_RDMA_SET_TOS=y 00:06:08.224 +++ CONFIG_VBDEV_COMPRESS=n 00:06:08.224 +++ CONFIG_VFIO_USER_DIR= 00:06:08.224 +++ CONFIG_FUZZER_LIB= 00:06:08.224 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:08.224 +++ CONFIG_USDT=n 00:06:08.224 +++ CONFIG_URING_ZNS=n 00:06:08.224 +++ CONFIG_FC_PATH= 00:06:08.224 +++ CONFIG_COVERAGE=y 00:06:08.224 +++ CONFIG_CUSTOMOCF=n 00:06:08.224 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:08.224 +++ CONFIG_WERROR=y 00:06:08.224 +++ CONFIG_DEBUG=y 00:06:08.224 +++ CONFIG_RDMA=y 00:06:08.224 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:08.224 +++ CONFIG_FUZZER=n 00:06:08.224 +++ CONFIG_FC=n 00:06:08.224 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:08.224 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:08.224 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:08.224 +++ CONFIG_CROSS_PREFIX= 00:06:08.224 +++ CONFIG_PREFIX=/usr/local 00:06:08.224 +++ CONFIG_HAVE_LIBBSD=n 00:06:08.224 +++ CONFIG_UBSAN=y 00:06:08.224 +++ CONFIG_PGO_CAPTURE=n 00:06:08.224 +++ CONFIG_UBLK=n 00:06:08.224 +++ CONFIG_ISAL_CRYPTO=y 00:06:08.224 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.224 +++ CONFIG_CRYPTO=n 00:06:08.224 +++ CONFIG_RBD=n 00:06:08.224 +++ CONFIG_LIBDIR= 00:06:08.224 +++ CONFIG_IPSEC_MB_DIR= 00:06:08.224 +++ CONFIG_PGO_USE=n 00:06:08.224 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.224 +++ CONFIG_GOLANG=n 00:06:08.224 +++ CONFIG_VHOST=y 00:06:08.224 +++ CONFIG_IDXD=y 00:06:08.224 +++ CONFIG_AVAHI=n 00:06:08.224 +++ CONFIG_URING=n 00:06:08.224 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:08.224 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:08.224 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:08.224 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:08.224 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:08.224 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:08.224 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:08.224 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:08.225 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:08.225 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:08.225 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:08.225 +++ VHOST_APP=("$_app_dir/vhost") 00:06:08.225 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:08.225 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:08.225 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:08.225 +++ [[ #ifndef SPDK_CONFIG_H 00:06:08.225 #define SPDK_CONFIG_H 00:06:08.225 #define SPDK_CONFIG_APPS 1 00:06:08.225 #define SPDK_CONFIG_ARCH native 00:06:08.225 #define SPDK_CONFIG_ASAN 1 00:06:08.225 #undef SPDK_CONFIG_AVAHI 00:06:08.225 #undef SPDK_CONFIG_CET 00:06:08.225 #define SPDK_CONFIG_COVERAGE 1 00:06:08.225 #define SPDK_CONFIG_CROSS_PREFIX 00:06:08.225 #undef SPDK_CONFIG_CRYPTO 00:06:08.225 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:08.225 #undef SPDK_CONFIG_CUSTOMOCF 00:06:08.225 #undef SPDK_CONFIG_DAOS 00:06:08.225 #define SPDK_CONFIG_DAOS_DIR 00:06:08.225 #define SPDK_CONFIG_DEBUG 1 00:06:08.225 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:08.225 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:08.225 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:08.225 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:08.225 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:08.225 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.225 #define SPDK_CONFIG_EXAMPLES 1 00:06:08.225 #undef SPDK_CONFIG_FC 00:06:08.225 #define SPDK_CONFIG_FC_PATH 00:06:08.225 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:08.225 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:08.225 #undef SPDK_CONFIG_FUSE 00:06:08.225 #undef SPDK_CONFIG_FUZZER 00:06:08.225 #define SPDK_CONFIG_FUZZER_LIB 00:06:08.225 #undef SPDK_CONFIG_GOLANG 00:06:08.225 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:08.225 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:08.225 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:08.225 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:08.225 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:08.225 #define SPDK_CONFIG_IDXD 1 00:06:08.225 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:08.225 #undef SPDK_CONFIG_IPSEC_MB 00:06:08.225 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:08.225 #define SPDK_CONFIG_ISAL 1 00:06:08.225 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:08.225 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:08.225 #define SPDK_CONFIG_LIBDIR 00:06:08.225 #undef SPDK_CONFIG_LTO 00:06:08.225 #define SPDK_CONFIG_MAX_LCORES 00:06:08.225 #define SPDK_CONFIG_NVME_CUSE 1 00:06:08.225 #undef SPDK_CONFIG_OCF 00:06:08.225 #define SPDK_CONFIG_OCF_PATH 00:06:08.225 #define SPDK_CONFIG_OPENSSL_PATH 00:06:08.225 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:08.225 #undef SPDK_CONFIG_PGO_USE 00:06:08.225 #define SPDK_CONFIG_PREFIX /usr/local 00:06:08.225 #define SPDK_CONFIG_RAID5F 1 00:06:08.225 #undef SPDK_CONFIG_RBD 00:06:08.225 #define SPDK_CONFIG_RDMA 1 00:06:08.225 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:08.225 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:08.225 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:08.225 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:08.225 #undef SPDK_CONFIG_SHARED 00:06:08.225 #undef SPDK_CONFIG_SMA 00:06:08.225 #define SPDK_CONFIG_TESTS 1 00:06:08.225 #undef SPDK_CONFIG_TSAN 00:06:08.225 #undef SPDK_CONFIG_UBLK 00:06:08.225 #define SPDK_CONFIG_UBSAN 1 00:06:08.225 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:08.225 #undef SPDK_CONFIG_URING 00:06:08.225 #define SPDK_CONFIG_URING_PATH 00:06:08.225 #undef SPDK_CONFIG_URING_ZNS 00:06:08.225 #undef SPDK_CONFIG_USDT 00:06:08.225 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:08.225 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:08.225 #undef SPDK_CONFIG_VFIO_USER 00:06:08.225 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:08.225 #define SPDK_CONFIG_VHOST 1 00:06:08.225 #define SPDK_CONFIG_VIRTIO 1 00:06:08.225 #undef SPDK_CONFIG_VTUNE 00:06:08.225 #define SPDK_CONFIG_VTUNE_DIR 00:06:08.225 #define SPDK_CONFIG_WERROR 1 00:06:08.225 #define SPDK_CONFIG_WPDK_DIR 00:06:08.225 #undef SPDK_CONFIG_XNVME 00:06:08.225 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:08.225 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:08.225 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.225 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:08.225 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.225 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.225 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.225 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.225 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.225 ++++ export PATH 00:06:08.225 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.225 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:08.225 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:08.225 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:08.225 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:08.225 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:08.225 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:08.225 +++ TEST_TAG=N/A 00:06:08.225 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:08.225 ++ : 1 00:06:08.225 ++ export RUN_NIGHTLY 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_RUN_VALGRIND 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:08.225 ++ : 1 00:06:08.225 ++ export 
SPDK_TEST_UNITTEST 00:06:08.225 ++ : 00:06:08.225 ++ export SPDK_TEST_AUTOBUILD 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_RELEASE_BUILD 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ISAL 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ISCSI 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_TEST_NVME 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVME_PMR 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVME_BP 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVME_CLI 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVME_CUSE 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVME_FDP 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVMF 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VFIOUSER 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_FUZZER 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_FUZZER_SHORT 00:06:08.225 ++ : rdma 00:06:08.225 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_RBD 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VHOST 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_TEST_BLOCKDEV 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_IOAT 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_BLOBFS 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VHOST_INIT 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_LVOL 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_RUN_ASAN 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_RUN_UBSAN 00:06:08.225 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:08.225 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_RUN_NON_ROOT 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_CRYPTO 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_FTL 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_OCF 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_VMD 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_OPAL 00:06:08.225 ++ : v22.11.4 00:06:08.225 ++ export SPDK_TEST_NATIVE_DPDK 00:06:08.225 ++ : true 00:06:08.225 ++ export SPDK_AUTOTEST_X 00:06:08.225 ++ : 1 00:06:08.225 ++ export SPDK_TEST_RAID5 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_URING 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_USDT 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_USE_IGB_UIO 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_SCHEDULER 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_SCANBUILD 00:06:08.225 ++ : 00:06:08.225 ++ export SPDK_TEST_NVMF_NICS 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_SMA 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_DAOS 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_XNVME 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ACCEL_DSA 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ACCEL_IAA 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_ACCEL_IOAT 00:06:08.225 ++ : 00:06:08.225 ++ export SPDK_TEST_FUZZER_TARGET 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_TEST_NVMF_MDNS 00:06:08.225 ++ : 0 00:06:08.225 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:08.225 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:08.225 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:08.225 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:08.225 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:08.225 
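
The long run of "++ : 0" / "++ export SPDK_TEST_*" pairs above is bash xtrace of the usual default-then-export idiom in test/common/autotest_common.sh; a minimal sketch, assuming the ":=" parameter expansion that would produce exactly this trace (flag names and values are taken from the log itself):

: "${RUN_NIGHTLY:=0}"             # traces as ': 1' here because the nightly trigger already set it
export RUN_NIGHTLY
: "${SPDK_TEST_UNITTEST:=0}"      # 1 for this job, so the unittest stage below runs
export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT

Unset or empty variables pick up the default, while values injected by the CI job survive untouched, which is why the trace shows a mix of 0s, 1s, and strings.
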
++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:08.225 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:08.225 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:08.226 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:08.226 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.226 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.226 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.226 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.226 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:08.226 ++ PYTHONDONTWRITEBYTECODE=1 00:06:08.226 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.226 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.226 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.226 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.226 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:08.226 ++ rm -rf /var/tmp/asan_suppression_file 00:06:08.226 ++ cat 00:06:08.226 ++ echo leak:libfuse3.so 00:06:08.226 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.226 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.226 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.226 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.226 ++ '[' -z /var/spdk/dependencies ']' 00:06:08.226 ++ export DEPENDENCY_DIR 00:06:08.226 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:08.226 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:08.226 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:08.226 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:08.226 ++ export QEMU_BIN= 00:06:08.226 ++ QEMU_BIN= 00:06:08.226 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:08.226 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:08.226 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:08.226 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:08.226 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.226 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.226 ++ '[' 0 -eq 0 ']' 00:06:08.226 ++ export valgrind= 00:06:08.226 ++ valgrind= 00:06:08.226 +++ uname -s 00:06:08.226 ++ '[' Linux = Linux ']' 00:06:08.226 ++ HUGEMEM=4096 
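
The environment block above arms the sanitizers before any test binary runs; a condensed sketch using the exact option strings from the trace (the suppression-file write is assumed to be a plain redirect, the real script may assemble it differently):

echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file          # known fuse3 leak, not SPDK's
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

With abort_on_error=1, any ASAN or UBSAN report turns into an immediate abort, so a sanitizer hit fails the build instead of scrolling past in the log.
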
00:06:08.226 ++ export CLEAR_HUGE=yes 00:06:08.226 ++ CLEAR_HUGE=yes 00:06:08.226 ++ [[ 0 -eq 1 ]] 00:06:08.226 ++ [[ 0 -eq 1 ]] 00:06:08.226 ++ MAKE=make 00:06:08.226 +++ nproc 00:06:08.226 ++ MAKEFLAGS=-j10 00:06:08.226 ++ export HUGEMEM=4096 00:06:08.226 ++ HUGEMEM=4096 00:06:08.226 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:08.226 ++ NO_HUGE=() 00:06:08.226 ++ TEST_MODE= 00:06:08.226 ++ [[ -z '' ]] 00:06:08.226 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:08.226 ++ exec 00:06:08.226 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:08.226 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:08.226 ++ set_test_storage 2147483648 00:06:08.226 ++ [[ -v testdir ]] 00:06:08.226 ++ local requested_size=2147483648 00:06:08.226 ++ local mount target_dir 00:06:08.226 ++ local -A mounts fss sizes avails uses 00:06:08.226 ++ local source fs size avail mount use 00:06:08.226 ++ local storage_fallback storage_candidates 00:06:08.226 +++ mktemp -udt spdk.XXXXXX 00:06:08.226 ++ storage_fallback=/tmp/spdk.mIy4NV 00:06:08.226 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:08.226 ++ [[ -n '' ]] 00:06:08.226 ++ [[ -n '' ]] 00:06:08.226 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.mIy4NV/tests/unit /tmp/spdk.mIy4NV 00:06:08.226 ++ requested_size=2214592512 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 +++ df -T 00:06:08.226 +++ grep -v Filesystem 00:06:08.226 ++ mounts["$mount"]=udev 00:06:08.226 ++ fss["$mount"]=devtmpfs 00:06:08.226 ++ avails["$mount"]=6224457728 00:06:08.226 ++ sizes["$mount"]=6224457728 00:06:08.226 ++ uses["$mount"]=0 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=tmpfs 00:06:08.226 ++ fss["$mount"]=tmpfs 00:06:08.226 ++ avails["$mount"]=1253408768 00:06:08.226 ++ sizes["$mount"]=1254514688 00:06:08.226 ++ uses["$mount"]=1105920 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=/dev/vda1 00:06:08.226 ++ fss["$mount"]=ext4 00:06:08.226 ++ avails["$mount"]=9269895168 00:06:08.226 ++ sizes["$mount"]=20616794112 00:06:08.226 ++ uses["$mount"]=11330121728 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=tmpfs 00:06:08.226 ++ fss["$mount"]=tmpfs 00:06:08.226 ++ avails["$mount"]=6272557056 00:06:08.226 ++ sizes["$mount"]=6272557056 00:06:08.226 ++ uses["$mount"]=0 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=tmpfs 00:06:08.226 ++ fss["$mount"]=tmpfs 00:06:08.226 ++ avails["$mount"]=5242880 00:06:08.226 ++ sizes["$mount"]=5242880 00:06:08.226 ++ uses["$mount"]=0 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=tmpfs 00:06:08.226 ++ fss["$mount"]=tmpfs 00:06:08.226 ++ avails["$mount"]=6272557056 00:06:08.226 ++ sizes["$mount"]=6272557056 00:06:08.226 ++ uses["$mount"]=0 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=/dev/loop0 00:06:08.226 ++ fss["$mount"]=squashfs 00:06:08.226 ++ avails["$mount"]=0 00:06:08.226 ++ sizes["$mount"]=67108864 00:06:08.226 ++ uses["$mount"]=67108864 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=/dev/loop1 00:06:08.226 ++ fss["$mount"]=squashfs 00:06:08.226 ++ avails["$mount"]=0 00:06:08.226 ++ 
sizes["$mount"]=41025536 00:06:08.226 ++ uses["$mount"]=41025536 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=/dev/vda15 00:06:08.226 ++ fss["$mount"]=vfat 00:06:08.226 ++ avails["$mount"]=103089152 00:06:08.226 ++ sizes["$mount"]=109422592 00:06:08.226 ++ uses["$mount"]=6334464 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=/dev/loop2 00:06:08.226 ++ fss["$mount"]=squashfs 00:06:08.226 ++ avails["$mount"]=0 00:06:08.226 ++ sizes["$mount"]=96337920 00:06:08.226 ++ uses["$mount"]=96337920 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=tmpfs 00:06:08.226 ++ fss["$mount"]=tmpfs 00:06:08.226 ++ avails["$mount"]=1254510592 00:06:08.226 ++ sizes["$mount"]=1254510592 00:06:08.226 ++ uses["$mount"]=0 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:06:08.226 ++ fss["$mount"]=fuse.sshfs 00:06:08.226 ++ avails["$mount"]=96156090368 00:06:08.226 ++ sizes["$mount"]=105088212992 00:06:08.226 ++ uses["$mount"]=3546689536 00:06:08.226 ++ read -r source fs size use avail _ mount 00:06:08.226 ++ printf '* Looking for test storage...\n' 00:06:08.226 * Looking for test storage... 00:06:08.226 ++ local target_space new_size 00:06:08.226 ++ for target_dir in "${storage_candidates[@]}" 00:06:08.226 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:08.226 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:08.226 ++ mount=/ 00:06:08.226 ++ target_space=9269895168 00:06:08.226 ++ (( target_space == 0 || target_space < requested_size )) 00:06:08.226 ++ (( target_space >= requested_size )) 00:06:08.226 ++ [[ ext4 == tmpfs ]] 00:06:08.226 ++ [[ ext4 == ramfs ]] 00:06:08.226 ++ [[ / == / ]] 00:06:08.226 ++ new_size=13544714240 00:06:08.226 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:08.226 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:08.226 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:08.226 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:08.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:08.226 ++ return 0 00:06:08.226 ++ set -o errtrace 00:06:08.226 ++ shopt -s extdebug 00:06:08.226 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:08.226 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:08.226 02:29:33 -- common/autotest_common.sh@1672 -- # true 00:06:08.226 02:29:33 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:08.226 02:29:33 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:08.226 02:29:33 -- common/autotest_common.sh@29 -- # exec 00:06:08.226 02:29:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:08.226 02:29:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:08.226 02:29:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:08.226 02:29:33 -- common/autotest_common.sh@18 -- # set -x 00:06:08.226 02:29:33 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:08.226 02:29:33 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:08.226 02:29:33 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:08.226 02:29:33 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:08.226 02:29:33 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:08.226 02:29:33 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:06:08.226 02:29:33 -- unit/unittest.sh@179 -- # hash lcov 00:06:08.226 02:29:33 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.226 02:29:33 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:08.226 02:29:33 -- unit/unittest.sh@180 -- # cov_avail=yes 00:06:08.226 02:29:33 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:06:08.226 02:29:33 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:08.226 02:29:33 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:08.226 02:29:33 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:08.226 02:29:33 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:06:08.226 --rc lcov_branch_coverage=1 00:06:08.226 --rc lcov_function_coverage=1 00:06:08.226 --rc genhtml_branch_coverage=1 00:06:08.226 --rc genhtml_function_coverage=1 00:06:08.226 --rc genhtml_legend=1 00:06:08.226 --rc geninfo_all_blocks=1 00:06:08.226 ' 00:06:08.226 02:29:33 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:06:08.226 --rc lcov_branch_coverage=1 00:06:08.226 --rc lcov_function_coverage=1 00:06:08.226 --rc genhtml_branch_coverage=1 00:06:08.226 --rc genhtml_function_coverage=1 00:06:08.226 --rc genhtml_legend=1 00:06:08.226 --rc geninfo_all_blocks=1 00:06:08.226 ' 00:06:08.226 02:29:33 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:06:08.226 --rc lcov_branch_coverage=1 00:06:08.226 --rc lcov_function_coverage=1 00:06:08.226 --rc genhtml_branch_coverage=1 00:06:08.226 --rc genhtml_function_coverage=1 00:06:08.226 --rc genhtml_legend=1 00:06:08.227 --rc geninfo_all_blocks=1 00:06:08.227 --no-external' 00:06:08.227 02:29:33 -- unit/unittest.sh@200 -- # LCOV='lcov 00:06:08.227 --rc lcov_branch_coverage=1 00:06:08.227 --rc lcov_function_coverage=1 00:06:08.227 --rc genhtml_branch_coverage=1 00:06:08.227 --rc genhtml_function_coverage=1 00:06:08.227 --rc genhtml_legend=1 00:06:08.227 --rc geninfo_all_blocks=1 00:06:08.227 --no-external' 00:06:08.227 02:29:33 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:10.127 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:10.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:10.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:10.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:10.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:10.387 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:10.387 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:10.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:10.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:10.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:10.388 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:10.388 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:49.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:49.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:49.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:49.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:49.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:49.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:52.423 02:30:16 -- unit/unittest.sh@206 -- # uname -m 00:06:52.423 02:30:16 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:52.423 02:30:16 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:52.423 02:30:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.423 02:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.423 02:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.423 ************************************ 00:06:52.423 START TEST unittest_pci_event 00:06:52.423 ************************************ 00:06:52.423 02:30:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:52.423 00:06:52.423 00:06:52.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.423 http://cunit.sourceforge.net/ 00:06:52.423 00:06:52.423 00:06:52.423 Suite: pci_event 00:06:52.423 Test: test_pci_parse_event ...[2024-07-11 02:30:16.845331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:52.423 [2024-07-11 02:30:16.845926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:52.423 passed 00:06:52.423 00:06:52.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.423 suites 1 1 n/a 0 0 00:06:52.423 tests 1 1 1 0 0 00:06:52.423 asserts 15 15 15 0 n/a 00:06:52.423 00:06:52.423 Elapsed time = 0.001 seconds 00:06:52.423 00:06:52.423 real 0m0.039s 00:06:52.423 user 0m0.018s 00:06:52.423 sys 0m0.018s 00:06:52.423 02:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.423 02:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.423 ************************************ 00:06:52.423 END TEST unittest_pci_event 00:06:52.423 ************************************ 00:06:52.423 02:30:16 -- 
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:52.423 02:30:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.423 02:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.423 02:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.423 ************************************ 00:06:52.423 START TEST unittest_include 00:06:52.423 ************************************ 00:06:52.423 02:30:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:52.423 00:06:52.423 00:06:52.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.423 http://cunit.sourceforge.net/ 00:06:52.423 00:06:52.423 00:06:52.423 Suite: histogram 00:06:52.423 Test: histogram_test ...passed 00:06:52.423 Test: histogram_merge ...passed 00:06:52.423 00:06:52.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.423 suites 1 1 n/a 0 0 00:06:52.423 tests 2 2 2 0 0 00:06:52.423 asserts 50 50 50 0 n/a 00:06:52.423 00:06:52.423 Elapsed time = 0.006 seconds 00:06:52.423 00:06:52.423 real 0m0.038s 00:06:52.423 user 0m0.024s 00:06:52.423 sys 0m0.014s 00:06:52.423 02:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.423 02:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.423 ************************************ 00:06:52.423 END TEST unittest_include 00:06:52.423 ************************************ 00:06:52.423 02:30:16 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:52.423 02:30:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.423 02:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.423 02:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.423 ************************************ 00:06:52.423 START TEST unittest_bdev 00:06:52.423 ************************************ 00:06:52.423 02:30:17 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:52.423 02:30:17 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:52.423 00:06:52.423 00:06:52.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.423 http://cunit.sourceforge.net/ 00:06:52.423 00:06:52.423 00:06:52.423 Suite: bdev 00:06:52.423 Test: bytes_to_blocks_test ...passed 00:06:52.423 Test: num_blocks_test ...passed 00:06:52.423 Test: io_valid_test ...passed 00:06:52.423 Test: open_write_test ...[2024-07-11 02:30:17.101301] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:52.423 [2024-07-11 02:30:17.101784] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:52.423 [2024-07-11 02:30:17.102029] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:52.423 passed 00:06:52.423 Test: claim_test ...passed 00:06:52.423 Test: alias_add_del_test ...[2024-07-11 02:30:17.192717] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:52.423 [2024-07-11 02:30:17.193012] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:52.423 [2024-07-11 02:30:17.193204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:06:52.423 passed 00:06:52.423 Test: get_device_stat_test ...passed 00:06:52.423 Test: bdev_io_types_test ...passed 00:06:52.423 Test: bdev_io_wait_test ...passed 00:06:52.423 Test: bdev_io_spans_split_test ...passed 00:06:52.423 Test: bdev_io_boundary_split_test ...passed 00:06:52.423 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-11 02:30:17.359278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:52.423 passed 00:06:52.423 Test: bdev_io_mix_split_test ...passed 00:06:52.423 Test: bdev_io_split_with_io_wait ...passed 00:06:52.423 Test: bdev_io_write_unit_split_test ...[2024-07-11 02:30:17.469888] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:52.423 [2024-07-11 02:30:17.470121] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:52.423 [2024-07-11 02:30:17.470187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:52.423 [2024-07-11 02:30:17.470368] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:52.423 passed 00:06:52.681 Test: bdev_io_alignment_with_boundary ...passed 00:06:52.681 Test: bdev_io_alignment ...passed 00:06:52.681 Test: bdev_histograms ...passed 00:06:52.681 Test: bdev_write_zeroes ...passed 00:06:52.681 Test: bdev_compare_and_write ...passed 00:06:52.681 Test: bdev_compare ...passed 00:06:52.940 Test: bdev_compare_emulated ...passed 00:06:52.940 Test: bdev_zcopy_write ...passed 00:06:52.940 Test: bdev_zcopy_read ...passed 00:06:52.940 Test: bdev_open_while_hotremove ...passed 00:06:52.940 Test: bdev_close_while_hotremove ...passed 00:06:52.940 Test: bdev_open_ext_test ...[2024-07-11 02:30:17.927164] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:52.940 passed 00:06:52.940 Test: bdev_open_ext_unregister ...[2024-07-11 02:30:17.927774] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:52.940 passed 00:06:52.940 Test: bdev_set_io_timeout ...passed 00:06:52.940 Test: bdev_set_qd_sampling ...passed 00:06:52.940 Test: lba_range_overlap ...passed 00:06:53.198 Test: lock_lba_range_check_ranges ...passed 00:06:53.198 Test: lock_lba_range_with_io_outstanding ...passed 00:06:53.198 Test: lock_lba_range_overlapped ...passed 00:06:53.198 Test: bdev_quiesce ...[2024-07-11 02:30:18.124617] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:53.198 passed 00:06:53.198 Test: bdev_io_abort ...passed 00:06:53.198 Test: bdev_unmap ...passed 00:06:53.198 Test: bdev_write_zeroes_split_test ...passed 00:06:53.198 Test: bdev_set_options_test ...[2024-07-11 02:30:18.247253] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:53.198 passed 00:06:53.198 Test: bdev_get_memory_domains ...passed 00:06:53.198 Test: bdev_io_ext ...passed 00:06:53.456 Test: bdev_io_ext_no_opts ...passed 00:06:53.456 Test: bdev_io_ext_invalid_opts ...passed 00:06:53.456 Test: bdev_io_ext_split ...passed 00:06:53.456 Test: bdev_io_ext_bounce_buffer ...passed 00:06:53.456 Test: bdev_register_uuid_alias ...[2024-07-11 02:30:18.435290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 84403fa5-b6b4-48a9-bc4d-5b4022b418fe already exists 00:06:53.456 [2024-07-11 02:30:18.435526] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:84403fa5-b6b4-48a9-bc4d-5b4022b418fe alias for bdev bdev0 00:06:53.456 passed 00:06:53.456 Test: bdev_unregister_by_name ...[2024-07-11 02:30:18.454797] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:53.456 [2024-07-11 02:30:18.454956] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:53.456 passed 00:06:53.456 Test: for_each_bdev_test ...passed 00:06:53.456 Test: bdev_seek_test ...passed 00:06:53.456 Test: bdev_copy ...passed 00:06:53.716 Test: bdev_copy_split_test ...passed 00:06:53.716 Test: examine_locks ...passed 00:06:53.716 Test: claim_v2_rwo ...[2024-07-11 02:30:18.553050] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.553246] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.553392] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.553537] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.553656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.716 passed[2024-07-11 02:30:18.553784] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:53.716 00:06:53.716 Test: claim_v2_rom ...[2024-07-11 02:30:18.554155] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.554299] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.554418] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:53.716 [2024-07-11 02:30:18.554560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.554731] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:53.716 [2024-07-11 02:30:18.554884] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.716 passed 00:06:53.716 Test: claim_v2_rwm ...[2024-07-11 02:30:18.555263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:53.716 [2024-07-11 02:30:18.555421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.555563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.555715] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.555777] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.555906] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:53.716 [2024-07-11 02:30:18.556074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:53.716 passed 00:06:53.716 Test: claim_v2_existing_writer ...[2024-07-11 02:30:18.556637] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.716 [2024-07-11 02:30:18.556800] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.716 passed 00:06:53.716 Test: claim_v2_existing_v1 ...[2024-07-11 02:30:18.557216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.717 [2024-07-11 02:30:18.557350] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.717 [2024-07-11 02:30:18.557484] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.717 passed 00:06:53.717 Test: claim_v1_existing_v2 ...[2024-07-11 02:30:18.557760] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.717 [2024-07-11 02:30:18.557958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.717 [2024-07-11 
02:30:18.558144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.717 passed 00:06:53.717 Test: examine_claimed ...[2024-07-11 02:30:18.558795] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:53.717 passed 00:06:53.717 00:06:53.717 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.717 suites 1 1 n/a 0 0 00:06:53.717 tests 59 59 59 0 0 00:06:53.717 asserts 4599 4599 4599 0 n/a 00:06:53.717 00:06:53.717 Elapsed time = 1.514 seconds 00:06:53.717 02:30:18 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:53.717 00:06:53.717 00:06:53.717 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.717 http://cunit.sourceforge.net/ 00:06:53.717 00:06:53.717 00:06:53.717 Suite: nvme 00:06:53.717 Test: test_create_ctrlr ...passed 00:06:53.717 Test: test_reset_ctrlr ...[2024-07-11 02:30:18.611903] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:53.717 Test: test_failover_ctrlr ...passed 00:06:53.717 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-11 02:30:18.615321] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.615694] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.616044] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_pending_reset ...[2024-07-11 02:30:18.617884] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.618300] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_attach_ctrlr ...[2024-07-11 02:30:18.619752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:53.717 passed 00:06:53.717 Test: test_aer_cb ...passed 00:06:53.717 Test: test_submit_nvme_cmd ...passed 00:06:53.717 Test: test_add_remove_trid ...passed 00:06:53.717 Test: test_abort ...[2024-07-11 02:30:18.624100] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:53.717 passed 00:06:53.717 Test: test_get_io_qpair ...passed 00:06:53.717 Test: test_bdev_unregister ...passed 00:06:53.717 Test: test_compare_ns ...passed 00:06:53.717 Test: test_init_ana_log_page ...passed 00:06:53.717 Test: test_get_memory_domains ...passed 00:06:53.717 Test: test_reconnect_qpair ...[2024-07-11 02:30:18.627988] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:53.717 passed 00:06:53.717 Test: test_create_bdev_ctrlr ...[2024-07-11 02:30:18.628837] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:53.717 passed 00:06:53.717 Test: test_add_multi_ns_to_bdev ...[2024-07-11 02:30:18.630504] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:53.717 passed 00:06:53.717 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:53.717 Test: test_admin_path ...passed 00:06:53.717 Test: test_reset_bdev_ctrlr ...passed 00:06:53.717 Test: test_find_io_path ...passed 00:06:53.717 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:53.717 Test: test_retry_io_for_io_path_error ...passed 00:06:53.717 Test: test_retry_io_count ...passed 00:06:53.717 Test: test_concurrent_read_ana_log_page ...passed 00:06:53.717 Test: test_retry_io_for_ana_error ...passed 00:06:53.717 Test: test_check_io_error_resiliency_params ...[2024-07-11 02:30:18.639771] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:53.717 [2024-07-11 02:30:18.639939] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:53.717 [2024-07-11 02:30:18.640071] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:53.717 [2024-07-11 02:30:18.640198] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:53.717 [2024-07-11 02:30:18.640342] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:53.717 [2024-07-11 02:30:18.640495] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:53.717 [2024-07-11 02:30:18.640618] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:53.717 [2024-07-11 02:30:18.640795] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:53.717 [2024-07-11 02:30:18.640974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:53.717 passed 00:06:53.717 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:53.717 Test: test_reconnect_ctrlr ...passed 00:06:53.717 Test: test_retry_failover_ctrlr ...passed 00:06:53.717 Test: test_fail_path ...[2024-07-11 02:30:18.644188] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:53.717 [2024-07-11 02:30:18.644344] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.644605] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.644742] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.644872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.645204] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.645756] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.645907] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.646002] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.646114] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_nvme_ns_cmp ...passed 00:06:53.717 Test: test_ana_transition ...passed 00:06:53.717 Test: test_set_preferred_path ...[2024-07-11 02:30:18.646220] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_find_next_io_path ...passed 00:06:53.717 Test: test_find_io_path_min_qd ...passed 00:06:53.717 Test: test_disable_auto_failback ...[2024-07-11 02:30:18.648431] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 passed 00:06:53.717 Test: test_set_multipath_policy ...passed 00:06:53.717 Test: test_uuid_generation ...passed 00:06:53.717 Test: test_retry_io_to_same_path ...passed 00:06:53.717 Test: test_race_between_reset_and_disconnected ...passed 00:06:53.717 Test: test_ctrlr_op_rpc ...passed 00:06:53.717 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:53.717 Test: test_disable_enable_ctrlr ...[2024-07-11 02:30:18.653568] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.717 [2024-07-11 02:30:18.653887] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:53.717 passed 00:06:53.717 Test: test_delete_ctrlr_done ...passed 00:06:53.717 Test: test_ns_remove_during_reset ...passed 00:06:53.717 00:06:53.718 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.718 suites 1 1 n/a 0 0 00:06:53.718 tests 48 48 48 0 0 00:06:53.718 asserts 3553 3553 3553 0 n/a 00:06:53.718 00:06:53.718 Elapsed time = 0.033 seconds 00:06:53.718 02:30:18 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:53.718 Test Options 00:06:53.718 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:53.718 00:06:53.718 00:06:53.718 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.718 http://cunit.sourceforge.net/ 00:06:53.718 00:06:53.718 00:06:53.718 Suite: raid 00:06:53.718 Test: test_create_raid ...passed 00:06:53.718 Test: test_create_raid_superblock ...passed 00:06:53.718 Test: test_delete_raid ...passed 00:06:53.718 Test: test_create_raid_invalid_args ...[2024-07-11 02:30:18.694492] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:53.718 [2024-07-11 02:30:18.694990] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:53.718 [2024-07-11 02:30:18.695515] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:53.718 [2024-07-11 02:30:18.695911] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:53.718 [2024-07-11 02:30:18.696902] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:53.718 passed 00:06:53.718 Test: test_delete_raid_invalid_args ...passed 00:06:53.718 Test: test_io_channel ...passed 00:06:53.718 Test: test_reset_io ...passed 00:06:53.718 Test: test_write_io ...passed 00:06:53.718 Test: test_read_io ...passed 00:06:54.652 Test: test_unmap_io ...passed 00:06:54.652 Test: test_io_failure ...[2024-07-11 02:30:19.520553] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:54.652 passed 00:06:54.652 Test: test_multi_raid_no_io ...passed 00:06:54.652 Test: test_multi_raid_with_io ...passed 00:06:54.652 Test: test_io_type_supported ...passed 00:06:54.652 Test: test_raid_json_dump_info ...passed 00:06:54.652 Test: test_context_size ...passed 00:06:54.652 Test: test_raid_level_conversions ...passed 00:06:54.652 Test: test_raid_process ...passed 00:06:54.652 Test: test_raid_io_split ...passed 00:06:54.652 00:06:54.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.652 suites 1 1 n/a 0 0 00:06:54.652 tests 19 19 19 0 0 00:06:54.652 asserts 177879 177879 177879 0 n/a 00:06:54.652 00:06:54.652 Elapsed time = 0.837 seconds 00:06:54.652 02:30:19 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:54.652 00:06:54.652 00:06:54.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.652 http://cunit.sourceforge.net/ 00:06:54.652 00:06:54.652 00:06:54.652 Suite: raid_sb 00:06:54.652 Test: test_raid_bdev_write_superblock ...passed 00:06:54.652 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:54.652 Test: 
test_raid_bdev_parse_superblock ...[2024-07-11 02:30:19.574246] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:54.652 passed 00:06:54.652 00:06:54.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.652 suites 1 1 n/a 0 0 00:06:54.652 tests 3 3 3 0 0 00:06:54.652 asserts 32 32 32 0 n/a 00:06:54.652 00:06:54.652 Elapsed time = 0.001 seconds 00:06:54.652 02:30:19 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:54.652 00:06:54.652 00:06:54.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.652 http://cunit.sourceforge.net/ 00:06:54.652 00:06:54.652 00:06:54.653 Suite: concat 00:06:54.653 Test: test_concat_start ...passed 00:06:54.653 Test: test_concat_rw ...passed 00:06:54.653 Test: test_concat_null_payload ...passed 00:06:54.653 00:06:54.653 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.653 suites 1 1 n/a 0 0 00:06:54.653 tests 3 3 3 0 0 00:06:54.653 asserts 8097 8097 8097 0 n/a 00:06:54.653 00:06:54.653 Elapsed time = 0.007 seconds 00:06:54.653 02:30:19 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:54.653 00:06:54.653 00:06:54.653 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.653 http://cunit.sourceforge.net/ 00:06:54.653 00:06:54.653 00:06:54.653 Suite: raid1 00:06:54.653 Test: test_raid1_start ...passed 00:06:54.653 Test: test_raid1_read_balancing ...passed 00:06:54.653 00:06:54.653 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.653 suites 1 1 n/a 0 0 00:06:54.653 tests 2 2 2 0 0 00:06:54.653 asserts 2856 2856 2856 0 n/a 00:06:54.653 00:06:54.653 Elapsed time = 0.004 seconds 00:06:54.653 02:30:19 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:54.653 00:06:54.653 00:06:54.653 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.653 http://cunit.sourceforge.net/ 00:06:54.653 00:06:54.653 00:06:54.653 Suite: zone 00:06:54.653 Test: test_zone_get_operation ...passed 00:06:54.653 Test: test_bdev_zone_get_info ...passed 00:06:54.653 Test: test_bdev_zone_management ...passed 00:06:54.653 Test: test_bdev_zone_append ...passed 00:06:54.653 Test: test_bdev_zone_append_with_md ...passed 00:06:54.653 Test: test_bdev_zone_appendv ...passed 00:06:54.653 Test: test_bdev_zone_appendv_with_md ...passed 00:06:54.653 Test: test_bdev_io_get_append_location ...passed 00:06:54.653 00:06:54.653 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.653 suites 1 1 n/a 0 0 00:06:54.653 tests 8 8 8 0 0 00:06:54.653 asserts 94 94 94 0 n/a 00:06:54.653 00:06:54.653 Elapsed time = 0.001 seconds 00:06:54.653 02:30:19 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:54.653 00:06:54.653 00:06:54.653 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.653 http://cunit.sourceforge.net/ 00:06:54.653 00:06:54.653 00:06:54.653 Suite: gpt_parse 00:06:54.653 Test: test_parse_mbr_and_primary ...[2024-07-11 02:30:19.712427] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.653 [2024-07-11 02:30:19.712811] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.653 [2024-07-11 02:30:19.712986] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:54.653 [2024-07-11 02:30:19.713212] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:54.653 [2024-07-11 02:30:19.713370] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:54.653 [2024-07-11 02:30:19.713550] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:54.653 passed 00:06:54.653 Test: test_parse_secondary ...[2024-07-11 02:30:19.714616] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:54.653 [2024-07-11 02:30:19.714772] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:54.653 [2024-07-11 02:30:19.714909] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:54.653 [2024-07-11 02:30:19.715039] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:54.653 passed 00:06:54.653 Test: test_check_mbr ...[2024-07-11 02:30:19.716084] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.653 [2024-07-11 02:30:19.716238] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.653 passed 00:06:54.653 Test: test_read_header ...[2024-07-11 02:30:19.716571] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:54.653 [2024-07-11 02:30:19.716781] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:54.653 [2024-07-11 02:30:19.716977] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:54.653 [2024-07-11 02:30:19.717146] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:54.653 [2024-07-11 02:30:19.717280] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:54.653 [2024-07-11 02:30:19.717413] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:54.653 passed 00:06:54.653 Test: test_read_partitions ...[2024-07-11 02:30:19.717759] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:54.653 [2024-07-11 02:30:19.717925] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:54.653 [2024-07-11 02:30:19.718060] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:54.653 [2024-07-11 02:30:19.718214] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:54.653 [2024-07-11 02:30:19.718733] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:54.653 passed 00:06:54.653 00:06:54.653 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.653 suites 1 1 n/a 0 0 00:06:54.653 tests 5 5 5 0 0 00:06:54.653 asserts 33 33 33 0 n/a 00:06:54.653 00:06:54.653 Elapsed time = 0.005 seconds 00:06:54.653 02:30:19 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:54.911 00:06:54.911 00:06:54.911 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.911 http://cunit.sourceforge.net/ 00:06:54.911 00:06:54.911 00:06:54.911 Suite: bdev_part 00:06:54.911 Test: part_test ...[2024-07-11 02:30:19.751033] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:54.911 passed 00:06:54.911 Test: part_free_test ...passed 00:06:54.911 Test: part_get_io_channel_test ...passed 00:06:54.911 Test: part_construct_ext ...passed 00:06:54.911 00:06:54.911 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.911 suites 1 1 n/a 0 0 00:06:54.911 tests 4 4 4 0 0 00:06:54.911 asserts 48 48 48 0 n/a 00:06:54.911 00:06:54.911 Elapsed time = 0.048 seconds 00:06:54.911 02:30:19 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:54.911 00:06:54.911 00:06:54.911 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.911 http://cunit.sourceforge.net/ 00:06:54.911 00:06:54.911 00:06:54.911 Suite: scsi_nvme_suite 00:06:54.911 Test: scsi_nvme_translate_test ...passed 00:06:54.911 00:06:54.911 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.911 suites 1 1 n/a 0 0 00:06:54.911 tests 1 1 1 0 0 00:06:54.911 asserts 104 104 104 0 n/a 00:06:54.911 00:06:54.911 Elapsed time = 0.000 seconds 00:06:54.911 02:30:19 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:54.911 00:06:54.911 00:06:54.911 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.911 http://cunit.sourceforge.net/ 00:06:54.911 00:06:54.911 00:06:54.911 Suite: lvol 00:06:54.911 Test: ut_lvs_init ...[2024-07-11 02:30:19.872882] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:54.911 [2024-07-11 02:30:19.873291] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:54.911 passed 00:06:54.911 Test: ut_lvol_init ...passed 00:06:54.911 Test: ut_lvol_snapshot ...passed 00:06:54.911 Test: ut_lvol_clone ...passed 00:06:54.911 Test: ut_lvs_destroy ...passed 00:06:54.911 Test: ut_lvs_unload ...passed 00:06:54.911 Test: ut_lvol_resize ...[2024-07-11 02:30:19.874769] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:54.911 passed 00:06:54.911 Test: ut_lvol_set_read_only ...passed 00:06:54.911 Test: ut_lvol_hotremove ...passed 00:06:54.911 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:54.911 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:54.911 Test: ut_lvol_read_write ...passed 00:06:54.911 Test: ut_vbdev_lvol_submit_request ...passed 00:06:54.911 Test: ut_lvol_examine_config ...passed 00:06:54.911 Test: ut_lvol_examine_disk ...[2024-07-11 02:30:19.875408] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:54.911 passed 00:06:54.911 Test: ut_lvol_rename ...[2024-07-11 02:30:19.876338] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:54.911 [2024-07-11 02:30:19.876443] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:54.911 passed 00:06:54.911 Test: ut_bdev_finish ...passed 00:06:54.911 Test: ut_lvs_rename ...passed 00:06:54.911 Test: ut_lvol_seek ...passed 00:06:54.911 Test: ut_esnap_dev_create ...[2024-07-11 02:30:19.877019] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:54.911 [2024-07-11 02:30:19.877088] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:54.911 [2024-07-11 02:30:19.877126] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:54.911 [2024-07-11 02:30:19.877173] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:54.911 passed 00:06:54.911 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-11 02:30:19.877292] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:54.911 [2024-07-11 02:30:19.877324] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:54.911 passed 00:06:54.911 00:06:54.911 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.911 suites 1 1 n/a 0 0 00:06:54.911 tests 21 21 21 0 0 00:06:54.911 asserts 712 712 712 0 n/a 00:06:54.911 00:06:54.911 Elapsed time = 0.005 seconds 00:06:54.911 02:30:19 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:54.911 00:06:54.911 00:06:54.911 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.911 http://cunit.sourceforge.net/ 00:06:54.911 00:06:54.911 00:06:54.911 Suite: zone_block 00:06:54.911 Test: test_zone_block_create ...passed 00:06:54.911 Test: test_zone_block_create_invalid ...[2024-07-11 02:30:19.930788] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:54.911 [2024-07-11 02:30:19.931099] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-11 02:30:19.931269] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:54.911 [2024-07-11 02:30:19.931333] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-11 02:30:19.931485] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:54.911 [2024-07-11 02:30:19.931521] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-11 02:30:19.931599] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:54.911 [2024-07-11 02:30:19.931648] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:54.911 Test: test_get_zone_info ...[2024-07-11 02:30:19.932189] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.911 [2024-07-11 02:30:19.932266] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.932377] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_supported_io_types ...passed 00:06:54.912 Test: test_reset_zone ...[2024-07-11 02:30:19.933157] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.933222] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_open_zone ...[2024-07-11 02:30:19.933685] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.934419] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.934490] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_zone_write ...[2024-07-11 02:30:19.934904] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:54.912 [2024-07-11 02:30:19.934960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.935028] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:54.912 [2024-07-11 02:30:19.935073] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.940223] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:54.912 [2024-07-11 02:30:19.940275] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:54.912 [2024-07-11 02:30:19.940351] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:54.912 [2024-07-11 02:30:19.940374] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.945769] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:54.912 [2024-07-11 02:30:19.945835] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_zone_read ...[2024-07-11 02:30:19.946261] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:54.912 [2024-07-11 02:30:19.946310] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.946390] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:54.912 [2024-07-11 02:30:19.946423] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.946897] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:54.912 [2024-07-11 02:30:19.946938] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_close_zone ...[2024-07-11 02:30:19.947275] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.947376] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.947612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.947684] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 Test: test_finish_zone ...[2024-07-11 02:30:19.948345] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.948398] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:54.912 passed 00:06:54.912 Test: test_append_zone ...[2024-07-11 02:30:19.948791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:54.912 [2024-07-11 02:30:19.948840] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.948898] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:54.912 [2024-07-11 02:30:19.948933] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 [2024-07-11 02:30:19.960259] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:54.912 [2024-07-11 02:30:19.960317] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.912 passed 00:06:54.912 00:06:54.912 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.912 suites 1 1 n/a 0 0 00:06:54.912 tests 11 11 11 0 0 00:06:54.912 asserts 3437 3437 3437 0 n/a 00:06:54.912 00:06:54.912 Elapsed time = 0.031 seconds 00:06:55.169 02:30:20 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:55.169 00:06:55.169 00:06:55.169 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.169 http://cunit.sourceforge.net/ 00:06:55.169 00:06:55.169 00:06:55.169 Suite: bdev 00:06:55.169 Test: basic ...[2024-07-11 02:30:20.054337] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55f79c88e401): Operation not permitted (rc=-1) 00:06:55.169 [2024-07-11 02:30:20.054638] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55f79c88e3c0): Operation not permitted (rc=-1) 00:06:55.169 [2024-07-11 02:30:20.054684] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55f79c88e401): Operation not permitted (rc=-1) 00:06:55.169 passed 00:06:55.169 Test: unregister_and_close ...passed 00:06:55.169 Test: unregister_and_close_different_threads ...passed 00:06:55.169 Test: basic_qos ...passed 00:06:55.169 Test: put_channel_during_reset ...passed 00:06:55.427 Test: aborted_reset ...passed 00:06:55.427 Test: aborted_reset_no_outstanding_io ...passed 00:06:55.427 Test: io_during_reset ...passed 00:06:55.427 Test: reset_completions ...passed 00:06:55.427 Test: io_during_qos_queue ...passed 00:06:55.427 Test: io_during_qos_reset ...passed 00:06:55.427 Test: enomem ...passed 00:06:55.427 Test: enomem_multi_bdev ...passed 00:06:55.685 Test: enomem_multi_bdev_unregister ...passed 00:06:55.685 Test: enomem_multi_io_target ...passed 00:06:55.685 Test: qos_dynamic_enable ...passed 00:06:55.685 Test: bdev_histograms_mt ...passed 00:06:55.685 Test: bdev_set_io_timeout_mt ...[2024-07-11 02:30:20.684099] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:55.685 passed 00:06:55.685 Test: lock_lba_range_then_submit_io ...[2024-07-11 02:30:20.699468] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55f79c88e380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:55.685 
passed 00:06:55.685 Test: unregister_during_reset ...passed 00:06:55.685 Test: event_notify_and_close ...passed 00:06:55.944 Test: unregister_and_qos_poller ...passed 00:06:55.944 Suite: bdev_wrong_thread 00:06:55.944 Test: spdk_bdev_register_wt ...[2024-07-11 02:30:20.808119] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:55.944 passed 00:06:55.944 Test: spdk_bdev_examine_wt ...[2024-07-11 02:30:20.808461] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:55.944 passed 00:06:55.944 00:06:55.944 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.944 suites 2 2 n/a 0 0 00:06:55.944 tests 24 24 24 0 0 00:06:55.944 asserts 621 621 621 0 n/a 00:06:55.944 00:06:55.944 Elapsed time = 0.781 seconds 00:06:55.944 00:06:55.944 real 0m3.829s 00:06:55.944 user 0m1.734s 00:06:55.944 sys 0m2.057s 00:06:55.944 02:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.944 ************************************ 00:06:55.944 END TEST unittest_bdev 00:06:55.944 ************************************ 00:06:55.944 02:30:20 -- common/autotest_common.sh@10 -- # set +x 00:06:55.944 02:30:20 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.944 02:30:20 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.944 02:30:20 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.944 02:30:20 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.944 02:30:20 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:55.944 02:30:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.944 02:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.944 02:30:20 -- common/autotest_common.sh@10 -- # set +x 00:06:55.944 ************************************ 00:06:55.944 START TEST unittest_bdev_raid5f 00:06:55.944 ************************************ 00:06:55.944 02:30:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:55.944 00:06:55.944 00:06:55.944 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.944 http://cunit.sourceforge.net/ 00:06:55.944 00:06:55.944 00:06:55.944 Suite: raid5f 00:06:55.944 Test: test_raid5f_start ...passed 00:06:56.510 Test: test_raid5f_submit_read_request ...passed 00:06:56.510 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:59.793 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:14.662 Test: test_raid5f_chunk_write_error ...passed 00:07:22.784 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:24.684 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:51.235 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:51.235 00:07:51.235 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.235 suites 1 1 n/a 0 0 00:07:51.235 tests 8 8 8 0 0 00:07:51.235 asserts 351864 351864 351864 0 n/a 00:07:51.235 00:07:51.235 Elapsed time = 52.857 seconds 00:07:51.235 00:07:51.235 real 0m52.946s 00:07:51.235 user 
0m50.434s 00:07:51.235 sys 0m2.493s 00:07:51.235 02:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.235 02:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.235 ************************************ 00:07:51.235 END TEST unittest_bdev_raid5f 00:07:51.235 ************************************ 00:07:51.235 02:31:13 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:51.235 02:31:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:51.235 02:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.235 02:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.235 ************************************ 00:07:51.235 START TEST unittest_blob_blobfs 00:07:51.235 ************************************ 00:07:51.235 02:31:13 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:51.235 02:31:13 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:51.235 02:31:13 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:51.235 00:07:51.235 00:07:51.235 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.235 http://cunit.sourceforge.net/ 00:07:51.235 00:07:51.235 00:07:51.235 Suite: blob_nocopy_noextent 00:07:51.235 Test: blob_init ...[2024-07-11 02:31:13.926847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:51.235 passed 00:07:51.235 Test: blob_thin_provision ...passed 00:07:51.235 Test: blob_read_only ...passed 00:07:51.235 Test: bs_load ...[2024-07-11 02:31:14.026764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:51.235 passed 00:07:51.235 Test: bs_load_custom_cluster_size ...passed 00:07:51.235 Test: bs_load_after_failed_grow ...passed 00:07:51.235 Test: bs_cluster_sz ...[2024-07-11 02:31:14.060822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:51.235 [2024-07-11 02:31:14.061242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:51.235 [2024-07-11 02:31:14.061414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:51.235 passed 00:07:51.235 Test: bs_resize_md ...passed 00:07:51.235 Test: bs_destroy ...passed 00:07:51.235 Test: bs_type ...passed 00:07:51.235 Test: bs_super_block ...passed 00:07:51.235 Test: bs_test_recover_cluster_count ...passed 00:07:51.236 Test: bs_grow_live ...passed 00:07:51.236 Test: bs_grow_live_no_space ...passed 00:07:51.236 Test: bs_test_grow ...passed 00:07:51.236 Test: blob_serialize_test ...passed 00:07:51.236 Test: super_block_crc ...passed 00:07:51.236 Test: blob_thin_prov_write_count_io ...passed 00:07:51.236 Test: bs_load_iter_test ...passed 00:07:51.236 Test: blob_relations ...[2024-07-11 02:31:14.225735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.225856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.226919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.227057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 passed 00:07:51.236 Test: blob_relations2 ...[2024-07-11 02:31:14.242323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.242431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.242492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.242514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.244002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.244076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.244554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.236 [2024-07-11 02:31:14.244614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 passed 00:07:51.236 Test: blob_relations3 ...passed 00:07:51.236 Test: blobstore_clean_power_failure ...passed 00:07:51.236 Test: blob_delete_snapshot_power_failure ...[2024-07-11 02:31:14.393744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:51.236 [2024-07-11 02:31:14.406065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:51.236 [2024-07-11 02:31:14.406200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:51.236 [2024-07-11 02:31:14.406269] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.418261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:51.236 [2024-07-11 02:31:14.418354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:51.236 [2024-07-11 02:31:14.418432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:51.236 [2024-07-11 02:31:14.418466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.431337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:51.236 [2024-07-11 02:31:14.431564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.445241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:51.236 [2024-07-11 02:31:14.445387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 [2024-07-11 02:31:14.459540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:51.236 [2024-07-11 02:31:14.459653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.236 passed 00:07:51.236 Test: blob_create_snapshot_power_failure ...[2024-07-11 02:31:14.501249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:51.236 [2024-07-11 02:31:14.528219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:51.236 [2024-07-11 02:31:14.542086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:51.236 passed 00:07:51.236 Test: blob_io_unit ...passed 00:07:51.236 Test: blob_io_unit_compatibility ...passed 00:07:51.236 Test: blob_ext_md_pages ...passed 00:07:51.236 Test: blob_esnap_io_4096_4096 ...passed 00:07:51.236 Test: blob_esnap_io_512_512 ...passed 00:07:51.236 Test: blob_esnap_io_4096_512 ...passed 00:07:51.236 Test: blob_esnap_io_512_4096 ...passed 00:07:51.236 Suite: blob_bs_nocopy_noextent 00:07:51.236 Test: blob_open ...passed 00:07:51.236 Test: blob_create ...[2024-07-11 02:31:14.813820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:51.236 passed 00:07:51.236 Test: blob_create_loop ...passed 00:07:51.236 Test: blob_create_fail ...[2024-07-11 02:31:14.923037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.236 passed 00:07:51.236 Test: blob_create_internal ...passed 00:07:51.236 Test: blob_create_zero_extent ...passed 00:07:51.236 Test: blob_snapshot ...passed 00:07:51.236 Test: blob_clone ...passed 00:07:51.236 Test: blob_inflate ...[2024-07-11 02:31:15.133822] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:51.236 passed 00:07:51.236 Test: blob_delete ...passed 00:07:51.236 Test: blob_resize_test ...[2024-07-11 02:31:15.208497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:51.236 passed 00:07:51.236 Test: channel_ops ...passed 00:07:51.236 Test: blob_super ...passed 00:07:51.236 Test: blob_rw_verify_iov ...passed 00:07:51.236 Test: blob_unmap ...passed 00:07:51.236 Test: blob_iter ...passed 00:07:51.236 Test: blob_parse_md ...passed 00:07:51.236 Test: bs_load_pending_removal ...passed 00:07:51.236 Test: bs_unload ...[2024-07-11 02:31:15.511227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:51.236 passed 00:07:51.236 Test: bs_usable_clusters ...passed 00:07:51.236 Test: blob_crc ...[2024-07-11 02:31:15.586232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.236 [2024-07-11 02:31:15.586400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.236 passed 00:07:51.236 Test: blob_flags ...passed 00:07:51.236 Test: bs_version ...passed 00:07:51.236 Test: blob_set_xattrs_test ...[2024-07-11 02:31:15.701304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.236 [2024-07-11 02:31:15.701421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.236 passed 00:07:51.236 Test: blob_thin_prov_alloc ...passed 00:07:51.236 Test: blob_insert_cluster_msg_test ...passed 00:07:51.236 Test: blob_thin_prov_rw ...passed 00:07:51.236 Test: blob_thin_prov_rle ...passed 00:07:51.236 Test: blob_thin_prov_rw_iov ...passed 00:07:51.236 Test: blob_snapshot_rw ...passed 00:07:51.236 Test: blob_snapshot_rw_iov ...passed 00:07:51.494 Test: blob_inflate_rw ...passed 00:07:51.494 Test: blob_snapshot_freeze_io ...passed 00:07:51.494 Test: blob_operation_split_rw ...passed 00:07:51.751 Test: blob_operation_split_rw_iov ...passed 00:07:51.751 Test: blob_simultaneous_operations ...[2024-07-11 02:31:16.774517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.751 [2024-07-11 02:31:16.775086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.751 [2024-07-11 02:31:16.776817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.751 [2024-07-11 02:31:16.777124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.751 [2024-07-11 02:31:16.792376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.751 [2024-07-11 02:31:16.792634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.751 [2024-07-11 02:31:16.792931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:51.751 [2024-07-11 02:31:16.793073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.751 passed 00:07:52.009 Test: blob_persist_test ...passed 00:07:52.009 Test: blob_decouple_snapshot ...passed 00:07:52.009 Test: blob_seek_io_unit ...passed 00:07:52.009 Test: blob_nested_freezes ...passed 00:07:52.009 Suite: blob_blob_nocopy_noextent 00:07:52.009 Test: blob_write ...passed 00:07:52.009 Test: blob_read ...passed 00:07:52.268 Test: blob_rw_verify ...passed 00:07:52.268 Test: blob_rw_verify_iov_nomem ...passed 00:07:52.268 Test: blob_rw_iov_read_only ...passed 00:07:52.268 Test: blob_xattr ...passed 00:07:52.268 Test: blob_dirty_shutdown ...passed 00:07:52.268 Test: blob_is_degraded ...passed 00:07:52.268 Suite: blob_esnap_bs_nocopy_noextent 00:07:52.268 Test: blob_esnap_create ...passed 00:07:52.527 Test: blob_esnap_thread_add_remove ...passed 00:07:52.527 Test: blob_esnap_clone_snapshot ...passed 00:07:52.527 Test: blob_esnap_clone_inflate ...passed 00:07:52.527 Test: blob_esnap_clone_decouple ...passed 00:07:52.527 Test: blob_esnap_clone_reload ...passed 00:07:52.527 Test: blob_esnap_hotplug ...passed 00:07:52.527 Suite: blob_nocopy_extent 00:07:52.527 Test: blob_init ...[2024-07-11 02:31:17.573393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:52.527 passed 00:07:52.527 Test: blob_thin_provision ...passed 00:07:52.527 Test: blob_read_only ...passed 00:07:52.786 Test: bs_load ...[2024-07-11 02:31:17.625997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:52.786 passed 00:07:52.786 Test: bs_load_custom_cluster_size ...passed 00:07:52.786 Test: bs_load_after_failed_grow ...passed 00:07:52.786 Test: bs_cluster_sz ...[2024-07-11 02:31:17.654982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:52.786 [2024-07-11 02:31:17.655321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:52.786 [2024-07-11 02:31:17.655479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:52.786 passed 00:07:52.786 Test: bs_resize_md ...passed 00:07:52.786 Test: bs_destroy ...passed 00:07:52.786 Test: bs_type ...passed 00:07:52.786 Test: bs_super_block ...passed 00:07:52.786 Test: bs_test_recover_cluster_count ...passed 00:07:52.786 Test: bs_grow_live ...passed 00:07:52.786 Test: bs_grow_live_no_space ...passed 00:07:52.786 Test: bs_test_grow ...passed 00:07:52.786 Test: blob_serialize_test ...passed 00:07:52.786 Test: super_block_crc ...passed 00:07:52.786 Test: blob_thin_prov_write_count_io ...passed 00:07:52.786 Test: bs_load_iter_test ...passed 00:07:52.786 Test: blob_relations ...[2024-07-11 02:31:17.823680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.823986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 [2024-07-11 02:31:17.824995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.825168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 passed 00:07:52.786 Test: blob_relations2 ...[2024-07-11 02:31:17.840672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.840929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 [2024-07-11 02:31:17.840998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.841303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 [2024-07-11 02:31:17.842792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.842955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 [2024-07-11 02:31:17.843415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.786 [2024-07-11 02:31:17.843569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.786 passed 00:07:52.786 Test: blob_relations3 ...passed 00:07:53.046 Test: blobstore_clean_power_failure ...passed 00:07:53.046 Test: blob_delete_snapshot_power_failure ...[2024-07-11 02:31:18.018365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.046 [2024-07-11 02:31:18.033416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.046 [2024-07-11 02:31:18.047409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.046 [2024-07-11 02:31:18.047705] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.046 [2024-07-11 02:31:18.047776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.046 [2024-07-11 02:31:18.061847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.046 [2024-07-11 02:31:18.062119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.046 [2024-07-11 02:31:18.062191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.046 [2024-07-11 02:31:18.062337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.046 [2024-07-11 02:31:18.076349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.046 [2024-07-11 02:31:18.076639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.046 [2024-07-11 02:31:18.076703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.046 [2024-07-11 02:31:18.076848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.046 [2024-07-11 02:31:18.090840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:53.046 [2024-07-11 02:31:18.091149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.046 [2024-07-11 02:31:18.105249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:53.046 [2024-07-11 02:31:18.105571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.046 [2024-07-11 02:31:18.119468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:53.046 [2024-07-11 02:31:18.119758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.305 passed 00:07:53.305 Test: blob_create_snapshot_power_failure ...[2024-07-11 02:31:18.161656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.305 [2024-07-11 02:31:18.175365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.305 [2024-07-11 02:31:18.202717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.305 [2024-07-11 02:31:18.217217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:53.305 passed 00:07:53.305 Test: blob_io_unit ...passed 00:07:53.305 Test: blob_io_unit_compatibility ...passed 00:07:53.305 Test: blob_ext_md_pages ...passed 00:07:53.305 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.305 Test: blob_esnap_io_512_512 ...passed 00:07:53.563 Test: blob_esnap_io_4096_512 ...passed 00:07:53.563 Test: 
blob_esnap_io_512_4096 ...passed 00:07:53.563 Suite: blob_bs_nocopy_extent 00:07:53.563 Test: blob_open ...passed 00:07:53.563 Test: blob_create ...[2024-07-11 02:31:18.490080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:53.563 passed 00:07:53.563 Test: blob_create_loop ...passed 00:07:53.563 Test: blob_create_fail ...[2024-07-11 02:31:18.603579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.563 passed 00:07:53.563 Test: blob_create_internal ...passed 00:07:53.822 Test: blob_create_zero_extent ...passed 00:07:53.822 Test: blob_snapshot ...passed 00:07:53.822 Test: blob_clone ...passed 00:07:53.822 Test: blob_inflate ...[2024-07-11 02:31:18.800752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:53.822 passed 00:07:53.822 Test: blob_delete ...passed 00:07:53.822 Test: blob_resize_test ...[2024-07-11 02:31:18.871636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:53.822 passed 00:07:54.082 Test: channel_ops ...passed 00:07:54.082 Test: blob_super ...passed 00:07:54.082 Test: blob_rw_verify_iov ...passed 00:07:54.082 Test: blob_unmap ...passed 00:07:54.082 Test: blob_iter ...passed 00:07:54.082 Test: blob_parse_md ...passed 00:07:54.082 Test: bs_load_pending_removal ...passed 00:07:54.082 Test: bs_unload ...[2024-07-11 02:31:19.165490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:54.340 passed 00:07:54.340 Test: bs_usable_clusters ...passed 00:07:54.340 Test: blob_crc ...[2024-07-11 02:31:19.239949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.340 [2024-07-11 02:31:19.240259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.340 passed 00:07:54.340 Test: blob_flags ...passed 00:07:54.340 Test: bs_version ...passed 00:07:54.340 Test: blob_set_xattrs_test ...[2024-07-11 02:31:19.357737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.340 [2024-07-11 02:31:19.358039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.340 passed 00:07:54.599 Test: blob_thin_prov_alloc ...passed 00:07:54.599 Test: blob_insert_cluster_msg_test ...passed 00:07:54.599 Test: blob_thin_prov_rw ...passed 00:07:54.599 Test: blob_thin_prov_rle ...passed 00:07:54.599 Test: blob_thin_prov_rw_iov ...passed 00:07:54.599 Test: blob_snapshot_rw ...passed 00:07:54.857 Test: blob_snapshot_rw_iov ...passed 00:07:54.857 Test: blob_inflate_rw ...passed 00:07:55.119 Test: blob_snapshot_freeze_io ...passed 00:07:55.119 Test: blob_operation_split_rw ...passed 00:07:55.386 Test: blob_operation_split_rw_iov ...passed 00:07:55.386 Test: blob_simultaneous_operations ...[2024-07-11 02:31:20.288468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.386 [2024-07-11 
02:31:20.288764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.386 [2024-07-11 02:31:20.289982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.386 [2024-07-11 02:31:20.290140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.386 [2024-07-11 02:31:20.302377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.386 [2024-07-11 02:31:20.302612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.386 [2024-07-11 02:31:20.302761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.386 [2024-07-11 02:31:20.303017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.386 passed 00:07:55.386 Test: blob_persist_test ...passed 00:07:55.386 Test: blob_decouple_snapshot ...passed 00:07:55.386 Test: blob_seek_io_unit ...passed 00:07:55.644 Test: blob_nested_freezes ...passed 00:07:55.644 Suite: blob_blob_nocopy_extent 00:07:55.644 Test: blob_write ...passed 00:07:55.644 Test: blob_read ...passed 00:07:55.644 Test: blob_rw_verify ...passed 00:07:55.644 Test: blob_rw_verify_iov_nomem ...passed 00:07:55.644 Test: blob_rw_iov_read_only ...passed 00:07:55.644 Test: blob_xattr ...passed 00:07:55.902 Test: blob_dirty_shutdown ...passed 00:07:55.902 Test: blob_is_degraded ...passed 00:07:55.902 Suite: blob_esnap_bs_nocopy_extent 00:07:55.902 Test: blob_esnap_create ...passed 00:07:55.902 Test: blob_esnap_thread_add_remove ...passed 00:07:55.902 Test: blob_esnap_clone_snapshot ...passed 00:07:55.902 Test: blob_esnap_clone_inflate ...passed 00:07:55.902 Test: blob_esnap_clone_decouple ...passed 00:07:55.902 Test: blob_esnap_clone_reload ...passed 00:07:56.161 Test: blob_esnap_hotplug ...passed 00:07:56.161 Suite: blob_copy_noextent 00:07:56.161 Test: blob_init ...[2024-07-11 02:31:21.026168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:56.161 passed 00:07:56.161 Test: blob_thin_provision ...passed 00:07:56.161 Test: blob_read_only ...passed 00:07:56.161 Test: bs_load ...[2024-07-11 02:31:21.075559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:56.161 passed 00:07:56.161 Test: bs_load_custom_cluster_size ...passed 00:07:56.161 Test: bs_load_after_failed_grow ...passed 00:07:56.161 Test: bs_cluster_sz ...[2024-07-11 02:31:21.102642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:56.161 [2024-07-11 02:31:21.102991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
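The bs_is_blob_deletable errors recurring through blob_relations and blob_simultaneous_operations above encode two refusal rules: an open snapshot cannot be deleted, and neither can a snapshot with more than one clone (the error strings imply a snapshot with at most one clone may still be removed). An illustrative restatement of that policy, using hypothetical types rather than the blobstore's internal ones:

#include <stdbool.h>

enum deletable_rc {
	DELETABLE_OK      =  0,
	DELETABLE_IS_OPEN = -1,  /* "Cannot remove snapshot because it is open" */
	DELETABLE_CLONES  = -2   /* "Cannot remove snapshot with more than one clone" */
};

struct blob_sketch {
	bool     is_snapshot;
	bool     is_open;
	unsigned clone_count;
};

static enum deletable_rc
is_blob_deletable_sketch(const struct blob_sketch *b)
{
	if (!b->is_snapshot) {
		return DELETABLE_OK;   /* only snapshots are restricted here */
	}
	if (b->is_open) {
		return DELETABLE_IS_OPEN;
	}
	if (b->clone_count > 1) {
		return DELETABLE_CLONES;
	}
	return DELETABLE_OK;           /* zero or one clone: deletion may proceed */
}

Each refusal then surfaces as the paired "bs_delete_blob_finish: *ERROR*: Failed to remove blob" entry seen throughout the log.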
00:07:56.161 [2024-07-11 02:31:21.103145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:56.161 passed 00:07:56.161 Test: bs_resize_md ...passed 00:07:56.161 Test: bs_destroy ...passed 00:07:56.161 Test: bs_type ...passed 00:07:56.161 Test: bs_super_block ...passed 00:07:56.161 Test: bs_test_recover_cluster_count ...passed 00:07:56.161 Test: bs_grow_live ...passed 00:07:56.161 Test: bs_grow_live_no_space ...passed 00:07:56.161 Test: bs_test_grow ...passed 00:07:56.161 Test: blob_serialize_test ...passed 00:07:56.161 Test: super_block_crc ...passed 00:07:56.161 Test: blob_thin_prov_write_count_io ...passed 00:07:56.161 Test: bs_load_iter_test ...passed 00:07:56.420 Test: blob_relations ...[2024-07-11 02:31:21.262265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.262633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.263271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.263468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 passed 00:07:56.420 Test: blob_relations2 ...[2024-07-11 02:31:21.277549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.277840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.278036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.278109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.280081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.280196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.280777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.420 [2024-07-11 02:31:21.280874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 passed 00:07:56.420 Test: blob_relations3 ...passed 00:07:56.420 Test: blobstore_clean_power_failure ...passed 00:07:56.420 Test: blob_delete_snapshot_power_failure ...[2024-07-11 02:31:21.450098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.420 [2024-07-11 02:31:21.462878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:56.420 [2024-07-11 02:31:21.462983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:56.420 [2024-07-11 02:31:21.463064] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.475997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.420 [2024-07-11 02:31:21.476105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:56.420 [2024-07-11 02:31:21.476140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:56.420 [2024-07-11 02:31:21.476164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.489081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:56.420 [2024-07-11 02:31:21.489208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.420 [2024-07-11 02:31:21.502198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:56.420 [2024-07-11 02:31:21.502326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.679 [2024-07-11 02:31:21.515432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:56.679 [2024-07-11 02:31:21.515558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.679 passed 00:07:56.679 Test: blob_create_snapshot_power_failure ...[2024-07-11 02:31:21.553280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:56.679 [2024-07-11 02:31:21.577626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.679 [2024-07-11 02:31:21.590206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:56.679 passed 00:07:56.679 Test: blob_io_unit ...passed 00:07:56.679 Test: blob_io_unit_compatibility ...passed 00:07:56.679 Test: blob_ext_md_pages ...passed 00:07:56.679 Test: blob_esnap_io_4096_4096 ...passed 00:07:56.679 Test: blob_esnap_io_512_512 ...passed 00:07:56.679 Test: blob_esnap_io_4096_512 ...passed 00:07:56.937 Test: blob_esnap_io_512_4096 ...passed 00:07:56.937 Suite: blob_bs_copy_noextent 00:07:56.937 Test: blob_open ...passed 00:07:56.937 Test: blob_create ...[2024-07-11 02:31:21.851669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:56.937 passed 00:07:56.937 Test: blob_create_loop ...passed 00:07:56.937 Test: blob_create_fail ...[2024-07-11 02:31:21.950450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:56.937 passed 00:07:56.937 Test: blob_create_internal ...passed 00:07:56.937 Test: blob_create_zero_extent ...passed 00:07:57.196 Test: blob_snapshot ...passed 00:07:57.196 Test: blob_clone ...passed 00:07:57.196 Test: blob_inflate ...[2024-07-11 02:31:22.125606] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:57.196 passed 00:07:57.196 Test: blob_delete ...passed 00:07:57.196 Test: blob_resize_test ...[2024-07-11 02:31:22.191442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:57.196 passed 00:07:57.196 Test: channel_ops ...passed 00:07:57.196 Test: blob_super ...passed 00:07:57.455 Test: blob_rw_verify_iov ...passed 00:07:57.455 Test: blob_unmap ...passed 00:07:57.455 Test: blob_iter ...passed 00:07:57.455 Test: blob_parse_md ...passed 00:07:57.455 Test: bs_load_pending_removal ...passed 00:07:57.455 Test: bs_unload ...[2024-07-11 02:31:22.474629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:57.455 passed 00:07:57.455 Test: bs_usable_clusters ...passed 00:07:57.455 Test: blob_crc ...[2024-07-11 02:31:22.546763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:57.455 [2024-07-11 02:31:22.546932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:57.713 passed 00:07:57.713 Test: blob_flags ...passed 00:07:57.713 Test: bs_version ...passed 00:07:57.713 Test: blob_set_xattrs_test ...[2024-07-11 02:31:22.654635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:57.713 [2024-07-11 02:31:22.654802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:57.713 passed 00:07:57.972 Test: blob_thin_prov_alloc ...passed 00:07:57.972 Test: blob_insert_cluster_msg_test ...passed 00:07:57.972 Test: blob_thin_prov_rw ...passed 00:07:57.972 Test: blob_thin_prov_rle ...passed 00:07:57.972 Test: blob_thin_prov_rw_iov ...passed 00:07:57.972 Test: blob_snapshot_rw ...passed 00:07:57.972 Test: blob_snapshot_rw_iov ...passed 00:07:58.230 Test: blob_inflate_rw ...passed 00:07:58.230 Test: blob_snapshot_freeze_io ...passed 00:07:58.489 Test: blob_operation_split_rw ...passed 00:07:58.746 Test: blob_operation_split_rw_iov ...passed 00:07:58.746 Test: blob_simultaneous_operations ...[2024-07-11 02:31:23.647056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:58.746 [2024-07-11 02:31:23.647167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.746 [2024-07-11 02:31:23.647681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:58.746 [2024-07-11 02:31:23.647721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.746 [2024-07-11 02:31:23.650448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:58.746 [2024-07-11 02:31:23.650511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.746 [2024-07-11 02:31:23.650610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:58.746 [2024-07-11 02:31:23.650632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.746 passed 00:07:58.746 Test: blob_persist_test ...passed 00:07:58.746 Test: blob_decouple_snapshot ...passed 00:07:58.746 Test: blob_seek_io_unit ...passed 00:07:58.746 Test: blob_nested_freezes ...passed 00:07:58.746 Suite: blob_blob_copy_noextent 00:07:59.003 Test: blob_write ...passed 00:07:59.003 Test: blob_read ...passed 00:07:59.003 Test: blob_rw_verify ...passed 00:07:59.003 Test: blob_rw_verify_iov_nomem ...passed 00:07:59.003 Test: blob_rw_iov_read_only ...passed 00:07:59.003 Test: blob_xattr ...passed 00:07:59.003 Test: blob_dirty_shutdown ...passed 00:07:59.003 Test: blob_is_degraded ...passed 00:07:59.003 Suite: blob_esnap_bs_copy_noextent 00:07:59.261 Test: blob_esnap_create ...passed 00:07:59.261 Test: blob_esnap_thread_add_remove ...passed 00:07:59.261 Test: blob_esnap_clone_snapshot ...passed 00:07:59.261 Test: blob_esnap_clone_inflate ...passed 00:07:59.261 Test: blob_esnap_clone_decouple ...passed 00:07:59.261 Test: blob_esnap_clone_reload ...passed 00:07:59.261 Test: blob_esnap_hotplug ...passed 00:07:59.261 Suite: blob_copy_extent 00:07:59.261 Test: blob_init ...[2024-07-11 02:31:24.339963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:59.261 passed 00:07:59.520 Test: blob_thin_provision ...passed 00:07:59.520 Test: blob_read_only ...passed 00:07:59.520 Test: bs_load ...[2024-07-11 02:31:24.390594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:59.520 passed 00:07:59.520 Test: bs_load_custom_cluster_size ...passed 00:07:59.520 Test: bs_load_after_failed_grow ...passed 00:07:59.520 Test: bs_cluster_sz ...[2024-07-11 02:31:24.417494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:59.520 [2024-07-11 02:31:24.417812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:59.520 [2024-07-11 02:31:24.417858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:59.520 passed 00:07:59.520 Test: bs_resize_md ...passed 00:07:59.520 Test: bs_destroy ...passed 00:07:59.520 Test: bs_type ...passed 00:07:59.520 Test: bs_super_block ...passed 00:07:59.520 Test: bs_test_recover_cluster_count ...passed 00:07:59.520 Test: bs_grow_live ...passed 00:07:59.520 Test: bs_grow_live_no_space ...passed 00:07:59.520 Test: bs_test_grow ...passed 00:07:59.520 Test: blob_serialize_test ...passed 00:07:59.520 Test: super_block_crc ...passed 00:07:59.520 Test: blob_thin_prov_write_count_io ...passed 00:07:59.520 Test: bs_load_iter_test ...passed 00:07:59.520 Test: blob_relations ...[2024-07-11 02:31:24.580198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.520 [2024-07-11 02:31:24.580323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.520 [2024-07-11 02:31:24.581317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.521 [2024-07-11 02:31:24.581451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.521 passed 00:07:59.521 Test: blob_relations2 ...[2024-07-11 02:31:24.596602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.521 [2024-07-11 02:31:24.596700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.521 [2024-07-11 02:31:24.596759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.521 [2024-07-11 02:31:24.596785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.521 [2024-07-11 02:31:24.598253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.521 [2024-07-11 02:31:24.598330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.521 [2024-07-11 02:31:24.598810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.521 [2024-07-11 02:31:24.598871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.521 passed 00:07:59.779 Test: blob_relations3 ...passed 00:07:59.779 Test: blobstore_clean_power_failure ...passed 00:07:59.779 Test: blob_delete_snapshot_power_failure ...[2024-07-11 02:31:24.761104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:59.779 [2024-07-11 02:31:24.774098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:59.779 [2024-07-11 02:31:24.787916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:59.779 [2024-07-11 02:31:24.788042] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:59.779 [2024-07-11 02:31:24.788077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.780 [2024-07-11 02:31:24.805517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:59.780 [2024-07-11 02:31:24.805606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:59.780 [2024-07-11 02:31:24.805661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:59.780 [2024-07-11 02:31:24.805690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.780 [2024-07-11 02:31:24.819363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:59.780 [2024-07-11 02:31:24.819507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:59.780 [2024-07-11 02:31:24.819546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:59.780 [2024-07-11 02:31:24.819570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.780 [2024-07-11 02:31:24.832797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:59.780 [2024-07-11 02:31:24.832905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.780 [2024-07-11 02:31:24.846348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:59.780 [2024-07-11 02:31:24.846512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.780 [2024-07-11 02:31:24.859744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:59.780 [2024-07-11 02:31:24.859845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.038 passed 00:08:00.038 Test: blob_create_snapshot_power_failure ...[2024-07-11 02:31:24.897998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:00.038 [2024-07-11 02:31:24.910582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:00.038 [2024-07-11 02:31:24.934903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:00.038 [2024-07-11 02:31:24.947462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:00.038 passed 00:08:00.038 Test: blob_io_unit ...passed 00:08:00.038 Test: blob_io_unit_compatibility ...passed 00:08:00.038 Test: blob_ext_md_pages ...passed 00:08:00.038 Test: blob_esnap_io_4096_4096 ...passed 00:08:00.038 Test: blob_esnap_io_512_512 ...passed 00:08:00.038 Test: blob_esnap_io_4096_512 ...passed 00:08:00.297 Test: 
blob_esnap_io_512_4096 ...passed 00:08:00.297 Suite: blob_bs_copy_extent 00:08:00.297 Test: blob_open ...passed 00:08:00.297 Test: blob_create ...[2024-07-11 02:31:25.194210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:00.297 passed 00:08:00.297 Test: blob_create_loop ...passed 00:08:00.297 Test: blob_create_fail ...[2024-07-11 02:31:25.296001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:00.297 passed 00:08:00.297 Test: blob_create_internal ...passed 00:08:00.297 Test: blob_create_zero_extent ...passed 00:08:00.556 Test: blob_snapshot ...passed 00:08:00.556 Test: blob_clone ...passed 00:08:00.556 Test: blob_inflate ...[2024-07-11 02:31:25.473725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:00.556 passed 00:08:00.556 Test: blob_delete ...passed 00:08:00.556 Test: blob_resize_test ...[2024-07-11 02:31:25.537899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:00.556 passed 00:08:00.556 Test: channel_ops ...passed 00:08:00.556 Test: blob_super ...passed 00:08:00.814 Test: blob_rw_verify_iov ...passed 00:08:00.814 Test: blob_unmap ...passed 00:08:00.814 Test: blob_iter ...passed 00:08:00.814 Test: blob_parse_md ...passed 00:08:00.814 Test: bs_load_pending_removal ...passed 00:08:00.814 Test: bs_unload ...[2024-07-11 02:31:25.809183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:00.814 passed 00:08:00.814 Test: bs_usable_clusters ...passed 00:08:00.814 Test: blob_crc ...[2024-07-11 02:31:25.880458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:00.814 [2024-07-11 02:31:25.880587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:00.814 passed 00:08:01.073 Test: blob_flags ...passed 00:08:01.073 Test: bs_version ...passed 00:08:01.073 Test: blob_set_xattrs_test ...[2024-07-11 02:31:25.983591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:01.073 [2024-07-11 02:31:25.983707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:01.073 passed 00:08:01.073 Test: blob_thin_prov_alloc ...passed 00:08:01.073 Test: blob_insert_cluster_msg_test ...passed 00:08:01.332 Test: blob_thin_prov_rw ...passed 00:08:01.332 Test: blob_thin_prov_rle ...passed 00:08:01.332 Test: blob_thin_prov_rw_iov ...passed 00:08:01.332 Test: blob_snapshot_rw ...passed 00:08:01.332 Test: blob_snapshot_rw_iov ...passed 00:08:01.590 Test: blob_inflate_rw ...passed 00:08:01.590 Test: blob_snapshot_freeze_io ...passed 00:08:01.849 Test: blob_operation_split_rw ...passed 00:08:01.849 Test: blob_operation_split_rw_iov ...passed 00:08:01.849 Test: blob_simultaneous_operations ...[2024-07-11 02:31:26.896166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:01.849 [2024-07-11 
02:31:26.896264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.849 [2024-07-11 02:31:26.896728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:01.849 [2024-07-11 02:31:26.896767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.849 [2024-07-11 02:31:26.899459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:01.849 [2024-07-11 02:31:26.899521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.849 [2024-07-11 02:31:26.899619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:01.849 [2024-07-11 02:31:26.899646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.849 passed 00:08:02.108 Test: blob_persist_test ...passed 00:08:02.108 Test: blob_decouple_snapshot ...passed 00:08:02.108 Test: blob_seek_io_unit ...passed 00:08:02.108 Test: blob_nested_freezes ...passed 00:08:02.108 Suite: blob_blob_copy_extent 00:08:02.108 Test: blob_write ...passed 00:08:02.108 Test: blob_read ...passed 00:08:02.108 Test: blob_rw_verify ...passed 00:08:02.366 Test: blob_rw_verify_iov_nomem ...passed 00:08:02.366 Test: blob_rw_iov_read_only ...passed 00:08:02.366 Test: blob_xattr ...passed 00:08:02.366 Test: blob_dirty_shutdown ...passed 00:08:02.366 Test: blob_is_degraded ...passed 00:08:02.366 Suite: blob_esnap_bs_copy_extent 00:08:02.366 Test: blob_esnap_create ...passed 00:08:02.366 Test: blob_esnap_thread_add_remove ...passed 00:08:02.624 Test: blob_esnap_clone_snapshot ...passed 00:08:02.624 Test: blob_esnap_clone_inflate ...passed 00:08:02.624 Test: blob_esnap_clone_decouple ...passed 00:08:02.624 Test: blob_esnap_clone_reload ...passed 00:08:02.624 Test: blob_esnap_hotplug ...passed 00:08:02.624 00:08:02.624 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.624 suites 16 16 n/a 0 0 00:08:02.624 tests 348 348 348 0 0 00:08:02.624 asserts 92605 92605 92605 0 n/a 00:08:02.624 00:08:02.624 Elapsed time = 13.629 seconds 00:08:02.624 02:31:27 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:02.624 00:08:02.624 00:08:02.624 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.624 http://cunit.sourceforge.net/ 00:08:02.624 00:08:02.624 00:08:02.624 Suite: blob_bdev 00:08:02.624 Test: create_bs_dev ...passed 00:08:02.624 Test: create_bs_dev_ro ...[2024-07-11 02:31:27.700306] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:02.624 passed 00:08:02.624 Test: create_bs_dev_rw ...passed 00:08:02.624 Test: claim_bs_dev ...[2024-07-11 02:31:27.700786] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:02.624 passed 00:08:02.625 Test: claim_bs_dev_ro ...passed 00:08:02.625 Test: deferred_destroy_refs ...passed 00:08:02.625 Test: deferred_destroy_channels ...passed 00:08:02.625 Test: deferred_destroy_threads ...passed 00:08:02.625 00:08:02.625 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.625 suites 1 1 n/a 0 0 00:08:02.625 tests 8 8 8 0 0 00:08:02.625 
asserts 119 119 119 0 n/a 00:08:02.625 00:08:02.625 Elapsed time = 0.001 seconds 00:08:02.882 02:31:27 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:02.882 00:08:02.882 00:08:02.883 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.883 http://cunit.sourceforge.net/ 00:08:02.883 00:08:02.883 00:08:02.883 Suite: tree 00:08:02.883 Test: blobfs_tree_op_test ...passed 00:08:02.883 00:08:02.883 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.883 suites 1 1 n/a 0 0 00:08:02.883 tests 1 1 1 0 0 00:08:02.883 asserts 27 27 27 0 n/a 00:08:02.883 00:08:02.883 Elapsed time = 0.000 seconds 00:08:02.883 02:31:27 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:02.883 00:08:02.883 00:08:02.883 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.883 http://cunit.sourceforge.net/ 00:08:02.883 00:08:02.883 00:08:02.883 Suite: blobfs_async_ut 00:08:02.883 Test: fs_init ...passed 00:08:02.883 Test: fs_open ...passed 00:08:02.883 Test: fs_create ...passed 00:08:02.883 Test: fs_truncate ...passed 00:08:02.883 Test: fs_rename ...[2024-07-11 02:31:27.895205] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:02.883 passed 00:08:02.883 Test: fs_rw_async ...passed 00:08:02.883 Test: fs_writev_readv_async ...passed 00:08:02.883 Test: tree_find_buffer_ut ...passed 00:08:02.883 Test: channel_ops ...passed 00:08:02.883 Test: channel_ops_sync ...passed 00:08:02.883 00:08:02.883 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.883 suites 1 1 n/a 0 0 00:08:02.883 tests 10 10 10 0 0 00:08:02.883 asserts 292 292 292 0 n/a 00:08:02.883 00:08:02.883 Elapsed time = 0.178 seconds 00:08:03.141 02:31:27 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:03.141 00:08:03.141 00:08:03.141 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.141 http://cunit.sourceforge.net/ 00:08:03.141 00:08:03.141 00:08:03.141 Suite: blobfs_sync_ut 00:08:03.142 Test: cache_read_after_write ...[2024-07-11 02:31:28.083595] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:03.142 passed 00:08:03.142 Test: file_length ...passed 00:08:03.142 Test: append_write_to_extend_blob ...passed 00:08:03.142 Test: partial_buffer ...passed 00:08:03.142 Test: cache_write_null_buffer ...passed 00:08:03.142 Test: fs_create_sync ...passed 00:08:03.142 Test: fs_rename_sync ...passed 00:08:03.142 Test: cache_append_no_cache ...passed 00:08:03.142 Test: fs_delete_file_without_close ...passed 00:08:03.142 00:08:03.142 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.142 suites 1 1 n/a 0 0 00:08:03.142 tests 9 9 9 0 0 00:08:03.142 asserts 345 345 345 0 n/a 00:08:03.142 00:08:03.142 Elapsed time = 0.383 seconds 00:08:03.401 02:31:28 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:03.401 00:08:03.401 00:08:03.401 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.401 http://cunit.sourceforge.net/ 00:08:03.401 00:08:03.401 00:08:03.401 Suite: blobfs_bdev_ut 00:08:03.401 Test: spdk_blobfs_bdev_detect_test ...[2024-07-11 02:31:28.275161] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
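Every "Run Summary: Type Total Ran Passed Failed Inactive" block in this log is standard CUnit basic-mode output. A generic skeleton of how such a test binary registers and runs a suite, using the stock CUnit API (a generic example, not the contents of any particular SPDK *_ut source file):

#include <CUnit/Basic.h>

static void test_example(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);  /* emits the per-test lines and Run Summary */
	CU_basic_run_tests();
	unsigned int failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures > 0 ? 1 : 0;
}

CU_BRM_VERBOSE is what produces the "Suite: ... Test: ... passed" lines and the trailing summary table seen after each suite above.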
00:08:03.401 passed 00:08:03.401 Test: spdk_blobfs_bdev_create_test ...passed 00:08:03.401 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:03.401 00:08:03.401 [2024-07-11 02:31:28.275509] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:03.401 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.401 suites 1 1 n/a 0 0 00:08:03.401 tests 3 3 3 0 0 00:08:03.401 asserts 9 9 9 0 n/a 00:08:03.401 00:08:03.401 Elapsed time = 0.000 seconds 00:08:03.401 00:08:03.401 real 0m14.395s 00:08:03.401 user 0m13.790s 00:08:03.401 sys 0m0.763s 00:08:03.401 ************************************ 00:08:03.401 02:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.401 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.401 END TEST unittest_blob_blobfs 00:08:03.401 ************************************ 00:08:03.401 02:31:28 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:08:03.401 02:31:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.401 02:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.401 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.401 ************************************ 00:08:03.401 START TEST unittest_event 00:08:03.401 ************************************ 00:08:03.401 02:31:28 -- common/autotest_common.sh@1104 -- # unittest_event 00:08:03.401 02:31:28 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:03.401 00:08:03.401 00:08:03.401 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.401 http://cunit.sourceforge.net/ 00:08:03.401 00:08:03.401 00:08:03.401 Suite: app_suite 00:08:03.401 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:03.401 options: 00:08:03.401 -c, --config JSON config file (default none) 00:08:03.401 --json JSON config file (default none) 00:08:03.401 --json-ignore-init-errors 00:08:03.401 don't exit on invalid config entry 00:08:03.401 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:03.401 -g, --single-file-segments 00:08:03.401 force creating just one hugetlbfs file 00:08:03.401 -h, --help show this usage 00:08:03.401 -i, --shm-id shared memory ID (optional) 00:08:03.401 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:03.401 --lcores lcore to CPU mapping list. The list is in the format: 00:08:03.401 [<,lcores[@CPUs]>...] 00:08:03.401 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:03.401 Within the group, '-' is used for range separator, 00:08:03.401 ',' is used for single number separator. 00:08:03.401 '( )' can be omitted for single element group, 00:08:03.401 '@' can be omitted if cpus and lcores have the same value 00:08:03.401 -n, --mem-channels channel number of memory channels used for DPDK 00:08:03.401 -p, --main-core main (primary) core for DPDK 00:08:03.401 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:03.401 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:03.401 --disable-cpumask-locks Disable CPU core lock files. 
00:08:03.401 --silence-noticelog disable notice level logging to stderr 00:08:03.401 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:03.401 -u, --no-pci disable PCI access 00:08:03.401 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:03.401 --max-delay maximum reactor delay (in microseconds) 00:08:03.401 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:03.401 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:03.401 -R, --huge-unlink unlink huge files after initialization 00:08:03.401 -v, --version print SPDK version 00:08:03.401 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:03.401 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:03.401 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:03.401 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:03.401 Tracepoints vary in size and can use more than one trace entry. 00:08:03.401 --rpcs-allowed comma-separated list of permitted RPCS 00:08:03.401 --env-context Opaque context for use of the env implementation 00:08:03.401 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:03.401 --no-huge run without using hugepages 00:08:03.401 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:03.401 -e, --tpoint-group [:] 00:08:03.401 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:03.401 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:03.401 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:03.401 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:03.401 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:03.401 app_ut: invalid option -- 'z' 00:08:03.401 app_ut [options] 00:08:03.401 options: 00:08:03.401 -c, --config JSON config file (default none) 00:08:03.401 --json JSON config file (default none) 00:08:03.401 --json-ignore-init-errors 00:08:03.401 don't exit on invalid config entry 00:08:03.401 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:03.401 -g, --single-file-segments 00:08:03.401 force creating just one hugetlbfs file 00:08:03.401 -h, --help show this usage 00:08:03.401 -i, --shm-id shared memory ID (optional) 00:08:03.401 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:03.401 --lcores lcore to CPU mapping list. The list is in the format: 00:08:03.401 [<,lcores[@CPUs]>...] 00:08:03.401 app_ut: unrecognized option '--test-long-opt' 00:08:03.401 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:03.401 Within the group, '-' is used for range separator, 00:08:03.401 ',' is used for single number separator. 
00:08:03.401 '( )' can be omitted for single element group, 00:08:03.401 '@' can be omitted if cpus and lcores have the same value 00:08:03.401 -n, --mem-channels channel number of memory channels used for DPDK 00:08:03.401 -p, --main-core main (primary) core for DPDK 00:08:03.401 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:03.401 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:03.401 --disable-cpumask-locks Disable CPU core lock files. 00:08:03.401 --silence-noticelog disable notice level logging to stderr 00:08:03.401 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:03.401 -u, --no-pci disable PCI access 00:08:03.401 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:03.401 --max-delay maximum reactor delay (in microseconds) 00:08:03.401 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:03.401 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:03.401 -R, --huge-unlink unlink huge files after initialization 00:08:03.401 -v, --version print SPDK version 00:08:03.401 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:03.401 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:03.401 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:03.401 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:03.401 Tracepoints vary in size and can use more than one trace entry. 00:08:03.401 --rpcs-allowed comma-separated list of permitted RPCS 00:08:03.402 --env-context Opaque context for use of the env implementation 00:08:03.402 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:03.402 --no-huge run without using hugepages 00:08:03.402 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:03.402 -e, --tpoint-group [:] 00:08:03.402 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:03.402 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:03.402 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:03.402 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:03.402 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:03.402 [2024-07-11 02:31:28.361539] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
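test_spdk_app_parse_args drives the parser through its rejection paths: an unknown short option ('z'), an unrecognized long option ('--test-long-opt'), an app-supplied short option that duplicates a generic SPDK one ('c'), and mutually exclusive flags. The duplicate-option rule can be illustrated with a small stand-alone check; the helper and the generic option string below are hypothetical, not SPDK's actual parser internals from lib/event/app.c.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Reject any app-specific short option that already appears in the
 * generic getopt-style option string, as with the duplicated 'c' above. */
static bool
find_duplicated_opt(const char *generic_opts, const char *app_opts, char *dup)
{
	for (const char *p = app_opts; *p != '\0'; p++) {
		if (*p != ':' && strchr(generic_opts, *p) != NULL) {
			*dup = *p;
			return true;
		}
	}
	return false;
}

int main(void)
{
	char dup;
	/* "c:..." stands in for the generic SPDK options; "cW:" for the app's. */
	if (find_duplicated_opt("c:dghi:m:n:p:r:s:uvA:B:L:Re:", "cW:", &dup)) {
		printf("Duplicated option '%c' between app-specific command line "
		       "parameter and generic spdk opts.\n", dup);
	}
	return 0;
}

The printed message mirrors the first parse_args error above; the real generic option string is internal to the library and the one shown is only a plausible subset.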
00:08:03.402 [2024-07-11 02:31:28.361818] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:03.402 app_ut [options] 00:08:03.402 options: 00:08:03.402 -c, --config JSON config file (default none) 00:08:03.402 --json JSON config file (default none) 00:08:03.402 --json-ignore-init-errors 00:08:03.402 don't exit on invalid config entry 00:08:03.402 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:03.402 -g, --single-file-segments 00:08:03.402 force creating just one hugetlbfs file 00:08:03.402 -h, --help show this usage 00:08:03.402 -i, --shm-id shared memory ID (optional) 00:08:03.402 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:03.402 --lcores lcore to CPU mapping list. The list is in the format: 00:08:03.402 [<,lcores[@CPUs]>...] 00:08:03.402 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:03.402 Within the group, '-' is used for range separator, 00:08:03.402 ',' is used for single number separator. 00:08:03.402 '( )' can be omitted for single element group, 00:08:03.402 '@' can be omitted if cpus and lcores have the same value 00:08:03.402 -n, --mem-channels channel number of memory channels used for DPDK 00:08:03.402 -p, --main-core main (primary) core for DPDK 00:08:03.402 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:03.402 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:03.402 --disable-cpumask-locks Disable CPU core lock files. 00:08:03.402 --silence-noticelog disable notice level logging to stderr 00:08:03.402 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:03.402 -u, --no-pci disable PCI access 00:08:03.402 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:03.402 --max-delay maximum reactor delay (in microseconds) 00:08:03.402 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:03.402 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:03.402 -R, --huge-unlink unlink huge files after initialization 00:08:03.402 -v, --version print SPDK version 00:08:03.402 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:03.402 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:03.402 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:03.402 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:03.402 Tracepoints vary in size and can use more than one trace entry. 00:08:03.402 --rpcs-allowed comma-separated list of permitted RPCS 00:08:03.402 --env-context Opaque context for use of the env implementation 00:08:03.402 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:03.402 --no-huge run without using hugepages 00:08:03.402 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:03.402 -e, --tpoint-group [:] 00:08:03.402 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:03.402 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:03.402 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:08:03.402 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:03.402 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:03.402 passed 00:08:03.402 00:08:03.402 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.402 suites 1 1 n/a 0 0 00:08:03.402 tests 1 1 1 0 0 00:08:03.402 asserts 8 8 8 0 n/a 00:08:03.402 00:08:03.402 Elapsed time = 0.001 seconds 00:08:03.402 [2024-07-11 02:31:28.362003] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:03.402 02:31:28 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:03.402 00:08:03.402 00:08:03.402 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.402 http://cunit.sourceforge.net/ 00:08:03.402 00:08:03.402 00:08:03.402 Suite: app_suite 00:08:03.402 Test: test_create_reactor ...passed 00:08:03.402 Test: test_init_reactors ...passed 00:08:03.402 Test: test_event_call ...passed 00:08:03.402 Test: test_schedule_thread ...passed 00:08:03.402 Test: test_reschedule_thread ...passed 00:08:03.402 Test: test_bind_thread ...passed 00:08:03.402 Test: test_for_each_reactor ...passed 00:08:03.402 Test: test_reactor_stats ...passed 00:08:03.402 Test: test_scheduler ...passed 00:08:03.402 Test: test_governor ...passed 00:08:03.402 00:08:03.402 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.402 suites 1 1 n/a 0 0 00:08:03.402 tests 10 10 10 0 0 00:08:03.402 asserts 344 344 344 0 n/a 00:08:03.402 00:08:03.402 Elapsed time = 0.019 seconds 00:08:03.402 00:08:03.402 real 0m0.088s 00:08:03.402 user 0m0.062s 00:08:03.402 sys 0m0.027s 00:08:03.402 02:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.402 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.402 ************************************ 00:08:03.402 END TEST unittest_event 00:08:03.402 ************************************ 00:08:03.402 02:31:28 -- unit/unittest.sh@233 -- # uname -s 00:08:03.402 02:31:28 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:08:03.402 02:31:28 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:08:03.402 02:31:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.402 02:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.402 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.402 ************************************ 00:08:03.402 START TEST unittest_ftl 00:08:03.402 ************************************ 00:08:03.402 02:31:28 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:08:03.402 02:31:28 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:03.661 00:08:03.661 00:08:03.661 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.661 http://cunit.sourceforge.net/ 00:08:03.661 00:08:03.661 00:08:03.661 Suite: ftl_band_suite 00:08:03.661 Test: test_band_block_offset_from_addr_base ...passed 00:08:03.661 Test: test_band_block_offset_from_addr_offset ...passed 00:08:03.661 Test: test_band_addr_from_block_offset ...passed 00:08:03.661 Test: test_band_set_addr ...passed 00:08:03.661 Test: test_invalidate_addr ...passed 00:08:03.661 Test: test_next_xfer_addr ...passed 00:08:03.661 00:08:03.661 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.661 suites 1 1 n/a 0 0 00:08:03.661 tests 6 6 6 0 0 00:08:03.661 asserts 30356 30356 30356 0 n/a 00:08:03.661 
00:08:03.661 Elapsed time = 0.172 seconds 00:08:03.661 02:31:28 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:03.661 00:08:03.661 00:08:03.661 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.661 http://cunit.sourceforge.net/ 00:08:03.661 00:08:03.661 00:08:03.661 Suite: ftl_bitmap 00:08:03.661 Test: test_ftl_bitmap_create ...[2024-07-11 02:31:28.737278] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:03.661 [2024-07-11 02:31:28.737570] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:03.661 passed 00:08:03.661 Test: test_ftl_bitmap_get ...passed 00:08:03.661 Test: test_ftl_bitmap_set ...passed 00:08:03.661 Test: test_ftl_bitmap_clear ...passed 00:08:03.661 Test: test_ftl_bitmap_find_first_set ...passed 00:08:03.661 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:03.661 Test: test_ftl_bitmap_count_set ...passed 00:08:03.661 00:08:03.661 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.661 suites 1 1 n/a 0 0 00:08:03.661 tests 7 7 7 0 0 00:08:03.661 asserts 137 137 137 0 n/a 00:08:03.661 00:08:03.661 Elapsed time = 0.001 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_io_suite 00:08:03.920 Test: test_completion ...passed 00:08:03.920 Test: test_multiple_ios ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 2 2 2 0 0 00:08:03.920 asserts 47 47 47 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.003 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_mngt 00:08:03.920 Test: test_next_step ...passed 00:08:03.920 Test: test_continue_step ...passed 00:08:03.920 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:03.920 Test: test_fail_step ...passed 00:08:03.920 Test: test_mngt_call_and_call_rollback ...passed 00:08:03.920 Test: test_nested_process_failure ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 6 6 6 0 0 00:08:03.920 asserts 176 176 176 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.001 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_mempool 00:08:03.920 Test: test_ftl_mempool_create ...passed 00:08:03.920 Test: test_ftl_mempool_get_put ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 2 2 2 0 0 00:08:03.920 asserts 36 36 36 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.000 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_addr64_suite 00:08:03.920 Test: test_addr_cached ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 1 1 1 0 0 00:08:03.920 asserts 1536 1536 1536 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.000 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_sb 00:08:03.920 Test: test_sb_crc_v2 ...passed 00:08:03.920 Test: test_sb_crc_v3 ...passed 00:08:03.920 Test: test_sb_v3_md_layout ...[2024-07-11 02:31:28.889771] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:03.920 [2024-07-11 02:31:28.890238] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:03.920 [2024-07-11 02:31:28.890312] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:03.920 [2024-07-11 02:31:28.890360] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:03.920 [2024-07-11 02:31:28.890400] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:03.920 [2024-07-11 02:31:28.890506] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:03.920 [2024-07-11 02:31:28.890545] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:03.920 [2024-07-11 02:31:28.890613] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:03.920 [2024-07-11 02:31:28.890729] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:03.920 [2024-07-11 02:31:28.890784] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:03.920 [2024-07-11 02:31:28.890821] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:03.920 passed 00:08:03.920 Test: test_sb_v5_md_layout ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 4 4 4 0 0 00:08:03.920 asserts 148 148 148 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.003 seconds 00:08:03.920 02:31:28 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:03.920 00:08:03.920 00:08:03.920 CUnit - A unit testing framework 
for C - Version 2.1-3 00:08:03.920 http://cunit.sourceforge.net/ 00:08:03.920 00:08:03.920 00:08:03.920 Suite: ftl_layout_upgrade 00:08:03.920 Test: test_l2p_upgrade ...passed 00:08:03.920 00:08:03.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.920 suites 1 1 n/a 0 0 00:08:03.920 tests 1 1 1 0 0 00:08:03.920 asserts 140 140 140 0 n/a 00:08:03.920 00:08:03.920 Elapsed time = 0.001 seconds 00:08:03.920 00:08:03.920 real 0m0.454s 00:08:03.920 user 0m0.227s 00:08:03.920 sys 0m0.229s 00:08:03.920 02:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.920 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.920 ************************************ 00:08:03.920 END TEST unittest_ftl 00:08:03.920 ************************************ 00:08:03.920 02:31:28 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:03.920 02:31:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.920 02:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.920 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.920 ************************************ 00:08:03.920 START TEST unittest_accel 00:08:03.920 ************************************ 00:08:03.921 02:31:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:03.921 00:08:03.921 00:08:03.921 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.921 http://cunit.sourceforge.net/ 00:08:03.921 00:08:03.921 00:08:03.921 Suite: accel_sequence 00:08:03.921 Test: test_sequence_fill_copy ...passed 00:08:03.921 Test: test_sequence_abort ...passed 00:08:03.921 Test: test_sequence_append_error ...passed 00:08:04.179 Test: test_sequence_completion_error ...[2024-07-11 02:31:29.012577] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb0820797c0 00:08:04.179 [2024-07-11 02:31:29.013034] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fb0820797c0 00:08:04.179 [2024-07-11 02:31:29.013230] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fb0820797c0 00:08:04.179 [2024-07-11 02:31:29.013395] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fb0820797c0 00:08:04.179 passed 00:08:04.179 Test: test_sequence_decompress ...passed 00:08:04.179 Test: test_sequence_reverse ...passed 00:08:04.179 Test: test_sequence_copy_elision ...passed 00:08:04.179 Test: test_sequence_accel_buffers ...passed 00:08:04.179 Test: test_sequence_memory_domain ...[2024-07-11 02:31:29.026301] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:04.179 [2024-07-11 02:31:29.026614] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:04.179 passed 00:08:04.179 Test: test_sequence_module_memory_domain ...passed 00:08:04.179 Test: test_sequence_crypto ...passed 00:08:04.179 Test: test_sequence_driver ...[2024-07-11 02:31:29.034162] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fb0814517c0 using driver: ut 00:08:04.179 
[2024-07-11 02:31:29.034391] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb0814517c0 through driver: ut 00:08:04.179 passed 00:08:04.179 Test: test_sequence_same_iovs ...passed 00:08:04.179 Test: test_sequence_crc32 ...passed 00:08:04.179 Suite: accel 00:08:04.179 Test: test_spdk_accel_task_complete ...passed 00:08:04.179 Test: test_get_task ...passed 00:08:04.179 Test: test_spdk_accel_submit_copy ...passed 00:08:04.179 Test: test_spdk_accel_submit_dualcast ...[2024-07-11 02:31:29.040695] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:04.179 [2024-07-11 02:31:29.040855] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:04.179 passed 00:08:04.179 Test: test_spdk_accel_submit_compare ...passed 00:08:04.179 Test: test_spdk_accel_submit_fill ...passed 00:08:04.179 Test: test_spdk_accel_submit_crc32c ...passed 00:08:04.179 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:04.179 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:04.179 Test: test_spdk_accel_submit_xor ...passed 00:08:04.179 Test: test_spdk_accel_module_find_by_name ...passed 00:08:04.179 Test: test_spdk_accel_module_register ...passed 00:08:04.179 00:08:04.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.179 suites 2 2 n/a 0 0 00:08:04.179 tests 26 26 26 0 0 00:08:04.179 asserts 831 831 831 0 n/a 00:08:04.179 00:08:04.180 Elapsed time = 0.036 seconds 00:08:04.180 00:08:04.180 real 0m0.078s 00:08:04.180 user 0m0.035s 00:08:04.180 sys 0m0.038s 00:08:04.180 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.180 END TEST unittest_accel 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:04.180 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.180 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.180 ************************************ 00:08:04.180 START TEST unittest_ioat 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:04.180 00:08:04.180 00:08:04.180 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.180 http://cunit.sourceforge.net/ 00:08:04.180 00:08:04.180 00:08:04.180 Suite: ioat 00:08:04.180 Test: ioat_state_check ...passed 00:08:04.180 00:08:04.180 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.180 suites 1 1 n/a 0 0 00:08:04.180 tests 1 1 1 0 0 00:08:04.180 asserts 32 32 32 0 n/a 00:08:04.180 00:08:04.180 Elapsed time = 0.000 seconds 00:08:04.180 00:08:04.180 real 0m0.027s 00:08:04.180 user 0m0.007s 00:08:04.180 sys 0m0.021s 00:08:04.180 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.180 ************************************ 00:08:04.180 END TEST unittest_ioat 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:04.180 02:31:29 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:04.180 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.180 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.180 ************************************ 00:08:04.180 START TEST unittest_idxd_user 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:04.180 00:08:04.180 00:08:04.180 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.180 http://cunit.sourceforge.net/ 00:08:04.180 00:08:04.180 00:08:04.180 Suite: idxd_user 00:08:04.180 Test: test_idxd_wait_cmd ...[2024-07-11 02:31:29.205540] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:04.180 [2024-07-11 02:31:29.205806] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:04.180 passed 00:08:04.180 Test: test_idxd_reset_dev ...passed 00:08:04.180 Test: test_idxd_group_config ...[2024-07-11 02:31:29.205922] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:04.180 [2024-07-11 02:31:29.205958] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:04.180 passed 00:08:04.180 Test: test_idxd_wq_config ...passed 00:08:04.180 00:08:04.180 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.180 suites 1 1 n/a 0 0 00:08:04.180 tests 4 4 4 0 0 00:08:04.180 asserts 20 20 20 0 n/a 00:08:04.180 00:08:04.180 Elapsed time = 0.001 seconds 00:08:04.180 00:08:04.180 real 0m0.031s 00:08:04.180 user 0m0.004s 00:08:04.180 sys 0m0.028s 00:08:04.180 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.180 END TEST unittest_idxd_user 00:08:04.180 ************************************ 00:08:04.180 02:31:29 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:08:04.180 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.180 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.180 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.439 ************************************ 00:08:04.439 START TEST unittest_iscsi 00:08:04.439 ************************************ 00:08:04.439 02:31:29 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:08:04.439 02:31:29 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:04.439 00:08:04.439 00:08:04.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.439 http://cunit.sourceforge.net/ 00:08:04.439 00:08:04.439 00:08:04.439 Suite: conn_suite 00:08:04.439 Test: read_task_split_in_order_case ...passed 00:08:04.439 Test: read_task_split_reverse_order_case ...passed 00:08:04.439 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:04.439 Test: process_non_read_task_completion_test ...passed 00:08:04.439 Test: free_tasks_on_connection ...passed 00:08:04.439 Test: free_tasks_with_queued_datain ...passed 00:08:04.439 Test: 
abort_queued_datain_task_test ...passed 00:08:04.439 Test: abort_queued_datain_tasks_test ...passed 00:08:04.439 00:08:04.439 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.439 suites 1 1 n/a 0 0 00:08:04.439 tests 8 8 8 0 0 00:08:04.439 asserts 230 230 230 0 n/a 00:08:04.439 00:08:04.439 Elapsed time = 0.000 seconds 00:08:04.439 02:31:29 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:04.439 00:08:04.439 00:08:04.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.439 http://cunit.sourceforge.net/ 00:08:04.439 00:08:04.439 00:08:04.439 Suite: iscsi_suite 00:08:04.439 Test: param_negotiation_test ...passed 00:08:04.439 Test: list_negotiation_test ...passed 00:08:04.439 Test: parse_valid_test ...passed 00:08:04.439 Test: parse_invalid_test ...[2024-07-11 02:31:29.325686] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:04.439 [2024-07-11 02:31:29.325926] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:04.439 [2024-07-11 02:31:29.325964] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:08:04.439 [2024-07-11 02:31:29.326028] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:04.439 [2024-07-11 02:31:29.326144] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:04.439 [2024-07-11 02:31:29.326209] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:04.439 [2024-07-11 02:31:29.326331] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:04.439 passed 00:08:04.439 00:08:04.439 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.439 suites 1 1 n/a 0 0 00:08:04.439 tests 4 4 4 0 0 00:08:04.439 asserts 161 161 161 0 n/a 00:08:04.439 00:08:04.439 Elapsed time = 0.006 seconds 00:08:04.439 02:31:29 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:04.439 00:08:04.439 00:08:04.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.439 http://cunit.sourceforge.net/ 00:08:04.439 00:08:04.439 00:08:04.439 Suite: iscsi_target_node_suite 00:08:04.439 Test: add_lun_test_cases ...[2024-07-11 02:31:29.355192] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:04.439 [2024-07-11 02:31:29.355480] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:04.440 [2024-07-11 02:31:29.355566] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:04.440 passed 00:08:04.440 Test: allow_any_allowed ...passed 00:08:04.440 Test: allow_ipv6_allowed ...passed 00:08:04.440 Test: allow_ipv6_denied ...passed 00:08:04.440 Test: allow_ipv6_invalid ...passed 00:08:04.440 Test: allow_ipv4_allowed ...passed 00:08:04.440 Test: allow_ipv4_denied ...passed 00:08:04.440 Test: allow_ipv4_invalid ...passed 00:08:04.440 Test: node_access_allowed ...[2024-07-11 02:31:29.355602] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:04.440 [2024-07-11 02:31:29.355624] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: 
*ERROR*: spdk_scsi_dev_add_lun failed 00:08:04.440 passed 00:08:04.440 Test: node_access_denied_by_empty_netmask ...passed 00:08:04.440 Test: node_access_multi_initiator_groups_cases ...passed 00:08:04.440 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:04.440 Test: chap_param_test_cases ...[2024-07-11 02:31:29.356032] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:04.440 [2024-07-11 02:31:29.356065] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:04.440 [2024-07-11 02:31:29.356110] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:04.440 [2024-07-11 02:31:29.356131] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:04.440 [2024-07-11 02:31:29.356157] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:04.440 passed 00:08:04.440 00:08:04.440 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.440 suites 1 1 n/a 0 0 00:08:04.440 tests 13 13 13 0 0 00:08:04.440 asserts 50 50 50 0 n/a 00:08:04.440 00:08:04.440 Elapsed time = 0.001 seconds 00:08:04.440 02:31:29 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:04.440 00:08:04.440 00:08:04.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.440 http://cunit.sourceforge.net/ 00:08:04.440 00:08:04.440 00:08:04.440 Suite: iscsi_suite 00:08:04.440 Test: op_login_check_target_test ...[2024-07-11 02:31:29.391096] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:04.440 passed 00:08:04.440 Test: op_login_session_normal_test ...[2024-07-11 02:31:29.391541] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:04.440 [2024-07-11 02:31:29.391594] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:04.440 [2024-07-11 02:31:29.391636] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:04.440 [2024-07-11 02:31:29.391685] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:04.440 [2024-07-11 02:31:29.391784] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:04.440 [2024-07-11 02:31:29.391892] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:04.440 passed 00:08:04.440 Test: maxburstlength_test ...[2024-07-11 02:31:29.391991] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:04.440 [2024-07-11 02:31:29.392245] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:04.440 [2024-07-11 02:31:29.392312] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:08:04.440 passed 00:08:04.440 Test: underflow_for_read_transfer_test ...passed 00:08:04.440 Test: underflow_for_zero_read_transfer_test ...passed 00:08:04.440 Test: underflow_for_request_sense_test ...passed 00:08:04.440 Test: underflow_for_check_condition_test ...passed 00:08:04.440 Test: add_transfer_task_test ...passed 00:08:04.440 Test: get_transfer_task_test ...passed 00:08:04.440 Test: del_transfer_task_test ...passed 00:08:04.440 Test: clear_all_transfer_tasks_test ...passed 00:08:04.440 Test: build_iovs_test ...passed 00:08:04.440 Test: build_iovs_with_md_test ...passed 00:08:04.440 Test: pdu_hdr_op_login_test ...[2024-07-11 02:31:29.393919] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:04.440 [2024-07-11 02:31:29.394049] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:04.440 [2024-07-11 02:31:29.394139] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_text_test ...[2024-07-11 02:31:29.394243] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:04.440 [2024-07-11 02:31:29.394323] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:04.440 [2024-07-11 02:31:29.394362] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_logout_test ...[2024-07-11 02:31:29.394444] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_scsi_test ...[2024-07-11 02:31:29.394600] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:04.440 [2024-07-11 02:31:29.394640] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:04.440 [2024-07-11 02:31:29.394687] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:04.440 [2024-07-11 02:31:29.394786] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:04.440 [2024-07-11 02:31:29.394883] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:04.440 [2024-07-11 02:31:29.395095] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-11 02:31:29.395194] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:04.440 [2024-07-11 02:31:29.395266] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_nopout_test ...[2024-07-11 02:31:29.395495] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:04.440 [2024-07-11 02:31:29.395599] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:04.440 [2024-07-11 02:31:29.395631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:04.440 [2024-07-11 02:31:29.395662] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:04.440 passed 00:08:04.440 Test: pdu_hdr_op_data_test ...[2024-07-11 02:31:29.395698] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:04.440 [2024-07-11 02:31:29.395767] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:04.440 [2024-07-11 02:31:29.395841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:04.440 [2024-07-11 02:31:29.395891] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:04.440 [2024-07-11 02:31:29.395969] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:04.440 [2024-07-11 02:31:29.396067] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:04.440 [2024-07-11 02:31:29.396110] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:04.440 passed 00:08:04.440 Test: empty_text_with_cbit_test ...passed 00:08:04.440 Test: pdu_payload_read_test ...[2024-07-11 
02:31:29.398228] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:04.440 passed 00:08:04.440 Test: data_out_pdu_sequence_test ...passed 00:08:04.440 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:04.440 00:08:04.440 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.440 suites 1 1 n/a 0 0 00:08:04.440 tests 24 24 24 0 0 00:08:04.440 asserts 150253 150253 150253 0 n/a 00:08:04.440 00:08:04.440 Elapsed time = 0.017 seconds 00:08:04.440 02:31:29 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:04.440 00:08:04.440 00:08:04.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.440 http://cunit.sourceforge.net/ 00:08:04.440 00:08:04.440 00:08:04.440 Suite: init_grp_suite 00:08:04.440 Test: create_initiator_group_success_case ...passed 00:08:04.440 Test: find_initiator_group_success_case ...passed 00:08:04.440 Test: register_initiator_group_twice_case ...passed 00:08:04.440 Test: add_initiator_name_success_case ...passed 00:08:04.440 Test: add_initiator_name_fail_case ...[2024-07-11 02:31:29.441974] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:04.440 passed 00:08:04.440 Test: delete_all_initiator_names_success_case ...passed 00:08:04.440 Test: add_netmask_success_case ...passed 00:08:04.440 Test: add_netmask_fail_case ...[2024-07-11 02:31:29.442403] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:04.440 passed 00:08:04.440 Test: delete_all_netmasks_success_case ...passed 00:08:04.440 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:04.440 Test: netmask_overwrite_all_to_any_case ...passed 00:08:04.440 Test: add_delete_initiator_names_case ...passed 00:08:04.440 Test: add_duplicated_initiator_names_case ...passed 00:08:04.440 Test: delete_nonexisting_initiator_names_case ...passed 00:08:04.440 Test: add_delete_netmasks_case ...passed 00:08:04.440 Test: add_duplicated_netmasks_case ...passed 00:08:04.440 Test: delete_nonexisting_netmasks_case ...passed 00:08:04.440 00:08:04.440 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.440 suites 1 1 n/a 0 0 00:08:04.440 tests 17 17 17 0 0 00:08:04.440 asserts 108 108 108 0 n/a 00:08:04.440 00:08:04.440 Elapsed time = 0.001 seconds 00:08:04.440 02:31:29 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:04.440 00:08:04.440 00:08:04.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.441 http://cunit.sourceforge.net/ 00:08:04.441 00:08:04.441 00:08:04.441 Suite: portal_grp_suite 00:08:04.441 Test: portal_create_ipv4_normal_case ...passed 00:08:04.441 Test: portal_create_ipv6_normal_case ...passed 00:08:04.441 Test: portal_create_ipv4_wildcard_case ...passed 00:08:04.441 Test: portal_create_ipv6_wildcard_case ...passed 00:08:04.441 Test: portal_create_twice_case ...[2024-07-11 02:31:29.473362] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:04.441 passed 00:08:04.441 Test: portal_grp_register_unregister_case ...passed 00:08:04.441 Test: portal_grp_register_twice_case ...passed 00:08:04.441 Test: portal_grp_add_delete_case ...passed 00:08:04.441 Test: portal_grp_add_delete_twice_case ...passed 00:08:04.441 00:08:04.441 Run Summary: 
Type Total Ran Passed Failed Inactive 00:08:04.441 suites 1 1 n/a 0 0 00:08:04.441 tests 9 9 9 0 0 00:08:04.441 asserts 44 44 44 0 n/a 00:08:04.441 00:08:04.441 Elapsed time = 0.003 seconds 00:08:04.441 00:08:04.441 real 0m0.219s 00:08:04.441 user 0m0.141s 00:08:04.441 sys 0m0.080s 00:08:04.441 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.441 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.441 ************************************ 00:08:04.441 END TEST unittest_iscsi 00:08:04.441 ************************************ 00:08:04.699 02:31:29 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:08:04.699 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.699 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.699 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.699 ************************************ 00:08:04.699 START TEST unittest_json 00:08:04.699 ************************************ 00:08:04.699 02:31:29 -- common/autotest_common.sh@1104 -- # unittest_json 00:08:04.699 02:31:29 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:04.699 00:08:04.699 00:08:04.699 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.699 http://cunit.sourceforge.net/ 00:08:04.699 00:08:04.699 00:08:04.699 Suite: json 00:08:04.699 Test: test_parse_literal ...passed 00:08:04.699 Test: test_parse_string_simple ...passed 00:08:04.699 Test: test_parse_string_control_chars ...passed 00:08:04.699 Test: test_parse_string_utf8 ...passed 00:08:04.699 Test: test_parse_string_escapes_twochar ...passed 00:08:04.699 Test: test_parse_string_escapes_unicode ...passed 00:08:04.699 Test: test_parse_number ...passed 00:08:04.699 Test: test_parse_array ...passed 00:08:04.699 Test: test_parse_object ...passed 00:08:04.699 Test: test_parse_nesting ...passed 00:08:04.699 Test: test_parse_comment ...passed 00:08:04.699 00:08:04.699 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.699 suites 1 1 n/a 0 0 00:08:04.699 tests 11 11 11 0 0 00:08:04.699 asserts 1516 1516 1516 0 n/a 00:08:04.699 00:08:04.699 Elapsed time = 0.001 seconds 00:08:04.699 02:31:29 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:04.699 00:08:04.699 00:08:04.699 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.699 http://cunit.sourceforge.net/ 00:08:04.699 00:08:04.699 00:08:04.699 Suite: json 00:08:04.699 Test: test_strequal ...passed 00:08:04.699 Test: test_num_to_uint16 ...passed 00:08:04.699 Test: test_num_to_int32 ...passed 00:08:04.699 Test: test_num_to_uint64 ...passed 00:08:04.699 Test: test_decode_object ...passed 00:08:04.699 Test: test_decode_array ...passed 00:08:04.699 Test: test_decode_bool ...passed 00:08:04.699 Test: test_decode_uint16 ...passed 00:08:04.699 Test: test_decode_int32 ...passed 00:08:04.699 Test: test_decode_uint32 ...passed 00:08:04.699 Test: test_decode_uint64 ...passed 00:08:04.699 Test: test_decode_string ...passed 00:08:04.699 Test: test_decode_uuid ...passed 00:08:04.699 Test: test_find ...passed 00:08:04.699 Test: test_find_array ...passed 00:08:04.699 Test: test_iterating ...passed 00:08:04.699 Test: test_free_object ...passed 00:08:04.699 00:08:04.699 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.699 suites 1 1 n/a 0 0 00:08:04.699 tests 17 17 17 0 0 00:08:04.699 asserts 236 236 236 0 n/a 00:08:04.699 00:08:04.699 Elapsed time = 0.001 seconds 00:08:04.699 
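The json_parse and json_util suites above, like every *_ut binary in this log, are stock CUnit programs: main() registers a suite, adds each "Test:" entry, and runs the basic interface in verbose mode, which is what emits the per-test "...passed" lines and the "Run Summary: Type Total Ran Passed Failed Inactive" table seen throughout. A minimal self-contained sketch of that harness (hypothetical example_suite/test_example names, not SPDK's actual test source) is:

/* Minimal CUnit harness sketch; build with e.g. gcc example_ut.c -lcunit */
#include <CUnit/Basic.h>

/* Trivial test body; real SPDK unit tests exercise library internals. */
static void test_example(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* The "Suite: ..." line in the output comes from the name given here. */
	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* Verbose mode prints the per-test results and the Run Summary table. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	/* Non-zero on failure, so a wrapper such as unittest.sh's run_test
	 * can key off the exit status. */
	return (int)num_failures;
}
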
02:31:29 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:04.699 00:08:04.699 00:08:04.699 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.699 http://cunit.sourceforge.net/ 00:08:04.700 00:08:04.700 00:08:04.700 Suite: json 00:08:04.700 Test: test_write_literal ...passed 00:08:04.700 Test: test_write_string_simple ...passed 00:08:04.700 Test: test_write_string_escapes ...passed 00:08:04.700 Test: test_write_string_utf16le ...passed 00:08:04.700 Test: test_write_number_int32 ...passed 00:08:04.700 Test: test_write_number_uint32 ...passed 00:08:04.700 Test: test_write_number_uint128 ...passed 00:08:04.700 Test: test_write_string_number_uint128 ...passed 00:08:04.700 Test: test_write_number_int64 ...passed 00:08:04.700 Test: test_write_number_uint64 ...passed 00:08:04.700 Test: test_write_number_double ...passed 00:08:04.700 Test: test_write_uuid ...passed 00:08:04.700 Test: test_write_array ...passed 00:08:04.700 Test: test_write_object ...passed 00:08:04.700 Test: test_write_nesting ...passed 00:08:04.700 Test: test_write_val ...passed 00:08:04.700 00:08:04.700 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.700 suites 1 1 n/a 0 0 00:08:04.700 tests 16 16 16 0 0 00:08:04.700 asserts 918 918 918 0 n/a 00:08:04.700 00:08:04.700 Elapsed time = 0.004 seconds 00:08:04.700 02:31:29 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:04.700 00:08:04.700 00:08:04.700 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.700 http://cunit.sourceforge.net/ 00:08:04.700 00:08:04.700 00:08:04.700 Suite: jsonrpc 00:08:04.700 Test: test_parse_request ...passed 00:08:04.700 Test: test_parse_request_streaming ...passed 00:08:04.700 00:08:04.700 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.700 suites 1 1 n/a 0 0 00:08:04.700 tests 2 2 2 0 0 00:08:04.700 asserts 289 289 289 0 n/a 00:08:04.700 00:08:04.700 Elapsed time = 0.004 seconds 00:08:04.700 00:08:04.700 real 0m0.133s 00:08:04.700 user 0m0.070s 00:08:04.700 sys 0m0.062s 00:08:04.700 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.700 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.700 ************************************ 00:08:04.700 END TEST unittest_json 00:08:04.700 ************************************ 00:08:04.700 02:31:29 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:08:04.700 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.700 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.700 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.700 ************************************ 00:08:04.700 START TEST unittest_rpc 00:08:04.700 ************************************ 00:08:04.700 02:31:29 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:08:04.700 02:31:29 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:04.700 00:08:04.700 00:08:04.700 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.700 http://cunit.sourceforge.net/ 00:08:04.700 00:08:04.700 00:08:04.700 Suite: rpc 00:08:04.700 Test: test_jsonrpc_handler ...passed 00:08:04.700 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:04.700 Test: test_rpc_get_methods ...[2024-07-11 02:31:29.737596] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:04.700 passed 00:08:04.700 Test: 
test_rpc_spdk_get_version ...passed 00:08:04.700 Test: test_spdk_rpc_listen_close ...passed 00:08:04.700 00:08:04.700 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.700 suites 1 1 n/a 0 0 00:08:04.700 tests 5 5 5 0 0 00:08:04.700 asserts 20 20 20 0 n/a 00:08:04.700 00:08:04.700 Elapsed time = 0.000 seconds 00:08:04.700 00:08:04.700 real 0m0.028s 00:08:04.700 user 0m0.013s 00:08:04.700 sys 0m0.015s 00:08:04.700 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.700 ************************************ 00:08:04.700 END TEST unittest_rpc 00:08:04.700 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.700 ************************************ 00:08:04.959 02:31:29 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:04.959 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.959 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.959 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.959 ************************************ 00:08:04.959 START TEST unittest_notify 00:08:04.959 ************************************ 00:08:04.959 02:31:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:04.959 00:08:04.959 00:08:04.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.959 http://cunit.sourceforge.net/ 00:08:04.959 00:08:04.959 00:08:04.959 Suite: app_suite 00:08:04.959 Test: notify ...passed 00:08:04.959 00:08:04.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.959 suites 1 1 n/a 0 0 00:08:04.959 tests 1 1 1 0 0 00:08:04.959 asserts 13 13 13 0 n/a 00:08:04.959 00:08:04.959 Elapsed time = 0.000 seconds 00:08:04.959 00:08:04.959 real 0m0.031s 00:08:04.959 user 0m0.019s 00:08:04.959 sys 0m0.012s 00:08:04.959 02:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.959 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.959 ************************************ 00:08:04.959 END TEST unittest_notify 00:08:04.959 ************************************ 00:08:04.959 02:31:29 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:08:04.959 02:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.959 02:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.959 02:31:29 -- common/autotest_common.sh@10 -- # set +x 00:08:04.959 ************************************ 00:08:04.959 START TEST unittest_nvme 00:08:04.959 ************************************ 00:08:04.959 02:31:29 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:08:04.959 02:31:29 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:04.959 00:08:04.959 00:08:04.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.959 http://cunit.sourceforge.net/ 00:08:04.959 00:08:04.959 00:08:04.959 Suite: nvme 00:08:04.959 Test: test_opc_data_transfer ...passed 00:08:04.959 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:04.959 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:04.959 Test: test_trid_parse_and_compare ...[2024-07-11 02:31:29.897962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:04.959 [2024-07-11 02:31:29.898371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:04.959 [2024-07-11 
02:31:29.898486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:04.959 [2024-07-11 02:31:29.898531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:04.959 [2024-07-11 02:31:29.898563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:08:04.959 [2024-07-11 02:31:29.898656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:04.959 passed 00:08:04.959 Test: test_trid_trtype_str ...passed 00:08:04.959 Test: test_trid_adrfam_str ...passed 00:08:04.959 Test: test_nvme_ctrlr_probe ...[2024-07-11 02:31:29.898953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:04.959 passed 00:08:04.959 Test: test_spdk_nvme_probe ...[2024-07-11 02:31:29.899121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:04.959 [2024-07-11 02:31:29.899161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:04.959 [2024-07-11 02:31:29.899266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:04.959 [2024-07-11 02:31:29.899310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:04.959 passed 00:08:04.959 Test: test_spdk_nvme_connect ...[2024-07-11 02:31:29.899414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:04.959 [2024-07-11 02:31:29.899867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:04.959 passed 00:08:04.959 Test: test_nvme_ctrlr_probe_internal ...[2024-07-11 02:31:29.899981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:04.959 passed 00:08:04.959 Test: test_nvme_init_controllers ...[2024-07-11 02:31:29.900112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:04.959 [2024-07-11 02:31:29.900143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:04.959 passed 00:08:04.959 Test: test_nvme_driver_init ...[2024-07-11 02:31:29.900208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:04.959 [2024-07-11 02:31:29.900298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:04.959 [2024-07-11 02:31:29.900327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:04.959 [2024-07-11 02:31:30.014988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:04.959 passed 00:08:04.959 Test: test_spdk_nvme_detach ...passed 00:08:04.959 Test: test_nvme_completion_poll_cb ...passed 00:08:04.959 Test: test_nvme_user_copy_cmd_complete ...[2024-07-11 02:31:30.015162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:04.959 passed 00:08:04.959 Test: 
test_nvme_allocate_request_null ...passed 00:08:04.959 Test: test_nvme_allocate_request ...passed 00:08:04.959 Test: test_nvme_free_request ...passed 00:08:04.959 Test: test_nvme_allocate_request_user_copy ...passed 00:08:04.959 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:04.959 Test: test_nvme_request_check_timeout ...passed 00:08:04.959 Test: test_nvme_wait_for_completion ...passed 00:08:04.959 Test: test_spdk_nvme_parse_func ...passed 00:08:04.959 Test: test_spdk_nvme_detach_async ...passed 00:08:04.959 Test: test_nvme_parse_addr ...[2024-07-11 02:31:30.015867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:04.959 passed 00:08:04.959 00:08:04.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.959 suites 1 1 n/a 0 0 00:08:04.959 tests 25 25 25 0 0 00:08:04.959 asserts 326 326 326 0 n/a 00:08:04.959 00:08:04.959 Elapsed time = 0.007 seconds 00:08:04.959 02:31:30 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:04.959 00:08:04.959 00:08:04.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.959 http://cunit.sourceforge.net/ 00:08:04.959 00:08:04.959 00:08:04.959 Suite: nvme_ctrlr 00:08:04.959 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-11 02:31:30.050533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-11 02:31:30.052217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-11 02:31:30.053576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-11 02:31:30.054907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-11 02:31:30.056229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 [2024-07-11 02:31:30.057473] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 02:31:30.058735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 02:31:30.059997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-11 02:31:30.062506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 [2024-07-11 02:31:30.064931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 02:31:30.066196] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:05.218 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-11 02:31:30.068682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 [2024-07-11 02:31:30.069860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 02:31:30.072265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:05.218 Test: test_nvme_ctrlr_init_delay ...[2024-07-11 02:31:30.074883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_alloc_io_qpair_rr_1 ...[2024-07-11 02:31:30.076231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 [2024-07-11 02:31:30.076376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:05.218 [2024-07-11 02:31:30.076551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:05.218 passed 00:08:05.218 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:05.218 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:05.218 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-11 02:31:30.076615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:05.218 [2024-07-11 02:31:30.076650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:05.218 [2024-07-11 02:31:30.076760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 passed 00:08:05.218 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-11 02:31:30.076935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.218 [2024-07-11 02:31:30.077063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:05.218 passed 00:08:05.218 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-11 02:31:30.077303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:05.218 [2024-07-11 02:31:30.077443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:05.218 [2024-07-11 02:31:30.077541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:08:05.218 [2024-07-11 02:31:30.077609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_fail ...passed 00:08:05.218 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:05.218 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:05.218 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...[2024-07-11 02:31:30.077695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:05.218 passed 00:08:05.218 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-11 02:31:30.077966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:05.477 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:05.477 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:05.477 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-11 02:31:30.399310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-11 02:31:30.406848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-11 02:31:30.408102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 [2024-07-11 02:31:30.408178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:05.477 passed 00:08:05.477 Test: test_alloc_io_qpair_fail ...[2024-07-11 02:31:30.409361] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:05.477 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-07-11 02:31:30.409474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_set_state ...passed 00:08:05.477 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-11 02:31:30.409612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:08:05.477 [2024-07-11 02:31:30.409670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-11 02:31:30.430910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-11 02:31:30.471742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_reset ...[2024-07-11 02:31:30.473271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_aer_callback ...[2024-07-11 02:31:30.473676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-11 02:31:30.475147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:05.477 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:05.477 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-11 02:31:30.476857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:05.477 Test: test_nvme_ctrlr_ana_resize ...[2024-07-11 02:31:30.478299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:05.477 Test: test_nvme_transport_ctrlr_ready ...[2024-07-11 02:31:30.479842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:05.477 [2024-07-11 02:31:30.479885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:05.477 passed 00:08:05.477 Test: test_nvme_ctrlr_disable ...[2024-07-11 02:31:30.479971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:05.477 passed 00:08:05.477 00:08:05.477 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.477 suites 1 1 n/a 0 0 00:08:05.477 tests 43 43 43 0 0 00:08:05.477 asserts 10418 10418 10418 0 n/a 00:08:05.477 00:08:05.477 Elapsed time = 0.389 seconds 00:08:05.477 02:31:30 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:05.477 00:08:05.477 00:08:05.477 CUnit - A unit testing framework for C - Version 2.1-3 
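Each *_ut binary invoked by unittest.sh, like the nvme_ctrlr runner that just reported 43 tests and 10418 asserts, is a per-source-file CUnit program. A minimal skeleton in the same shape (suite and test names are placeholders, not SPDK code), which produces exactly the Suite / Test / Run Summary layout seen throughout this log:

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
            /* Each "Test: ... passed" line in the log is one function like this. */
            CU_ASSERT_EQUAL(2 + 2, 4);
    }

    int
    main(void)
    {
            if (CU_initialize_registry() != CUE_SUCCESS)
                    return CU_get_error();

            CU_pSuite suite = CU_add_suite("nvme_example", NULL, NULL);
            if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            /* Verbose mode prints the per-test lines and the Run Summary table. */
            CU_basic_set_mode(CU_BRM_VERBOSE);
            CU_basic_run_tests();
            CU_cleanup_registry();
            return CU_get_error();
    }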
00:08:05.477 http://cunit.sourceforge.net/ 00:08:05.477 00:08:05.477 00:08:05.477 Suite: nvme_ctrlr_cmd 00:08:05.477 Test: test_get_log_pages ...passed 00:08:05.477 Test: test_set_feature_cmd ...passed 00:08:05.477 Test: test_set_feature_ns_cmd ...passed 00:08:05.477 Test: test_get_feature_cmd ...passed 00:08:05.477 Test: test_get_feature_ns_cmd ...passed 00:08:05.477 Test: test_abort_cmd ...passed 00:08:05.477 Test: test_set_host_id_cmds ...[2024-07-11 02:31:30.522913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:05.477 passed 00:08:05.477 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:05.477 Test: test_io_raw_cmd ...passed 00:08:05.477 Test: test_io_raw_cmd_with_md ...passed 00:08:05.477 Test: test_namespace_attach ...passed 00:08:05.477 Test: test_namespace_detach ...passed 00:08:05.477 Test: test_namespace_create ...passed 00:08:05.477 Test: test_namespace_delete ...passed 00:08:05.477 Test: test_doorbell_buffer_config ...passed 00:08:05.477 Test: test_format_nvme ...passed 00:08:05.477 Test: test_fw_commit ...passed 00:08:05.477 Test: test_fw_image_download ...passed 00:08:05.477 Test: test_sanitize ...passed 00:08:05.477 Test: test_directive ...passed 00:08:05.477 Test: test_nvme_request_add_abort ...passed 00:08:05.477 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:05.477 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:05.477 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:05.477 00:08:05.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.478 suites 1 1 n/a 0 0 00:08:05.478 tests 24 24 24 0 0 00:08:05.478 asserts 198 198 198 0 n/a 00:08:05.478 00:08:05.478 Elapsed time = 0.001 seconds 00:08:05.478 02:31:30 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:05.478 00:08:05.478 00:08:05.478 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.478 http://cunit.sourceforge.net/ 00:08:05.478 00:08:05.478 00:08:05.478 Suite: nvme_ctrlr_cmd 00:08:05.478 Test: test_geometry_cmd ...passed 00:08:05.478 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:05.478 00:08:05.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.478 suites 1 1 n/a 0 0 00:08:05.478 tests 2 2 2 0 0 00:08:05.478 asserts 7 7 7 0 n/a 00:08:05.478 00:08:05.478 Elapsed time = 0.000 seconds 00:08:05.478 02:31:30 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:05.737 00:08:05.737 00:08:05.737 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.737 http://cunit.sourceforge.net/ 00:08:05.737 00:08:05.737 00:08:05.737 Suite: nvme 00:08:05.737 Test: test_nvme_ns_construct ...passed 00:08:05.737 Test: test_nvme_ns_uuid ...passed 00:08:05.737 Test: test_nvme_ns_csi ...passed 00:08:05.737 Test: test_nvme_ns_data ...passed 00:08:05.737 Test: test_nvme_ns_set_identify_data ...passed 00:08:05.737 Test: test_spdk_nvme_ns_get_values ...passed 00:08:05.737 Test: test_spdk_nvme_ns_is_active ...passed 00:08:05.737 Test: spdk_nvme_ns_supports ...passed 00:08:05.737 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:05.737 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:05.737 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:05.737 Test: test_nvme_ns_find_id_desc ...passed 00:08:05.737 00:08:05.737 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.737 suites 1 1 n/a 0 0 00:08:05.737 tests 
12 12 12 0 0 00:08:05.737 asserts 83 83 83 0 n/a 00:08:05.737 00:08:05.737 Elapsed time = 0.001 seconds 00:08:05.737 02:31:30 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:05.737 00:08:05.737 00:08:05.737 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.737 http://cunit.sourceforge.net/ 00:08:05.737 00:08:05.737 00:08:05.737 Suite: nvme_ns_cmd 00:08:05.737 Test: split_test ...passed 00:08:05.737 Test: split_test2 ...passed 00:08:05.737 Test: split_test3 ...passed 00:08:05.737 Test: split_test4 ...passed 00:08:05.737 Test: test_nvme_ns_cmd_flush ...passed 00:08:05.737 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:05.737 Test: test_nvme_ns_cmd_copy ...passed 00:08:05.737 Test: test_io_flags ...[2024-07-11 02:31:30.613052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:05.737 passed 00:08:05.737 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:05.737 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:05.737 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:05.737 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:05.737 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:05.737 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:05.737 Test: test_cmd_child_request ...passed 00:08:05.737 Test: test_nvme_ns_cmd_readv ...passed 00:08:05.737 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_writev ...[2024-07-11 02:31:30.614516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:05.737 passed 00:08:05.737 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_comparev ...passed 00:08:05.737 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:05.737 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:05.737 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:05.737 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:05.737 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:08:05.737 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:08:05.737 Test: test_nvme_ns_cmd_verify ...passed[2024-07-11 02:31:30.616600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:05.737 [2024-07-11 02:31:30.616696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:05.737 00:08:05.737 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:05.737 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:05.737 00:08:05.737 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.737 suites 1 1 n/a 0 0 00:08:05.737 tests 32 32 32 0 0 00:08:05.737 asserts 550 550 550 0 n/a 00:08:05.737 00:08:05.737 Elapsed time = 0.005 seconds 00:08:05.737 02:31:30 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:05.737 00:08:05.737 00:08:05.737 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.737 http://cunit.sourceforge.net/ 00:08:05.737 00:08:05.737 00:08:05.737 Suite: nvme_ns_cmd 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
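The "Invalid io_flags 0xfffc" and "Invalid io_flags 0xffff000f" failures above come from test_io_flags handing reserved flag bits to the I/O path, which rejects them before anything reaches a controller. A hedged sketch of how io_flags reaches that check through the real public entry point spdk_nvme_ns_cmd_read(); namespace, qpair, and buffer setup are elided:

    #include "spdk/nvme.h"

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* Completion callback; unused in this sketch. */
            (void)arg;
            (void)cpl;
    }

    static int
    read_one_block(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
            /* Passing reserved bits in the final io_flags argument (e.g. 0xfffc)
             * makes the request fail up front, which is what test_io_flags asserts. */
            return spdk_nvme_ns_cmd_read(ns, qpair, buf,
                                         0 /* starting LBA */, 1 /* LBA count */,
                                         io_done, NULL, 0 /* io_flags: none set */);
    }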
00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:05.737 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:05.737 00:08:05.737 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.737 suites 1 1 n/a 0 0 00:08:05.737 tests 12 12 12 0 0 00:08:05.737 asserts 123 123 123 0 n/a 00:08:05.737 00:08:05.737 Elapsed time = 0.001 seconds 00:08:05.737 02:31:30 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:05.737 00:08:05.737 00:08:05.737 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.738 http://cunit.sourceforge.net/ 00:08:05.738 00:08:05.738 00:08:05.738 Suite: nvme_qpair 00:08:05.738 Test: test3 ...passed 00:08:05.738 Test: test_ctrlr_failed ...passed 00:08:05.738 Test: struct_packing ...passed 00:08:05.738 Test: test_nvme_qpair_process_completions ...[2024-07-11 02:31:30.682372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:05.738 [2024-07-11 02:31:30.682657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:05.738 [2024-07-11 02:31:30.682725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:05.738 [2024-07-11 02:31:30.682796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:05.738 passed 00:08:05.738 Test: test_nvme_completion_is_retry ...passed 00:08:05.738 Test: test_get_status_string ...passed 00:08:05.738 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:05.738 Test: test_nvme_qpair_submit_request ...passed 00:08:05.738 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:05.738 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:05.738 Test: test_nvme_qpair_init_deinit ...passed 00:08:05.738 Test: test_nvme_get_sgl_print_info ...passed 00:08:05.738 00:08:05.738 [2024-07-11 02:31:30.683194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:05.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.738 suites 1 1 n/a 0 0 00:08:05.738 tests 12 12 12 0 0 00:08:05.738 asserts 154 154 154 0 n/a 00:08:05.738 00:08:05.738 Elapsed time = 0.001 seconds 00:08:05.738 02:31:30 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:05.738 00:08:05.738 00:08:05.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.738 http://cunit.sourceforge.net/ 00:08:05.738 00:08:05.738 00:08:05.738 Suite: nvme_pcie 00:08:05.738 Test: test_prp_list_append 
...[2024-07-11 02:31:30.714644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:05.738 [2024-07-11 02:31:30.715054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:05.738 [2024-07-11 02:31:30.715131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:05.738 [2024-07-11 02:31:30.715503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:05.738 [2024-07-11 02:31:30.715643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:05.738 passed 00:08:05.738 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:05.738 Test: test_shadow_doorbell_update ...passed 00:08:05.738 Test: test_build_contig_hw_sgl_request ...passed 00:08:05.738 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:05.738 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:05.738 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:05.738 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-11 02:31:30.715906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:05.738 passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-11 02:31:30.716042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
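The test_prp_list_append failures above assert the two PRP alignment rules from the NVMe spec: the first PRP entry may carry a page offset but must be dword aligned (hence "virt_addr 0x100001 not dword aligned"), while every subsequent entry must be page aligned (hence "PRP 2 not page aligned (0x900800)"). A small sketch of both checks, assuming a 4 KiB memory page:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* PRP1: low 2 bits must be zero (dword aligned); offset within the page is fine. */
    static bool prp1_aligned(uint64_t addr) { return (addr & 3u) == 0; }

    /* PRP2 and list entries: must start exactly on a page boundary. */
    static bool prpn_aligned(uint64_t addr) { return (addr & (PAGE_SIZE - 1u)) == 0; }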
00:08:05.738 [2024-07-11 02:31:30.716145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:05.738 [2024-07-11 02:31:30.716200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:05.738 passed 00:08:05.738 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed[2024-07-11 02:31:30.716250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:05.738 00:08:05.738 00:08:05.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.738 suites 1 1 n/a 0 0 00:08:05.738 tests 14 14 14 0 0 00:08:05.738 asserts 235 235 235 0 n/a 00:08:05.738 00:08:05.738 Elapsed time = 0.002 seconds 00:08:05.738 02:31:30 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:05.738 00:08:05.738 00:08:05.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.738 http://cunit.sourceforge.net/ 00:08:05.738 00:08:05.738 00:08:05.738 Suite: nvme_ns_cmd 00:08:05.738 Test: nvme_poll_group_create_test ...passed 00:08:05.738 Test: nvme_poll_group_add_remove_test ...passed 00:08:05.738 Test: nvme_poll_group_process_completions ...passed 00:08:05.738 Test: nvme_poll_group_destroy_test ...passed 00:08:05.738 Test: nvme_poll_group_get_free_stats ...passed 00:08:05.738 00:08:05.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.738 suites 1 1 n/a 0 0 00:08:05.738 tests 5 5 5 0 0 00:08:05.738 asserts 75 75 75 0 n/a 00:08:05.738 00:08:05.738 Elapsed time = 0.000 seconds 00:08:05.738 02:31:30 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:05.738 00:08:05.738 00:08:05.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.738 http://cunit.sourceforge.net/ 00:08:05.738 00:08:05.738 00:08:05.738 Suite: nvme_quirks 00:08:05.738 Test: test_nvme_quirks_striping ...passed 00:08:05.738 00:08:05.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.738 suites 1 1 n/a 0 0 00:08:05.738 tests 1 1 1 0 0 00:08:05.738 asserts 5 5 5 0 n/a 00:08:05.738 00:08:05.738 Elapsed time = 0.000 seconds 00:08:05.738 02:31:30 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:05.738 00:08:05.738 00:08:05.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.738 http://cunit.sourceforge.net/ 00:08:05.738 00:08:05.738 00:08:05.738 Suite: nvme_tcp 00:08:05.738 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:05.738 Test: test_nvme_tcp_build_iovs ...passed 00:08:05.738 Test: test_nvme_tcp_build_sgl_request ...[2024-07-11 02:31:30.804597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffdba30a3b0, and the iovcnt=16, remaining_size=28672 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:05.738 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:05.738 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:05.738 Test: test_nvme_tcp_req_get ...passed 00:08:05.738 Test: test_nvme_tcp_req_init ...passed 00:08:05.738 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:05.738 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:05.738 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:08:05.738 Test: test_nvme_tcp_alloc_reqs ...[2024-07-11 02:31:30.805259] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30c0d0 is same with the state(6) to be set 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-11 02:31:30.805576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b260 is same with the state(5) to be set 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-11 02:31:30.805664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffdba30bd90 00:08:05.738 [2024-07-11 02:31:30.805718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:05.738 [2024-07-11 02:31:30.805797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.805856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:05.738 [2024-07-11 02:31:30.805929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.805969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:05.738 [2024-07-11 02:31:30.805995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b720 is same with the state(5) to be set 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-11 02:31:30.806349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:05.738 [2024-07-11 02:31:30.806394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:05.738 [2024-07-11 02:31:30.806608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:05.738 Test: 
test_nvme_tcp_c2h_payload_handle ...[2024-07-11 02:31:30.806720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdba30b8d0): PDU Sequence Error 00:08:05.738 passed 00:08:05.738 Test: test_nvme_tcp_icresp_handle ...[2024-07-11 02:31:30.806824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:05.738 [2024-07-11 02:31:30.806859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:05.738 [2024-07-11 02:31:30.806892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b270 is same with the state(5) to be set 00:08:05.738 [2024-07-11 02:31:30.806925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:05.738 [2024-07-11 02:31:30.806958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b270 is same with the state(5) to be set 00:08:05.739 [2024-07-11 02:31:30.807005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba30b270 is same with the state(0) to be set 00:08:05.739 passed 00:08:05.739 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:05.739 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-11 02:31:30.807071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdba30bd90): PDU Sequence Error 00:08:05.739 [2024-07-11 02:31:30.807151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffdba30a550 00:08:05.739 passed 00:08:05.739 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:05.739 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-11 02:31:30.807290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffdba309bd0, errno=0, rc=0 00:08:05.739 [2024-07-11 02:31:30.807335] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba309bd0 is same with the state(5) to be set 00:08:05.739 [2024-07-11 02:31:30.807404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdba309bd0 is same with the state(5) to be set 00:08:05.739 [2024-07-11 02:31:30.807456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdba309bd0 (0): Success 00:08:05.739 passed 00:08:05.739 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-11 02:31:30.807495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdba309bd0 (0): Success 00:08:05.997 [2024-07-11 02:31:30.920917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:05.997 [2024-07-11 02:31:30.921026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
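The "Failed to create qpair with size 0/1. Minimum queue size is 2." rejections above reflect the same NVMe floor as the admin queue: a queue needs at least 2 entries, since one slot always stays unused to distinguish full from empty. A hedged sketch of requesting an explicit I/O queue size through the real public API; error handling and connection setup are elided:

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_small_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_io_qpair_opts opts;

            /* Start from the controller's defaults, then override the size. */
            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            opts.io_queue_size = 2; /* anything below 2 is rejected, as the log shows */

            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }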
00:08:05.997 passed 00:08:05.997 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:05.997 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:08:05.997 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-11 02:31:30.921214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:05.997 [2024-07-11 02:31:30.921245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:05.997 [2024-07-11 02:31:30.921456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:05.997 [2024-07-11 02:31:30.921490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:05.997 [2024-07-11 02:31:30.921581] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:05.997 [2024-07-11 02:31:30.921656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:05.997 passed 00:08:05.997 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-11 02:31:30.921758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:08:05.997 [2024-07-11 02:31:30.921820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:05.997 [2024-07-11 02:31:30.921943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:08:05.997 [2024-07-11 02:31:30.921978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:05.997 passed 00:08:05.997 00:08:05.997 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.997 suites 1 1 n/a 0 0 00:08:05.997 tests 27 27 27 0 0 00:08:05.997 asserts 624 624 624 0 n/a 00:08:05.997 00:08:05.997 Elapsed time = 0.118 seconds 00:08:05.997 02:31:30 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:05.997 00:08:05.997 00:08:05.997 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.997 http://cunit.sourceforge.net/ 00:08:05.997 00:08:05.997 00:08:05.997 Suite: nvme_transport 00:08:05.997 Test: test_nvme_get_transport ...passed 00:08:05.997 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:05.997 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:05.997 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:05.997 Test: test_ctrlr_get_memory_domains ...passed 00:08:05.997 00:08:05.997 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.997 suites 1 1 n/a 0 0 00:08:05.997 tests 5 5 5 0 0 00:08:05.997 asserts 28 28 28 0 n/a 00:08:05.997 00:08:05.998 Elapsed time = 0.000 seconds 00:08:05.998 02:31:30 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:05.998 00:08:05.998 00:08:05.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.998 http://cunit.sourceforge.net/ 00:08:05.998 00:08:05.998 00:08:05.998 Suite: nvme_io_msg 00:08:05.998 Test: test_nvme_io_msg_send ...passed 00:08:05.998 Test: 
test_nvme_io_msg_process ...passed 00:08:05.998 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:05.998 00:08:05.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.998 suites 1 1 n/a 0 0 00:08:05.998 tests 3 3 3 0 0 00:08:05.998 asserts 56 56 56 0 n/a 00:08:05.998 00:08:05.998 Elapsed time = 0.000 seconds 00:08:05.998 02:31:31 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:05.998 00:08:05.998 00:08:05.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.998 http://cunit.sourceforge.net/ 00:08:05.998 00:08:05.998 00:08:05.998 Suite: nvme_pcie_common 00:08:05.998 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-11 02:31:31.025547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:05.998 passed 00:08:05.998 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:05.998 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:05.998 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-11 02:31:31.026261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:05.998 [2024-07-11 02:31:31.026363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:05.998 [2024-07-11 02:31:31.026394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:05.998 passed 00:08:05.998 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:05.998 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-11 02:31:31.026751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:05.998 passed 00:08:05.998 00:08:05.998 [2024-07-11 02:31:31.026793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:05.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.998 suites 1 1 n/a 0 0 00:08:05.998 tests 6 6 6 0 0 00:08:05.998 asserts 148 148 148 0 n/a 00:08:05.998 00:08:05.998 Elapsed time = 0.001 seconds 00:08:05.998 02:31:31 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:05.998 00:08:05.998 00:08:05.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.998 http://cunit.sourceforge.net/ 00:08:05.998 00:08:05.998 00:08:05.998 Suite: nvme_fabric 00:08:05.998 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:05.998 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:05.998 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:05.998 Test: test_nvme_fabric_discover_probe ...passed 00:08:05.998 Test: test_nvme_fabric_qpair_connect ...[2024-07-11 02:31:31.056392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:05.998 passed 00:08:05.998 00:08:05.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.998 suites 1 1 n/a 0 0 00:08:05.998 tests 5 5 5 0 0 00:08:05.998 asserts 60 60 60 0 n/a 00:08:05.998 00:08:05.998 Elapsed time = 0.001 seconds 00:08:05.998 02:31:31 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:05.998 00:08:05.998 00:08:05.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.998 http://cunit.sourceforge.net/ 00:08:05.998 00:08:05.998 00:08:05.998 Suite: nvme_opal 00:08:05.998 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:05.998 Test: test_opal_add_short_atom_header ...[2024-07-11 02:31:31.085300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:05.998 passed 00:08:05.998 00:08:05.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.998 suites 1 1 n/a 0 0 00:08:05.998 tests 2 2 2 0 0 00:08:05.998 asserts 22 22 22 0 n/a 00:08:05.998 00:08:05.998 Elapsed time = 0.001 seconds 00:08:06.256 00:08:06.256 real 0m1.217s 00:08:06.256 user 0m0.664s 00:08:06.256 sys 0m0.407s 00:08:06.256 02:31:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.256 02:31:31 -- common/autotest_common.sh@10 -- # set +x 00:08:06.256 ************************************ 00:08:06.256 END TEST unittest_nvme 00:08:06.256 ************************************ 00:08:06.256 02:31:31 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:06.256 02:31:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.256 02:31:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.256 02:31:31 -- common/autotest_common.sh@10 -- # set +x 00:08:06.256 ************************************ 00:08:06.256 START TEST unittest_log 00:08:06.256 ************************************ 00:08:06.256 02:31:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:06.256 00:08:06.256 00:08:06.256 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.256 http://cunit.sourceforge.net/ 00:08:06.256 00:08:06.256 00:08:06.256 Suite: log 00:08:06.257 Test: log_test ...[2024-07-11 02:31:31.161956] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:08:06.257 [2024-07-11 02:31:31.162170] log_ut.c: 55:log_test: *DEBUG*: log test 00:08:06.257 log dump test: 00:08:06.257 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:06.257 spdk dump test: 00:08:06.257 passed 00:08:06.257 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:06.257 spdk dump test: 00:08:06.257 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:06.257 00000010 65 20 63 68 61 72 73 e chars 00:08:07.192 passed 00:08:07.192 00:08:07.192 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.192 suites 1 1 n/a 0 0 00:08:07.192 tests 2 2 2 0 0 00:08:07.192 asserts 73 73 73 0 n/a 00:08:07.192 00:08:07.192 Elapsed time = 0.001 seconds 00:08:07.192 00:08:07.192 real 0m1.032s 00:08:07.192 user 0m0.020s 00:08:07.192 sys 0m0.012s 00:08:07.192 02:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.192 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.192 ************************************ 00:08:07.192 END TEST unittest_log 00:08:07.192 ************************************ 00:08:07.192 02:31:32 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:07.192 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.192 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.192 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.192 
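The log_ut output above ("00000000 6c 6f 67 20 64 75 6d 70 log dump", "00000010 65 20 63 68 61 72 73 e chars") is SPDK's dump format: a hex offset, the row's bytes in hex, then the same bytes echoed as ASCII. A self-contained sketch that reproduces that layout; it is a reimplementation for illustration, not SPDK's own dump routine:

    #include <ctype.h>
    #include <stdint.h>
    #include <stdio.h>

    static void
    hexdump(const void *data, size_t len)
    {
            const uint8_t *p = data;

            for (size_t off = 0; off < len; off += 16) {
                    size_t row = (len - off < 16) ? len - off : 16;

                    printf("%08zx ", off);             /* offset column */
                    for (size_t i = 0; i < row; i++)   /* hex column */
                            printf("%02x ", p[off + i]);
                    for (size_t i = 0; i < row; i++)   /* ASCII column */
                            putchar(isprint(p[off + i]) ? p[off + i] : '.');
                    putchar('\n');
            }
    }

Called on the 23-byte string "spdk dump test: 16 more chars"-style input used by the test, this yields two rows, the second starting at offset 00000010, matching the log above.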
************************************ 00:08:07.192 START TEST unittest_lvol 00:08:07.192 ************************************ 00:08:07.192 02:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:07.192 00:08:07.192 00:08:07.192 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.192 http://cunit.sourceforge.net/ 00:08:07.192 00:08:07.192 00:08:07.192 Suite: lvol 00:08:07.192 Test: lvs_init_unload_success ...[2024-07-11 02:31:32.253023] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:07.192 passed 00:08:07.192 Test: lvs_init_destroy_success ...[2024-07-11 02:31:32.253694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:07.192 passed 00:08:07.192 Test: lvs_init_opts_success ...passed 00:08:07.192 Test: lvs_unload_lvs_is_null_fail ...passed 00:08:07.192 Test: lvs_names ...[2024-07-11 02:31:32.253944] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:07.192 [2024-07-11 02:31:32.254002] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:07.192 [2024-07-11 02:31:32.254043] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:07.192 [2024-07-11 02:31:32.254234] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:07.192 passed 00:08:07.192 Test: lvol_create_destroy_success ...passed 00:08:07.192 Test: lvol_create_fail ...[2024-07-11 02:31:32.254899] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:07.192 [2024-07-11 02:31:32.255055] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:07.192 passed 00:08:07.192 Test: lvol_destroy_fail ...[2024-07-11 02:31:32.255438] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:07.192 passed 00:08:07.192 Test: lvol_close ...[2024-07-11 02:31:32.255692] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:07.192 [2024-07-11 02:31:32.255755] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:07.192 passed 00:08:07.192 Test: lvol_resize ...passed 00:08:07.192 Test: lvol_set_read_only ...passed 00:08:07.192 Test: test_lvs_load ...[2024-07-11 02:31:32.256707] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:07.192 [2024-07-11 02:31:32.256760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:07.192 passed 00:08:07.192 Test: lvols_load ...[2024-07-11 02:31:32.257034] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:07.192 [2024-07-11 02:31:32.257195] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:07.192 passed 00:08:07.192 Test: lvol_open ...passed 00:08:07.192 Test: lvol_snapshot ...passed 00:08:07.192 Test: lvol_snapshot_fail ...[2024-07-11 02:31:32.258121] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:07.192 passed 00:08:07.192 
Test: lvol_clone ...passed 00:08:07.192 Test: lvol_clone_fail ...[2024-07-11 02:31:32.258857] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:07.192 passed 00:08:07.192 Test: lvol_iter_clones ...passed 00:08:07.192 Test: lvol_refcnt ...[2024-07-11 02:31:32.259527] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 64e6a68f-7972-4006-b5d7-174cb1b20f74 because it is still open 00:08:07.192 passed 00:08:07.192 Test: lvol_names ...[2024-07-11 02:31:32.259778] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:07.192 [2024-07-11 02:31:32.259888] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:07.192 [2024-07-11 02:31:32.260190] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:07.192 passed 00:08:07.192 Test: lvol_create_thin_provisioned ...passed 00:08:07.192 Test: lvol_rename ...[2024-07-11 02:31:32.260750] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:07.192 [2024-07-11 02:31:32.260873] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:07.192 passed 00:08:07.192 Test: lvs_rename ...[2024-07-11 02:31:32.261169] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:07.192 passed 00:08:07.192 Test: lvol_inflate ...[2024-07-11 02:31:32.261427] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:07.192 passed 00:08:07.192 Test: lvol_decouple_parent ...[2024-07-11 02:31:32.261763] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:07.192 passed 00:08:07.192 Test: lvol_get_xattr ...passed 00:08:07.192 Test: lvol_esnap_reload ...passed 00:08:07.192 Test: lvol_esnap_create_bad_args ...[2024-07-11 02:31:32.262374] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:07.192 [2024-07-11 02:31:32.262419] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
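Several lvol failures above read "Name has no null terminator": lvol and lvol-store names live in fixed-size on-disk fields, so verification must confirm the terminator falls inside the field rather than trusting strlen(). A minimal sketch of that bounded check (the helper name is illustrative, not SPDK's):

    #include <stdbool.h>
    #include <string.h>

    /* True only if the name terminates within its fixed-size field. */
    static bool
    name_is_terminated(const char *name, size_t field_size)
    {
            return memchr(name, '\0', field_size) != NULL;
    }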
00:08:07.192 [2024-07-11 02:31:32.262476] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:07.192 [2024-07-11 02:31:32.262621] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:07.192 [2024-07-11 02:31:32.262776] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:07.192 passed 00:08:07.192 Test: lvol_esnap_create_delete ...passed 00:08:07.192 Test: lvol_esnap_load_esnaps ...[2024-07-11 02:31:32.263148] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:07.192 passed 00:08:07.193 Test: lvol_esnap_missing ...[2024-07-11 02:31:32.263324] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:07.193 [2024-07-11 02:31:32.263386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:07.193 passed 00:08:07.193 Test: lvol_esnap_hotplug ... 00:08:07.193 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:07.193 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:07.193 [2024-07-11 02:31:32.264242] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c79df0fa-7872-4cae-ab92-24ea8e057d37: failed to create esnap bs_dev: error -12 00:08:07.193 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:07.193 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:07.193 [2024-07-11 02:31:32.264522] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 89cdd7df-351a-49b5-9622-a021d835dee1: failed to create esnap bs_dev: error -12 00:08:07.193 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:07.193 [2024-07-11 02:31:32.264691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol cdf8baa6-5e31-4815-afb2-0b1c5d4239de: failed to create esnap bs_dev: error -12 00:08:07.193 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:07.193 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:07.193 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:07.193 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:07.193 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:07.193 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:07.193 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:07.193 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:07.193 passed 00:08:07.193 Test: lvol_get_by ...passed 00:08:07.193 00:08:07.193 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.193 suites 1 1 n/a 0 0 00:08:07.193 tests 34 34 34 0 0 00:08:07.193 asserts 1439 1439 1439 0 n/a 00:08:07.193 00:08:07.193 Elapsed time = 0.014 seconds 00:08:07.452 00:08:07.452 real 0m0.051s 00:08:07.452 user 0m0.036s 00:08:07.452 sys 0m0.016s 00:08:07.452 02:31:32 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.452 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.452 ************************************ 00:08:07.452 END TEST unittest_lvol 00:08:07.452 ************************************ 00:08:07.452 02:31:32 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:07.452 02:31:32 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:07.452 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.452 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.452 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.452 ************************************ 00:08:07.452 START TEST unittest_nvme_rdma 00:08:07.452 ************************************ 00:08:07.452 02:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:07.452 00:08:07.452 00:08:07.452 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.452 http://cunit.sourceforge.net/ 00:08:07.452 00:08:07.452 00:08:07.452 Suite: nvme_rdma 00:08:07.452 Test: test_nvme_rdma_build_sgl_request ...[2024-07-11 02:31:32.353584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:07.452 [2024-07-11 02:31:32.353891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:07.452 passed 00:08:07.452 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:07.452 Test: test_nvme_rdma_build_contig_request ...[2024-07-11 02:31:32.353970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:07.452 [2024-07-11 02:31:32.354031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:07.452 passed 00:08:07.453 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:07.453 Test: test_nvme_rdma_create_reqs ...[2024-07-11 02:31:32.354138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_create_rsps ...[2024-07-11 02:31:32.354431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-11 02:31:32.354596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_poller_create ...[2024-07-11 02:31:32.354651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
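The rdma build_sgl_request failures above ("SGL length 16777216 exceeds max keyed SGL block size 16777215") follow directly from the wire format: a keyed SGL data block descriptor carries a 24-bit length field, so 2^24 - 1 bytes is the hard ceiling and 2^24 is the first rejected value. A one-line sketch of that bound:

    #include <stdbool.h>
    #include <stdint.h>

    /* 24-bit length field in the keyed SGL data block descriptor. */
    #define KEYED_SGL_MAX_LEN ((1u << 24) - 1) /* 16777215 */

    static bool sgl_len_ok(uint64_t len) { return len <= KEYED_SGL_MAX_LEN; }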
00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:07.453 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-11 02:31:32.354797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:07.453 Test: test_nvme_rdma_req_init ...passed 00:08:07.453 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:07.453 Test: test_nvme_rdma_qpair_init ...[2024-07-11 02:31:32.355063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:07.453 [2024-07-11 02:31:32.355098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:07.453 Test: test_nvme_rdma_memory_domain ...[2024-07-11 02:31:32.355268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:07.453 passed 00:08:07.453 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:07.453 Test: test_rdma_get_memory_translation ...[2024-07-11 02:31:32.355356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:07.453 passed 00:08:07.453 Test: test_get_rdma_qpair_from_wc ...passed 00:08:07.453 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:07.453 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-11 02:31:32.355403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:07.453 [2024-07-11 02:31:32.355492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:07.453 passed 00:08:07.453 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-11 02:31:32.355526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:07.453 [2024-07-11 02:31:32.355617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:07.453 [2024-07-11 02:31:32.355653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:07.453 [2024-07-11 02:31:32.355678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe9aba8020 on poll group 0x60b0000001a0 00:08:07.453 [2024-07-11 02:31:32.355726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:07.453 [2024-07-11 02:31:32.355760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:07.453 [2024-07-11 02:31:32.355782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe9aba8020 on poll group 0x60b0000001a0 00:08:07.453 [2024-07-11 02:31:32.355848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:07.453 passed 00:08:07.453 00:08:07.453 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.453 suites 1 1 n/a 0 0 00:08:07.453 tests 22 22 22 0 0 00:08:07.453 asserts 412 412 412 0 n/a 00:08:07.453 00:08:07.453 Elapsed time = 0.002 seconds 00:08:07.453 00:08:07.453 real 0m0.032s 00:08:07.453 user 0m0.013s 00:08:07.453 sys 0m0.020s 00:08:07.453 02:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.453 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.453 ************************************ 00:08:07.453 END TEST unittest_nvme_rdma 00:08:07.453 ************************************ 00:08:07.453 02:31:32 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:07.453 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.453 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.453 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.453 ************************************ 00:08:07.453 START TEST unittest_nvmf_transport 00:08:07.453 ************************************ 00:08:07.453 02:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:07.453 00:08:07.453 00:08:07.453 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.453 http://cunit.sourceforge.net/ 00:08:07.453 00:08:07.453 00:08:07.453 Suite: nvmf 00:08:07.453 Test: test_spdk_nvmf_transport_create ...[2024-07-11 02:31:32.440618] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:07.453 [2024-07-11 02:31:32.440936] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:07.453 [2024-07-11 02:31:32.441005] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:07.453 [2024-07-11 02:31:32.441105] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:07.453 passed 00:08:07.453 Test: test_nvmf_transport_poll_group_create ...passed 00:08:07.453 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-11 02:31:32.441338] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
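The transport_create rejections above reduce to simple option validation: io_unit_size must be non-zero (and fit the iobuf pool), and max_io_size must be a power of two no smaller than 8 KiB, which is why 4096 is refused. A sketch of those two checks, using the standard bit trick for the power-of-two test; it restates the logged rules rather than quoting SPDK's implementation:

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    transport_sizes_ok(uint32_t io_unit_size, uint32_t max_io_size)
    {
            /* x & (x - 1) clears the lowest set bit; zero means x was a power of 2. */
            bool pow2 = max_io_size != 0 && (max_io_size & (max_io_size - 1)) == 0;

            return io_unit_size != 0 && pow2 && max_io_size >= 8 * 1024;
    }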
00:08:07.453 passed 00:08:07.453 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-11 02:31:32.441417] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:07.453 [2024-07-11 02:31:32.441441] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:07.453 passed 00:08:07.453 00:08:07.453 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.453 suites 1 1 n/a 0 0 00:08:07.453 tests 4 4 4 0 0 00:08:07.453 asserts 49 49 49 0 n/a 00:08:07.453 00:08:07.453 Elapsed time = 0.001 seconds 00:08:07.453 00:08:07.453 real 0m0.037s 00:08:07.453 user 0m0.016s 00:08:07.453 sys 0m0.021s 00:08:07.453 02:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.453 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.453 ************************************ 00:08:07.453 END TEST unittest_nvmf_transport 00:08:07.453 ************************************ 00:08:07.453 02:31:32 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:07.453 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.453 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.453 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.453 ************************************ 00:08:07.453 START TEST unittest_rdma 00:08:07.453 ************************************ 00:08:07.453 02:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:07.453 00:08:07.453 00:08:07.453 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.453 http://cunit.sourceforge.net/ 00:08:07.453 00:08:07.453 00:08:07.453 Suite: rdma_common 00:08:07.453 Test: test_spdk_rdma_pd ...[2024-07-11 02:31:32.521158] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:07.453 passed 00:08:07.453 00:08:07.453 [2024-07-11 02:31:32.521483] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:07.453 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.453 suites 1 1 n/a 0 0 00:08:07.453 tests 1 1 1 0 0 00:08:07.453 asserts 31 31 31 0 n/a 00:08:07.453 00:08:07.453 Elapsed time = 0.001 seconds 00:08:07.453 00:08:07.453 real 0m0.026s 00:08:07.453 user 0m0.026s 00:08:07.453 sys 0m0.000s 00:08:07.453 02:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.453 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.453 ************************************ 00:08:07.453 END TEST unittest_rdma 00:08:07.453 ************************************ 00:08:07.713 02:31:32 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:07.713 02:31:32 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:07.713 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.713 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.713 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.713 ************************************ 00:08:07.713 START TEST unittest_nvme_cuse 00:08:07.713 ************************************ 00:08:07.713 02:31:32 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:07.713 00:08:07.713 00:08:07.713 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.713 http://cunit.sourceforge.net/ 00:08:07.713 00:08:07.713 00:08:07.713 Suite: nvme_cuse 00:08:07.713 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:07.713 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:07.713 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:07.713 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:07.713 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:07.713 Test: test_cuse_nvme_submit_io ...[2024-07-11 02:31:32.605656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:07.713 passed 00:08:07.713 Test: test_cuse_nvme_reset ...[2024-07-11 02:31:32.606262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:07.713 passed 00:08:07.713 Test: test_nvme_cuse_stop ...passed 00:08:07.713 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:07.713 00:08:07.713 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.713 suites 1 1 n/a 0 0 00:08:07.713 tests 9 9 9 0 0 00:08:07.713 asserts 121 121 121 0 n/a 00:08:07.713 00:08:07.713 Elapsed time = 0.002 seconds 00:08:07.713 00:08:07.713 real 0m0.035s 00:08:07.713 user 0m0.021s 00:08:07.713 sys 0m0.012s 00:08:07.713 02:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.713 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.713 ************************************ 00:08:07.713 END TEST unittest_nvme_cuse 00:08:07.713 ************************************ 00:08:07.713 02:31:32 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:08:07.713 02:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.713 02:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.713 02:31:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.713 ************************************ 00:08:07.713 START TEST unittest_nvmf 00:08:07.713 ************************************ 00:08:07.713 02:31:32 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:08:07.713 02:31:32 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:07.713 00:08:07.713 00:08:07.713 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.713 http://cunit.sourceforge.net/ 00:08:07.713 00:08:07.713 00:08:07.713 Suite: nvmf 00:08:07.713 Test: test_get_log_page ...[2024-07-11 02:31:32.689065] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:07.713 passed 00:08:07.714 Test: test_process_fabrics_cmd ...passed 00:08:07.714 Test: test_connect ...[2024-07-11 02:31:32.689898] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:07.714 [2024-07-11 02:31:32.689997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:07.714 [2024-07-11 02:31:32.690051] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:07.714 [2024-07-11 02:31:32.690091] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
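Every *_ut binary in this run prints the same "CUnit - A unit testing framework for C" banner and Run Summary table. For orientation, a minimal harness producing that output shape looks roughly like the following, assuming the standard CUnit Basic interface; the suite and test names are placeholders:

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();    /* prints the Run Summary seen throughout this log */
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures > 0 ? 1 : 0;
    }

The suites/tests/asserts counters in each Run Summary above map directly onto CU_add_suite, CU_add_test, and the CU_ASSERT family.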
00:08:07.714 [2024-07-11 02:31:32.690162] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:07.714 [2024-07-11 02:31:32.690190] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:07.714 [2024-07-11 02:31:32.690275] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:07.714 [2024-07-11 02:31:32.690315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:07.714 [2024-07-11 02:31:32.690435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:07.714 [2024-07-11 02:31:32.690517] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:07.714 [2024-07-11 02:31:32.690790] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:07.714 [2024-07-11 02:31:32.690889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:07.714 [2024-07-11 02:31:32.690979] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:07.714 [2024-07-11 02:31:32.691047] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:07.714 [2024-07-11 02:31:32.691143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:07.714 [2024-07-11 02:31:32.691281] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:07.714 passed 00:08:07.714 Test: test_get_ns_id_desc_list ...passed 00:08:07.714 Test: test_identify_ns ...[2024-07-11 02:31:32.691527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:07.714 [2024-07-11 02:31:32.691736] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:07.714 [2024-07-11 02:31:32.691874] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:07.714 passed 00:08:07.714 Test: test_identify_ns_iocs_specific ...[2024-07-11 02:31:32.692014] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:07.714 [2024-07-11 02:31:32.692340] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:07.714 passed 00:08:07.714 Test: test_reservation_write_exclusive ...passed 00:08:07.714 Test: test_reservation_exclusive_access ...passed 00:08:07.714 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:07.714 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:07.714 Test: test_reservation_notification_log_page ...passed 00:08:07.714 Test: test_get_dif_ctx ...passed 00:08:07.714 Test: test_set_get_features ...[2024-07-11 02:31:32.692902] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:07.714 [2024-07-11 02:31:32.692972] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:07.714 [2024-07-11 02:31:32.693012] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:07.714 passed 00:08:07.714 Test: test_identify_ctrlr ...passed 00:08:07.714 Test: test_identify_ctrlr_iocs_specific ...[2024-07-11 02:31:32.693077] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:07.714 passed 00:08:07.714 Test: test_custom_admin_cmd ...passed 00:08:07.714 Test: test_fused_compare_and_write ...[2024-07-11 02:31:32.693528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:07.714 [2024-07-11 02:31:32.693575] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:07.714 [2024-07-11 02:31:32.693612] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:07.714 passed 00:08:07.714 Test: test_multi_async_event_reqs ...passed 00:08:07.714 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:07.714 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:07.714 Test: test_multi_async_events ...passed 00:08:07.714 Test: test_rae ...passed 00:08:07.714 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:07.714 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:07.714 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:08:07.714 Test: test_zcopy_read ...passed[2024-07-11 02:31:32.694092] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:07.714 00:08:07.714 Test: test_zcopy_write ...passed 00:08:07.714 Test: test_nvmf_property_set ...passed 00:08:07.714 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-11 02:31:32.694271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:07.714 passed 00:08:07.714 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:08:07.714 00:08:07.714 [2024-07-11 02:31:32.694348] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:07.714 [2024-07-11 02:31:32.694382] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:07.714 [2024-07-11 02:31:32.694413] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:07.714 [2024-07-11 02:31:32.694445] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:07.714 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.714 suites 1 1 n/a 0 0 00:08:07.714 tests 30 30 30 0 0 00:08:07.714 asserts 885 885 885 0 n/a 00:08:07.714 00:08:07.714 Elapsed time = 0.006 seconds 00:08:07.714 02:31:32 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:07.714 00:08:07.714 00:08:07.714 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.714 http://cunit.sourceforge.net/ 00:08:07.714 00:08:07.714 00:08:07.714 Suite: nvmf 00:08:07.714 Test: test_get_rw_params ...passed 00:08:07.714 Test: test_lba_in_range ...passed 00:08:07.714 Test: test_get_dif_ctx ...passed 00:08:07.714 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:07.714 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-11 02:31:32.727470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:07.714 [2024-07-11 02:31:32.727781] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:07.714 passed 00:08:07.714 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-11 02:31:32.727882] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:07.714 [2024-07-11 02:31:32.727951] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:07.714 passed 00:08:07.714 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-11 02:31:32.728034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:07.714 [2024-07-11 02:31:32.728133] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:07.714 [2024-07-11 02:31:32.728165] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:07.714 passed 00:08:07.714 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:07.714 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:07.714 00:08:07.714 [2024-07-11 02:31:32.728228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:07.714 [2024-07-11 02:31:32.728261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:07.714 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.714 suites 1 1 n/a 0 0 00:08:07.714 tests 9 9 9 0 0 00:08:07.714 asserts 157 157 157 0 n/a 00:08:07.714 00:08:07.714 Elapsed time = 0.001 seconds 00:08:07.714 02:31:32 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:07.714 00:08:07.714 00:08:07.714 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.714 http://cunit.sourceforge.net/ 00:08:07.714 00:08:07.714 00:08:07.714 Suite: nvmf 00:08:07.714 Test: test_discovery_log ...passed 00:08:07.714 Test: test_discovery_log_with_filters ...passed 00:08:07.714 00:08:07.714 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.714 suites 1 1 n/a 0 0 00:08:07.714 tests 2 2 2 0 0 00:08:07.714 asserts 238 238 238 0 n/a 00:08:07.714 00:08:07.714 Elapsed time = 0.003 seconds 00:08:07.714 02:31:32 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:07.714 00:08:07.714 00:08:07.714 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.714 http://cunit.sourceforge.net/ 00:08:07.714 00:08:07.714 00:08:07.714 Suite: nvmf 
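The ctrlr_bdev_ut errors above — "end of media", "Fused command start lba / num blocks mismatch", and the NLB-versus-SGL-length mismatches — all come from I/O bounds validation. A hypothetical, overflow-safe version of the core range check, illustrative only and not SPDK's exact code:

    #include <stdbool.h>
    #include <stdint.h>

    /* True iff [start_lba, start_lba + num_blocks) lies inside a namespace
     * of ns_num_blocks blocks, without overflowing on the addition. */
    static bool lba_range_in_media(uint64_t start_lba, uint64_t num_blocks,
                                   uint64_t ns_num_blocks)
    {
        return num_blocks <= ns_num_blocks &&
               start_lba <= ns_num_blocks - num_blocks;
    }

Reads and writes that fail this check are what produce the "end of media" rejections logged above.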
00:08:07.714 Test: nvmf_test_create_subsystem ...[2024-07-11 02:31:32.801495] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:07.714 [2024-07-11 02:31:32.801851] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:07.714 [2024-07-11 02:31:32.801941] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:07.714 [2024-07-11 02:31:32.801976] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:07.714 [2024-07-11 02:31:32.802000] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:07.714 [2024-07-11 02:31:32.802031] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:07.714 [2024-07-11 02:31:32.802129] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:07.715 [2024-07-11 02:31:32.802294] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
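The nvmf_nqn_is_valid errors here spell out the NQN grammar being enforced: at most 223 bytes, an "nqn.yyyy-mm." reverse-domain prefix whose labels start with a letter and end with an alphanumeric, valid UTF-8, and a user-specified name after a ':'. A simplified checker capturing just the length and prefix rules — illustrative, and far less strict than the real validator:

    #include <stdbool.h>
    #include <string.h>

    #define NQN_MAX_LEN 223

    static bool nqn_shape_ok(const char *nqn)
    {
        const char *colon;

        if (strlen(nqn) > NQN_MAX_LEN) {
            return false;                 /* "length 224 > max 223" */
        }
        if (strncmp(nqn, "nqn.", 4) != 0) {
            return false;
        }
        colon = strchr(nqn, ':');
        /* "NQN must contain user specified name with a ':' as a prefix." */
        return colon != NULL && colon[1] != '\0';
    }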
00:08:07.715 [2024-07-11 02:31:32.802390] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:07.715 [2024-07-11 02:31:32.802423] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:07.715 [2024-07-11 02:31:32.802455] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:07.715 passed 00:08:07.715 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-11 02:31:32.802584] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:07.715 passed 00:08:07.715 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:07.715 Test: test_reservation_register ...[2024-07-11 02:31:32.802670] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:07.715 [2024-07-11 02:31:32.802906] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.715 [2024-07-11 02:31:32.803021] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:07.715 passed 00:08:07.715 Test: test_reservation_register_with_ptpl ...passed 00:08:07.715 Test: test_reservation_acquire_preempt_1 ...[2024-07-11 02:31:32.803982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.715 passed 00:08:07.974 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:07.974 Test: test_reservation_release ...[2024-07-11 02:31:32.805835] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_reservation_unregister_notification ...[2024-07-11 02:31:32.806114] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_reservation_release_notification ...[2024-07-11 02:31:32.806370] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_reservation_release_notification_write_exclusive ...[2024-07-11 02:31:32.806594] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_reservation_clear_notification ...[2024-07-11 02:31:32.806843] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_reservation_preempt_notification ...[2024-07-11 02:31:32.807065] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:07.974 passed 00:08:07.974 Test: test_spdk_nvmf_ns_event ...passed 00:08:07.974 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:07.974 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:07.974 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-11 02:31:32.807773] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:07.974 passed 00:08:07.974 Test: test_nvmf_ns_reservation_report ...[2024-07-11 02:31:32.807886] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:07.974 passed 00:08:07.974 Test: test_nvmf_nqn_is_valid ...[2024-07-11 02:31:32.808048] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:07.974 passed 00:08:07.974 Test: test_nvmf_ns_reservation_restore ...[2024-07-11 02:31:32.808139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:07.974 [2024-07-11 02:31:32.808174] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:f26ec5f8-7d12-458a-84f0-dc18a5d52c1": uuid is not the correct length 00:08:07.974 [2024-07-11 02:31:32.808203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:07.974 [2024-07-11 02:31:32.808313] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:07.974 passed 00:08:07.974 Test: test_nvmf_subsystem_state_change ...passed 00:08:07.974 Test: test_nvmf_reservation_custom_ops ...passed 00:08:07.974 00:08:07.974 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.974 suites 1 1 n/a 0 0 00:08:07.974 tests 22 22 22 0 0 00:08:07.974 asserts 407 407 407 0 n/a 00:08:07.974 00:08:07.974 Elapsed time = 0.008 seconds 00:08:07.975 02:31:32 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:07.975 00:08:07.975 00:08:07.975 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.975 http://cunit.sourceforge.net/ 00:08:07.975 00:08:07.975 00:08:07.975 Suite: nvmf 00:08:07.975 Test: test_nvmf_tcp_create ...[2024-07-11 02:31:32.870588] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_destroy ...passed 00:08:07.975 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:07.975 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:07.975 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:07.975 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:07.975 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:07.975 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-11 02:31:32.972051] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972133] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.972217] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.972254] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972278] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:07.975 Test: test_nvmf_tcp_icreq_handle ...[2024-07-11 02:31:32.972354] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:07.975 [2024-07-11 02:31:32.972455] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972519] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.972555] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:07.975 [2024-07-11 02:31:32.972590] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:07.975 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-11 02:31:32.972614] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.972685] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972730] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-11 02:31:32.972799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:07.975 [2024-07-11 02:31:32.972843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.972865] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9c9e0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.972910] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff86b9d740 00:08:07.975 [2024-07-11 02:31:32.972985] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973029] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973064] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff86b9cea0 00:08:07.975 [2024-07-11 02:31:32.973093] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973121] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973149] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:07.975 [2024-07-11 02:31:32.973180] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973217] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973249] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:07.975 [2024-07-11 02:31:32.973273] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973301] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973329] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973356] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973404] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973426] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973465] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973486] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 02:31:32.973520] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-11 02:31:32.973585] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973607] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 [2024-07-11 
02:31:32.973658] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:07.975 [2024-07-11 02:31:32.973681] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff86b9cea0 is same with the state(5) to be set 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-11 02:31:32.991661] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:07.975 [2024-07-11 02:31:32.991711] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-11 02:31:32.991952] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:07.975 [2024-07-11 02:31:32.991986] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:07.975 passed 00:08:07.975 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-11 02:31:32.992129] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:07.975 [2024-07-11 02:31:32.992157] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:07.975 passed 00:08:07.975 00:08:07.975 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.975 suites 1 1 n/a 0 0 00:08:07.975 tests 17 17 17 0 0 00:08:07.975 asserts 222 222 222 0 n/a 00:08:07.975 00:08:07.975 Elapsed time = 0.145 seconds 00:08:07.975 02:31:33 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:08.234 00:08:08.234 00:08:08.234 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.234 http://cunit.sourceforge.net/ 00:08:08.234 00:08:08.234 00:08:08.234 Suite: nvmf 00:08:08.234 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:08.234 00:08:08.234 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.234 suites 1 1 n/a 0 0 00:08:08.234 tests 1 1 1 0 0 00:08:08.234 asserts 17 17 17 0 n/a 00:08:08.234 00:08:08.234 Elapsed time = 0.022 seconds 00:08:08.234 00:08:08.234 real 0m0.468s 00:08:08.234 user 0m0.223s 00:08:08.234 sys 0m0.247s 00:08:08.234 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.234 ************************************ 00:08:08.234 END TEST unittest_nvmf 00:08:08.234 ************************************ 00:08:08.234 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 02:31:33 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.234 02:31:33 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.234 02:31:33 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:08.234 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.234 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.234 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 ************************************ 00:08:08.234 START TEST 
unittest_nvmf_rdma 00:08:08.234 ************************************ 00:08:08.234 02:31:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:08.234 00:08:08.234 00:08:08.234 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.234 http://cunit.sourceforge.net/ 00:08:08.234 00:08:08.234 00:08:08.234 Suite: nvmf 00:08:08.234 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-11 02:31:33.216649] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:08.234 [2024-07-11 02:31:33.216972] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:08.234 [2024-07-11 02:31:33.217022] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:08.234 passed 00:08:08.234 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:08.234 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:08.234 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:08.234 Test: test_nvmf_rdma_opts_init ...passed 00:08:08.234 Test: test_nvmf_rdma_request_free_data ...passed 00:08:08.234 Test: test_nvmf_rdma_update_ibv_state ...passed 00:08:08.235 Test: test_nvmf_rdma_resources_create ...[2024-07-11 02:31:33.218180] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:08:08.235 [2024-07-11 02:31:33.218230] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:08.235 passed 00:08:08.235 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:08.235 Test: test_nvmf_rdma_resize_cq ...[2024-07-11 02:31:33.219366] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:08:08.235 Using CQ of insufficient size may lead to CQ overrun 00:08:08.235 passed 00:08:08.235 00:08:08.235 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.235 suites 1 1 n/a 0 0 00:08:08.235 tests 10 10 10 0 0 00:08:08.235 asserts 584 584 584 0 n/a 00:08:08.235 00:08:08.235 Elapsed time = 0.003 seconds 00:08:08.235 [2024-07-11 02:31:33.219475] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:08.235 [2024-07-11 02:31:33.219532] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:08.235 00:08:08.235 real 0m0.043s 00:08:08.235 user 0m0.034s 00:08:08.235 sys 0m0.009s 00:08:08.235 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.235 ************************************ 00:08:08.235 END TEST unittest_nvmf_rdma 00:08:08.235 ************************************ 00:08:08.235 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.235 02:31:33 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.235 02:31:33 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:08:08.235 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.235 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.235 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.235 ************************************ 00:08:08.235 START TEST unittest_scsi 00:08:08.235 ************************************ 00:08:08.235 02:31:33 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:08:08.235 02:31:33 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:08.235 00:08:08.235 00:08:08.235 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.235 http://cunit.sourceforge.net/ 00:08:08.235 00:08:08.235 00:08:08.235 Suite: dev_suite 00:08:08.235 Test: dev_destruct_null_dev ...passed 00:08:08.235 Test: dev_destruct_zero_luns ...passed 00:08:08.235 Test: dev_destruct_null_lun ...passed 00:08:08.235 Test: dev_destruct_success ...passed 00:08:08.235 Test: dev_construct_num_luns_zero ...[2024-07-11 02:31:33.309676] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:08.235 passed 00:08:08.235 Test: dev_construct_no_lun_zero ...passed 00:08:08.235 Test: dev_construct_null_lun ...[2024-07-11 02:31:33.309985] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:08.235 [2024-07-11 02:31:33.310050] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:08.235 passed 00:08:08.235 Test: dev_construct_name_too_long ...passed 00:08:08.235 Test: dev_construct_success ...passed 00:08:08.235 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:08.235 Test: dev_queue_mgmt_task_success ...[2024-07-11 02:31:33.310084] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed 
length 255 00:08:08.235 passed 00:08:08.235 Test: dev_queue_task_success ...passed 00:08:08.235 Test: dev_stop_success ...passed 00:08:08.235 Test: dev_add_port_max_ports ...passed[2024-07-11 02:31:33.310338] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:08.235 00:08:08.235 Test: dev_add_port_construct_failure1 ...passed 00:08:08.235 Test: dev_add_port_construct_failure2 ...[2024-07-11 02:31:33.310429] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:08.235 [2024-07-11 02:31:33.310505] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:08.235 passed 00:08:08.235 Test: dev_add_port_success1 ...passed 00:08:08.235 Test: dev_add_port_success2 ...passed 00:08:08.235 Test: dev_add_port_success3 ...passed 00:08:08.235 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:08.235 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:08.235 Test: dev_find_port_by_id_success ...passed 00:08:08.235 Test: dev_add_lun_bdev_not_found ...passed 00:08:08.235 Test: dev_add_lun_no_free_lun_id ...[2024-07-11 02:31:33.310849] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:08.235 passed 00:08:08.235 Test: dev_add_lun_success1 ...passed 00:08:08.235 Test: dev_add_lun_success2 ...passed 00:08:08.235 Test: dev_check_pending_tasks ...passed 00:08:08.235 Test: dev_iterate_luns ...passed 00:08:08.235 Test: dev_find_free_lun ...passed 00:08:08.235 00:08:08.235 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.235 suites 1 1 n/a 0 0 00:08:08.235 tests 29 29 29 0 0 00:08:08.235 asserts 97 97 97 0 n/a 00:08:08.235 00:08:08.235 Elapsed time = 0.002 seconds 00:08:08.494 02:31:33 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:08.494 00:08:08.494 00:08:08.494 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.494 http://cunit.sourceforge.net/ 00:08:08.494 00:08:08.494 00:08:08.494 Suite: lun_suite 00:08:08.494 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-11 02:31:33.344906] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:08.494 passed 00:08:08.494 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-11 02:31:33.345279] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:08.494 passed 00:08:08.494 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:08.494 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:08.494 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-11 02:31:33.345466] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:08.494 passed 00:08:08.494 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:08.494 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:08.494 Test: lun_append_task_null_lun_not_supported ...passed 00:08:08.494 Test: lun_execute_scsi_task_pending ...passed 00:08:08.494 Test: lun_execute_scsi_task_complete ...passed 00:08:08.494 Test: lun_execute_scsi_task_resize ...passed 00:08:08.494 Test: lun_destruct_success ...passed 00:08:08.494 Test: lun_construct_null_ctx ...passed 00:08:08.494 Test: lun_construct_success ...[2024-07-11 02:31:33.345794] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:08.494 passed 00:08:08.494 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:08.494 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:08.494 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:08.494 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:08.494 00:08:08.494 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.494 suites 1 1 n/a 0 0 00:08:08.494 tests 18 18 18 0 0 00:08:08.494 asserts 153 153 153 0 n/a 00:08:08.494 00:08:08.494 Elapsed time = 0.001 seconds 00:08:08.494 02:31:33 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:08.494 00:08:08.494 00:08:08.494 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.494 http://cunit.sourceforge.net/ 00:08:08.494 00:08:08.494 00:08:08.494 Suite: scsi_suite 00:08:08.494 Test: scsi_init ...passed 00:08:08.494 00:08:08.494 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.494 suites 1 1 n/a 0 0 00:08:08.494 tests 1 1 1 0 0 00:08:08.494 asserts 1 1 1 0 n/a 00:08:08.494 00:08:08.494 Elapsed time = 0.000 seconds 00:08:08.494 02:31:33 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:08.494 00:08:08.494 00:08:08.494 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.494 http://cunit.sourceforge.net/ 00:08:08.494 00:08:08.494 00:08:08.494 Suite: translation_suite 00:08:08.494 Test: mode_select_6_test ...passed 00:08:08.494 Test: mode_select_6_test2 ...passed 00:08:08.494 Test: mode_sense_6_test ...passed 00:08:08.494 Test: mode_sense_10_test ...passed 00:08:08.494 Test: inquiry_evpd_test ...passed 00:08:08.495 Test: inquiry_standard_test ...passed 00:08:08.495 Test: inquiry_overflow_test ...passed 00:08:08.495 Test: task_complete_test ...passed 00:08:08.495 Test: lba_range_test ...passed 00:08:08.495 Test: xfer_len_test ...[2024-07-11 02:31:33.410234] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:08.495 passed 00:08:08.495 Test: xfer_test ...passed 00:08:08.495 Test: scsi_name_padding_test ...passed 00:08:08.495 Test: get_dif_ctx_test ...passed 00:08:08.495 Test: unmap_split_test ...passed 00:08:08.495 00:08:08.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.495 suites 1 1 n/a 0 0 00:08:08.495 tests 14 14 14 0 0 00:08:08.495 asserts 1200 1200 1200 0 n/a 00:08:08.495 00:08:08.495 Elapsed time = 0.004 seconds 00:08:08.495 02:31:33 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:08.495 00:08:08.495 00:08:08.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.495 http://cunit.sourceforge.net/ 00:08:08.495 00:08:08.495 00:08:08.495 Suite: reservation_suite 00:08:08.495 Test: test_reservation_register ...[2024-07-11 02:31:33.438681] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 passed 00:08:08.495 Test: test_reservation_reserve ...[2024-07-11 02:31:33.438977] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 [2024-07-11 02:31:33.439047] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 
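Two of the SCSI suites above encode hard limits worth noting: scsi_bdev_ut rejects a transfer of 8193 blocks against a maximum of 8192, and scsi_pr_ut (whose output continues just below) rejects conflicting reservations. The transfer-length rule reduces to a one-line guard, with illustrative names:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_XFER_LEN 8192u   /* maximum transfer length from the log above */

    static bool xfer_len_ok(uint32_t xfer_len)
    {
        return xfer_len <= MAX_XFER_LEN;   /* 8193 is rejected, as logged */
    }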
00:08:08.495 passed 00:08:08.495 Test: test_reservation_preempt_non_all_regs ...[2024-07-11 02:31:33.439130] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:08.495 [2024-07-11 02:31:33.439183] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 [2024-07-11 02:31:33.439238] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:08.495 passed 00:08:08.495 Test: test_reservation_preempt_all_regs ...[2024-07-11 02:31:33.439344] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 passed 00:08:08.495 Test: test_reservation_cmds_conflict ...[2024-07-11 02:31:33.439458] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 [2024-07-11 02:31:33.439511] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:08.495 [2024-07-11 02:31:33.439550] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:08.495 [2024-07-11 02:31:33.439571] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:08.495 passed 00:08:08.495 Test: test_scsi2_reserve_release ...passed 00:08:08.495 Test: test_pr_with_scsi2_reserve_release ...passed 00:08:08.495 00:08:08.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.495 suites 1 1 n/a 0 0 00:08:08.495 tests 7 7 7 0 0 00:08:08.495 asserts 257 257 257 0 n/a 00:08:08.495 00:08:08.495 Elapsed time = 0.001 seconds 00:08:08.495 [2024-07-11 02:31:33.439599] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:08.495 [2024-07-11 02:31:33.439619] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:08.495 [2024-07-11 02:31:33.439692] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:08.495 00:08:08.495 real 0m0.158s 00:08:08.495 user 0m0.082s 00:08:08.495 sys 0m0.078s 00:08:08.495 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.495 ************************************ 00:08:08.495 END TEST unittest_scsi 00:08:08.495 ************************************ 00:08:08.495 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.495 02:31:33 -- unit/unittest.sh@276 -- # uname -s 00:08:08.495 02:31:33 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:08:08.495 02:31:33 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:08:08.495 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.495 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.495 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.495 ************************************ 00:08:08.495 START TEST unittest_sock 00:08:08.495 ************************************ 00:08:08.495 02:31:33 -- common/autotest_common.sh@1104 -- # unittest_sock 00:08:08.495 02:31:33 -- unit/unittest.sh@123 -- 
# /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:08.495 00:08:08.495 00:08:08.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.495 http://cunit.sourceforge.net/ 00:08:08.495 00:08:08.495 00:08:08.495 Suite: sock 00:08:08.495 Test: posix_sock ...passed 00:08:08.495 Test: ut_sock ...passed 00:08:08.495 Test: posix_sock_group ...passed 00:08:08.495 Test: ut_sock_group ...passed 00:08:08.495 Test: posix_sock_group_fairness ...passed 00:08:08.495 Test: _posix_sock_close ...passed 00:08:08.495 Test: sock_get_default_opts ...passed 00:08:08.495 Test: ut_sock_impl_get_set_opts ...passed 00:08:08.495 Test: posix_sock_impl_get_set_opts ...passed 00:08:08.495 Test: ut_sock_map ...passed 00:08:08.495 Test: override_impl_opts ...passed 00:08:08.495 Test: ut_sock_group_get_ctx ...passed 00:08:08.495 00:08:08.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.495 suites 1 1 n/a 0 0 00:08:08.495 tests 12 12 12 0 0 00:08:08.495 asserts 349 349 349 0 n/a 00:08:08.495 00:08:08.495 Elapsed time = 0.007 seconds 00:08:08.495 02:31:33 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:08.495 00:08:08.495 00:08:08.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.495 http://cunit.sourceforge.net/ 00:08:08.495 00:08:08.495 00:08:08.495 Suite: posix 00:08:08.495 Test: flush ...passed 00:08:08.495 00:08:08.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.495 suites 1 1 n/a 0 0 00:08:08.495 tests 1 1 1 0 0 00:08:08.495 asserts 28 28 28 0 n/a 00:08:08.495 00:08:08.495 Elapsed time = 0.000 seconds 00:08:08.753 02:31:33 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.753 00:08:08.753 real 0m0.096s 00:08:08.753 user 0m0.042s 00:08:08.753 sys 0m0.031s 00:08:08.753 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.754 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.754 ************************************ 00:08:08.754 END TEST unittest_sock 00:08:08.754 ************************************ 00:08:08.754 02:31:33 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:08.754 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.754 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.754 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.754 ************************************ 00:08:08.754 START TEST unittest_thread 00:08:08.754 ************************************ 00:08:08.754 02:31:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:08.754 00:08:08.754 00:08:08.754 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.754 http://cunit.sourceforge.net/ 00:08:08.754 00:08:08.754 00:08:08.754 Suite: io_channel 00:08:08.754 Test: thread_alloc ...passed 00:08:08.754 Test: thread_send_msg ...passed 00:08:08.754 Test: thread_poller ...passed 00:08:08.754 Test: poller_pause ...passed 00:08:08.754 Test: thread_for_each ...passed 00:08:08.754 Test: for_each_channel_remove ...passed 00:08:08.754 Test: for_each_channel_unreg ...[2024-07-11 02:31:33.687822] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7fff71097830 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:08.754 passed 00:08:08.754 Test: thread_name ...passed 
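The thread_ut suite starting above exercises SPDK's io_device registry: registering the same device pointer twice fails ("already registered", above), and fetching a channel for a device that was never registered fails ("could not find io_device", seen just below). A toy registry reproducing those two failure modes — purely illustrative, not SPDK's implementation:

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_DEVICES 16

    static void *g_devices[MAX_DEVICES];
    static size_t g_num_devices;

    static bool device_registered(void *key)
    {
        for (size_t i = 0; i < g_num_devices; i++) {
            if (g_devices[i] == key) {
                return true;
            }
        }
        return false;
    }

    static bool io_device_register(void *key)
    {
        if (device_registered(key) || g_num_devices == MAX_DEVICES) {
            return false;              /* "io_device ... already registered" */
        }
        g_devices[g_num_devices++] = key;
        return true;
    }

    static bool get_io_channel(void *key)
    {
        return device_registered(key); /* false -> "could not find io_device" */
    }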
00:08:08.754 Test: channel ...[2024-07-11 02:31:33.691903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55a836a2e0e0 00:08:08.754 passed 00:08:08.754 Test: channel_destroy_races ...passed 00:08:08.754 Test: thread_exit_test ...[2024-07-11 02:31:33.696923] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:08.754 passed 00:08:08.754 Test: thread_update_stats_test ...passed 00:08:08.754 Test: nested_channel ...passed 00:08:08.754 Test: device_unregister_and_thread_exit_race ...passed 00:08:08.754 Test: cache_closest_timed_poller ...passed 00:08:08.754 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:08.754 Test: io_device_lookup ...passed 00:08:08.754 Test: spdk_spin ...[2024-07-11 02:31:33.707530] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:08.754 [2024-07-11 02:31:33.707579] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff71097820 00:08:08.754 [2024-07-11 02:31:33.707668] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:08.754 [2024-07-11 02:31:33.709299] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:08.754 [2024-07-11 02:31:33.709368] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff71097820 00:08:08.754 [2024-07-11 02:31:33.709394] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:08.754 [2024-07-11 02:31:33.709423] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff71097820 00:08:08.754 [2024-07-11 02:31:33.709445] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:08.754 [2024-07-11 02:31:33.709479] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff71097820 00:08:08.754 [2024-07-11 02:31:33.709502] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:08.754 [2024-07-11 02:31:33.709549] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff71097820 00:08:08.754 passed 00:08:08.754 Test: for_each_channel_and_thread_exit_race ...passed 00:08:08.754 Test: for_each_thread_and_thread_exit_race ...passed 00:08:08.754 00:08:08.754 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.754 suites 1 1 n/a 0 0 00:08:08.754 tests 20 20 20 0 0 00:08:08.754 asserts 409 409 409 0 n/a 00:08:08.754 00:08:08.754 Elapsed time = 0.049 seconds 00:08:08.754 00:08:08.754 real 0m0.088s 00:08:08.754 user 0m0.054s 00:08:08.754 sys 0m0.034s 00:08:08.754 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.754 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.754 ************************************ 00:08:08.754 END TEST unittest_thread 00:08:08.754 
************************************ 00:08:08.754 02:31:33 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:08.754 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.754 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.754 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:08.754 ************************************ 00:08:08.754 START TEST unittest_iobuf 00:08:08.754 ************************************ 00:08:08.754 02:31:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:08.754 00:08:08.754 00:08:08.754 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.754 http://cunit.sourceforge.net/ 00:08:08.754 00:08:08.754 00:08:08.754 Suite: io_channel 00:08:08.754 Test: iobuf ...passed 00:08:08.754 Test: iobuf_cache ...[2024-07-11 02:31:33.805432] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:08.754 [2024-07-11 02:31:33.805748] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:08.754 [2024-07-11 02:31:33.805881] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:08.754 [2024-07-11 02:31:33.805928] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:08.754 [2024-07-11 02:31:33.805998] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:08.754 [2024-07-11 02:31:33.806038] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:08:08.754 passed 00:08:08.754 00:08:08.754 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.754 suites 1 1 n/a 0 0 00:08:08.754 tests 2 2 2 0 0 00:08:08.754 asserts 107 107 107 0 n/a 00:08:08.754 00:08:08.754 Elapsed time = 0.006 seconds 00:08:08.754 00:08:08.754 real 0m0.037s 00:08:08.754 user 0m0.025s 00:08:08.754 sys 0m0.013s 00:08:08.754 02:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.754 ************************************ 00:08:08.754 END TEST unittest_iobuf 00:08:08.754 ************************************ 00:08:08.754 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:09.012 02:31:33 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:08:09.012 02:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.012 02:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.012 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:09.012 ************************************ 00:08:09.012 START TEST unittest_util 00:08:09.012 ************************************ 00:08:09.012 02:31:33 -- common/autotest_common.sh@1104 -- # unittest_util 00:08:09.012 02:31:33 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:09.012 00:08:09.012 00:08:09.012 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.012 http://cunit.sourceforge.net/ 00:08:09.012 00:08:09.012 00:08:09.012 Suite: base64 00:08:09.012 Test: test_base64_get_encoded_strlen ...passed 00:08:09.012 Test: test_base64_get_decoded_len ...passed 00:08:09.012 Test: test_base64_encode ...passed 00:08:09.012 Test: test_base64_decode ...passed 00:08:09.012 Test: test_base64_urlsafe_encode ...passed 00:08:09.012 Test: test_base64_urlsafe_decode ...passed 00:08:09.012 00:08:09.012 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 6 6 6 0 0 00:08:09.013 asserts 112 112 112 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.000 seconds 00:08:09.013 02:31:33 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: bit_array 00:08:09.013 Test: test_1bit ...passed 00:08:09.013 Test: test_64bit ...passed 00:08:09.013 Test: test_find ...passed 00:08:09.013 Test: test_resize ...passed 00:08:09.013 Test: test_errors ...passed 00:08:09.013 Test: test_count ...passed 00:08:09.013 Test: test_mask_store_load ...passed 00:08:09.013 Test: test_mask_clear ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 8 8 8 0 0 00:08:09.013 asserts 5075 5075 5075 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.002 seconds 00:08:09.013 02:31:33 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: cpuset 00:08:09.013 Test: test_cpuset ...passed 00:08:09.013 Test: test_cpuset_parse ...[2024-07-11 02:31:33.951987] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:09.013 [2024-07-11 02:31:33.952462] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:09.013 [2024-07-11 02:31:33.952551] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:09.013 [2024-07-11 02:31:33.952612] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:09.013 [2024-07-11 02:31:33.952635] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:09.013 [2024-07-11 02:31:33.952663] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:09.013 [2024-07-11 02:31:33.952688] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:09.013 [2024-07-11 02:31:33.952729] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:09.013 passed 00:08:09.013 Test: test_cpuset_fmt ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 3 3 3 0 0 00:08:09.013 asserts 65 65 65 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.003 seconds 00:08:09.013 02:31:33 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: crc16 00:08:09.013 Test: test_crc16_t10dif ...passed 00:08:09.013 Test: test_crc16_t10dif_seed ...passed 00:08:09.013 Test: test_crc16_t10dif_copy ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 3 3 3 0 0 00:08:09.013 asserts 5 5 5 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.000 seconds 00:08:09.013 02:31:33 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: crc32_ieee 00:08:09.013 Test: test_crc32_ieee ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 1 1 1 0 0 00:08:09.013 asserts 1 1 1 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.000 seconds 00:08:09.013 02:31:34 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: crc32c 00:08:09.013 Test: test_crc32c ...passed 00:08:09.013 Test: test_crc32c_nvme ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 2 2 2 0 0 00:08:09.013 asserts 16 16 16 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.001 seconds 00:08:09.013 02:31:34 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: crc64 00:08:09.013 Test: test_crc64_nvme 
...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 1 1 1 0 0 00:08:09.013 asserts 4 4 4 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.000 seconds 00:08:09.013 02:31:34 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:09.013 00:08:09.013 00:08:09.013 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.013 http://cunit.sourceforge.net/ 00:08:09.013 00:08:09.013 00:08:09.013 Suite: string 00:08:09.013 Test: test_parse_ip_addr ...passed 00:08:09.013 Test: test_str_chomp ...passed 00:08:09.013 Test: test_parse_capacity ...passed 00:08:09.013 Test: test_sprintf_append_realloc ...passed 00:08:09.013 Test: test_strtol ...passed 00:08:09.013 Test: test_strtoll ...passed 00:08:09.013 Test: test_strarray ...passed 00:08:09.013 Test: test_strcpy_replace ...passed 00:08:09.013 00:08:09.013 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.013 suites 1 1 n/a 0 0 00:08:09.013 tests 8 8 8 0 0 00:08:09.013 asserts 161 161 161 0 n/a 00:08:09.013 00:08:09.013 Elapsed time = 0.001 seconds 00:08:09.274 02:31:34 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:09.274 00:08:09.274 00:08:09.274 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.274 http://cunit.sourceforge.net/ 00:08:09.274 00:08:09.274 00:08:09.274 Suite: dif 00:08:09.274 Test: dif_generate_and_verify_test ...[2024-07-11 02:31:34.131077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:09.274 [2024-07-11 02:31:34.131586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:09.274 [2024-07-11 02:31:34.131870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:09.274 [2024-07-11 02:31:34.132155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:09.274 [2024-07-11 02:31:34.132432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:09.274 [2024-07-11 02:31:34.132711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:09.274 passed 00:08:09.274 Test: dif_disable_check_test ...[2024-07-11 02:31:34.133724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:09.274 [2024-07-11 02:31:34.134065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:09.274 [2024-07-11 02:31:34.134348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:09.274 passed 00:08:09.274 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-11 02:31:34.135386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:09.274 [2024-07-11 02:31:34.135707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:09.274 [2024-07-11 
02:31:34.136037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:09.274 [2024-07-11 02:31:34.136398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:09.274 [2024-07-11 02:31:34.136726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:09.274 [2024-07-11 02:31:34.137028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:09.274 [2024-07-11 02:31:34.137338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:09.274 [2024-07-11 02:31:34.137651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:09.274 [2024-07-11 02:31:34.137969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:09.274 [2024-07-11 02:31:34.138293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:09.274 [2024-07-11 02:31:34.138608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:09.274 passed 00:08:09.274 Test: dif_apptag_mask_test ...[2024-07-11 02:31:34.138918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:09.274 [2024-07-11 02:31:34.139206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:09.274 passed 00:08:09.274 Test: dif_sec_512_md_0_error_test ...[2024-07-11 02:31:34.139402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:09.274 passed 00:08:09.274 Test: dif_sec_4096_md_0_error_test ...passed 00:08:09.274 Test: dif_sec_4100_md_128_error_test ...[2024-07-11 02:31:34.139436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:09.274 [2024-07-11 02:31:34.139467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:09.274 passed 00:08:09.274 Test: dif_guard_seed_test ...passed 00:08:09.274 Test: dif_guard_value_test ...[2024-07-11 02:31:34.139514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:09.274 [2024-07-11 02:31:34.139543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:09.274 passed 00:08:09.274 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:09.274 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:09.274 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 02:31:34.184455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd0c, Actual=fd4c 00:08:09.274 [2024-07-11 02:31:34.186919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe61, Actual=fe21 00:08:09.274 [2024-07-11 02:31:34.189417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.274 [2024-07-11 02:31:34.191897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.274 [2024-07-11 02:31:34.194425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.274 [2024-07-11 02:31:34.196857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.274 [2024-07-11 02:31:34.199326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=a866 00:08:09.274 [2024-07-11 02:31:34.201246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=3c8f 00:08:09.274 [2024-07-11 02:31:34.203175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ad, 
Actual=1ab753ed 00:08:09.274 [2024-07-11 02:31:34.205728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574620, Actual=38574660 00:08:09.274 [2024-07-11 02:31:34.208181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.274 [2024-07-11 02:31:34.210606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.213043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.275 [2024-07-11 02:31:34.215476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.275 [2024-07-11 02:31:34.217967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=c286ffe1 00:08:09.275 [2024-07-11 02:31:34.219833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=298157c9 00:08:09.275 [2024-07-11 02:31:34.221811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.275 [2024-07-11 02:31:34.224280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.275 [2024-07-11 02:31:34.226786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.229317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.231811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.275 [2024-07-11 02:31:34.234309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.275 [2024-07-11 02:31:34.236791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.275 [2024-07-11 02:31:34.238694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.275 passed 00:08:09.275 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-11 02:31:34.239789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:09.275 [2024-07-11 02:31:34.240149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:09.275 [2024-07-11 02:31:34.240480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.240806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.241158] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.241473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.241817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a866 00:08:09.275 [2024-07-11 02:31:34.242084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3c8f 00:08:09.275 [2024-07-11 02:31:34.242370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:08:09.275 [2024-07-11 02:31:34.242689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:08:09.275 [2024-07-11 02:31:34.243027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.243359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.243682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.244039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.244370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c286ffe1 00:08:09.275 [2024-07-11 02:31:34.244635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=298157c9 00:08:09.275 [2024-07-11 02:31:34.244927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.275 [2024-07-11 02:31:34.245266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.275 [2024-07-11 02:31:34.245607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.245962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.246288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.246603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.246940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.275 [2024-07-11 02:31:34.247228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.275 passed 
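For readers decoding the Guard/App Tag/Ref Tag comparisons above: dif_ut corrupts one protection-information field at a time and checks that _dif_verify reports it. The sketch below shows the classic 8-byte T10 DIF tuple those fields live in and spells out the three comparisons; it is a from-scratch illustration of the format, not SPDK's internal code, and dif_crc16 is a plain bitwise CRC16-T10DIF (polynomial 0x8BB7, zero init) standing in for the library's guard routine.

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* 8 bytes of protection information appended to each data block. */
struct t10_pi_tuple {
	uint16_t guard;   /* CRC16-T10DIF over the data block */
	uint16_t app_tag; /* application-defined, maskable via apptag_mask */
	uint32_t ref_tag; /* typically the low 32 bits of the starting LBA */
};

static uint16_t
dif_crc16(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	while (len--) {
		crc ^= (uint16_t)(*buf++) << 8;
		for (int i = 0; i < 8; i++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

static int
verify_block(const uint8_t *data, size_t len, const struct t10_pi_tuple *pi,
	     uint64_t lba, uint16_t exp_app_tag, uint16_t apptag_mask)
{
	uint16_t guard = dif_crc16(data, len);

	if (guard != pi->guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
			", Expected=%x, Actual=%x\n", lba, guard, pi->guard);
		return -1;
	}
	if ((pi->app_tag & apptag_mask) != (exp_app_tag & apptag_mask)) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	if (pi->ref_tag != (uint32_t)lba) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	return 0;
}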
00:08:09.275 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-11 02:31:34.247556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:09.275 [2024-07-11 02:31:34.247882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:09.275 [2024-07-11 02:31:34.248217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.248551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.248886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.249209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.249529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a866 00:08:09.275 [2024-07-11 02:31:34.249829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3c8f 00:08:09.275 [2024-07-11 02:31:34.250103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:08:09.275 [2024-07-11 02:31:34.250428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:08:09.275 [2024-07-11 02:31:34.250753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.251072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.251395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.251733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.252085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c286ffe1 00:08:09.275 [2024-07-11 02:31:34.252365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=298157c9 00:08:09.275 [2024-07-11 02:31:34.252635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.275 [2024-07-11 02:31:34.252920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.275 [2024-07-11 02:31:34.253212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.253505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.253814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.254097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.254400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.275 [2024-07-11 02:31:34.254632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.275 passed 00:08:09.275 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-11 02:31:34.254916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:09.275 [2024-07-11 02:31:34.255220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:09.275 [2024-07-11 02:31:34.255512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.255796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.256146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.256441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.275 [2024-07-11 02:31:34.256738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a866 00:08:09.275 [2024-07-11 02:31:34.256988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3c8f 00:08:09.275 [2024-07-11 02:31:34.257234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:08:09.275 [2024-07-11 02:31:34.257521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:08:09.275 [2024-07-11 02:31:34.257848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.275 [2024-07-11 02:31:34.258170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.258462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.258756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.259047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c286ffe1 00:08:09.276 [2024-07-11 02:31:34.259291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=298157c9 00:08:09.276 [2024-07-11 02:31:34.259542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.276 [2024-07-11 02:31:34.259833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.276 [2024-07-11 02:31:34.260142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.260439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.260731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.261023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.261331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.276 [2024-07-11 02:31:34.261576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.276 passed 00:08:09.276 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-11 02:31:34.261883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:09.276 [2024-07-11 02:31:34.262174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:09.276 [2024-07-11 02:31:34.262467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.262761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.263072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.263358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.263649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a866 00:08:09.276 [2024-07-11 02:31:34.263884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3c8f 00:08:09.276 passed 00:08:09.276 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-11 02:31:34.264229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:08:09.276 [2024-07-11 02:31:34.264569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:08:09.276 [2024-07-11 02:31:34.264907] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.265206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.265501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.265812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.266125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c286ffe1 00:08:09.276 [2024-07-11 02:31:34.266366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=298157c9 00:08:09.276 [2024-07-11 02:31:34.266657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.276 [2024-07-11 02:31:34.266953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.276 [2024-07-11 02:31:34.267240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.267549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.267835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.268135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.268449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.276 [2024-07-11 02:31:34.268694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.276 passed 00:08:09.276 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-11 02:31:34.268963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:09.276 [2024-07-11 02:31:34.269260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:09.276 [2024-07-11 02:31:34.269545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.269850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.270160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.270454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed 
to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.270745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a866 00:08:09.276 [2024-07-11 02:31:34.270979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3c8f 00:08:09.276 passed 00:08:09.276 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-11 02:31:34.271274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:08:09.276 [2024-07-11 02:31:34.271563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:08:09.276 [2024-07-11 02:31:34.271872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.272182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.272491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.272780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.273074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c286ffe1 00:08:09.276 [2024-07-11 02:31:34.273309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=298157c9 00:08:09.276 [2024-07-11 02:31:34.273615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.276 [2024-07-11 02:31:34.273941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a226, Actual=88010a2d4837a266 00:08:09.276 [2024-07-11 02:31:34.274241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.274528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.274822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.275106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:09.276 [2024-07-11 02:31:34.275420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.276 [2024-07-11 02:31:34.275669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9793be4fcb363554 00:08:09.276 passed 00:08:09.276 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:09.276 
Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:09.276 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:09.276 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:09.276 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 02:31:34.319769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd0c, Actual=fd4c 00:08:09.276 [2024-07-11 02:31:34.320916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=40cd, Actual=408d 00:08:09.276 [2024-07-11 02:31:34.322023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.323151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.276 [2024-07-11 02:31:34.324271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.325420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.326555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=a866 00:08:09.277 [2024-07-11 02:31:34.327658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=33ef 00:08:09.277 [2024-07-11 02:31:34.328760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ad, Actual=1ab753ed 00:08:09.277 [2024-07-11 02:31:34.329868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=3ec9e017, Actual=3ec9e057 00:08:09.277 [2024-07-11 02:31:34.330969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.332107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.333204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.334312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.335482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=c286ffe1 00:08:09.277 [2024-07-11 02:31:34.336636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=4beb7431 00:08:09.277 [2024-07-11 02:31:34.337743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=93, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.277 [2024-07-11 02:31:34.338861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98d38de860f9065f, Actual=98d38de860f9061f 00:08:09.277 [2024-07-11 02:31:34.339964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.341064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.342164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.343291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.277 [2024-07-11 02:31:34.344404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.277 passed 00:08:09.277 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-11 02:31:34.345591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=a2585b9aecaa2602 00:08:09.277 [2024-07-11 02:31:34.345961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:08:09.277 [2024-07-11 02:31:34.346226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:08:09.277 [2024-07-11 02:31:34.346491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.346750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.347036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.347328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.347583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a866 00:08:09.277 [2024-07-11 02:31:34.347847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=506e 00:08:09.277 [2024-07-11 02:31:34.348111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ad, Actual=1ab753ed 00:08:09.277 [2024-07-11 02:31:34.348381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ff49d5e2, Actual=ff49d5a2 00:08:09.277 [2024-07-11 02:31:34.348655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.348918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.349176] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.349437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.349706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=c286ffe1 00:08:09.277 [2024-07-11 02:31:34.349971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=8a6b41c4 00:08:09.277 [2024-07-11 02:31:34.350249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.277 [2024-07-11 02:31:34.350503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=6d31827b46168200, Actual=6d31827b46168240 00:08:09.277 [2024-07-11 02:31:34.350766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.351030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.277 [2024-07-11 02:31:34.351292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.351545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.277 [2024-07-11 02:31:34.351821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.277 [2024-07-11 02:31:34.352099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=57ba5409ca45a25d 00:08:09.277 passed 00:08:09.277 Test: dix_sec_512_md_0_error ...passed 00:08:09.277 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-11 02:31:34.352157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:09.277 passed 00:08:09.277 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:09.277 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:09.536 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:09.536 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:09.536 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:09.536 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:09.536 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:09.536 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:09.536 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 02:31:34.395878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd0c, Actual=fd4c 00:08:09.536 [2024-07-11 02:31:34.397014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=40cd, Actual=408d 00:08:09.536 [2024-07-11 02:31:34.398126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.536 [2024-07-11 02:31:34.399214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.536 [2024-07-11 02:31:34.400364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.536 [2024-07-11 02:31:34.401550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.536 [2024-07-11 02:31:34.402663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=a866 00:08:09.536 [2024-07-11 02:31:34.403765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=33ef 00:08:09.536 [2024-07-11 02:31:34.404852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ad, Actual=1ab753ed 00:08:09.536 [2024-07-11 02:31:34.405963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=3ec9e017, Actual=3ec9e057 00:08:09.536 [2024-07-11 02:31:34.407070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.408174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.409264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.537 [2024-07-11 02:31:34.410408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.537 [2024-07-11 02:31:34.411652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=c286ffe1 00:08:09.537 [2024-07-11 02:31:34.412789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=4beb7431 00:08:09.537 [2024-07-11 02:31:34.413929] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.537 [2024-07-11 02:31:34.415020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98d38de860f9065f, Actual=98d38de860f9061f 00:08:09.537 [2024-07-11 02:31:34.416120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.417199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.418414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.537 [2024-07-11 02:31:34.419510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1d 00:08:09.537 [2024-07-11 02:31:34.420633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.537 passed 00:08:09.537 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-11 02:31:34.421740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=a2585b9aecaa2602 00:08:09.537 [2024-07-11 02:31:34.422114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:08:09.537 [2024-07-11 02:31:34.422382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:08:09.537 [2024-07-11 02:31:34.422649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.422918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.423202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.423470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.423752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a866 00:08:09.537 [2024-07-11 02:31:34.424023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=506e 00:08:09.537 [2024-07-11 02:31:34.424292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ad, Actual=1ab753ed 00:08:09.537 [2024-07-11 02:31:34.424547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ff49d5e2, Actual=ff49d5a2 00:08:09.537 [2024-07-11 02:31:34.424826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.425088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.425339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.425616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.425913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=c286ffe1 00:08:09.537 [2024-07-11 02:31:34.426178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=8a6b41c4 00:08:09.537 [2024-07-11 02:31:34.426446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc2093, Actual=a576a7728ecc20d3 00:08:09.537 [2024-07-11 02:31:34.426713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=6d31827b46168200, Actual=6d31827b46168240 00:08:09.537 [2024-07-11 02:31:34.426964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.427232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:08:09.537 [2024-07-11 02:31:34.427483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.427745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:08:09.537 [2024-07-11 02:31:34.428019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=aa3db93116042cfa 00:08:09.537 [2024-07-11 02:31:34.428282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=57ba5409ca45a25d 00:08:09.537 passed 00:08:09.537 Test: set_md_interleave_iovs_test ...passed 00:08:09.537 Test: set_md_interleave_iovs_split_test ...passed 00:08:09.537 Test: dif_generate_stream_pi_16_test ...passed 00:08:09.537 Test: dif_generate_stream_test ...passed 00:08:09.537 Test: set_md_interleave_iovs_alignment_test ...[2024-07-11 02:31:34.435851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
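The Guard, App Tag, and Ref Tag failures above are expected: the DIF suite deliberately corrupts protection information and checks that lib/util/dif.c rejects it. Below is a self-contained toy of that tuple comparison, using the classic 8-byte PI layout and values taken from the trace; this is an illustration of the check's shape, not SPDK's actual code path.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy protection-information tuple in the classic 8-byte DIF layout.
 * Field widths are illustrative; the trace above also exercises wider
 * metadata formats, which is why the logged Guard values vary between
 * 16, 32, and 64 bits. */
struct pi_tuple {
        uint16_t guard;   /* CRC over the data block */
        uint16_t app_tag; /* application-defined */
        uint32_t ref_tag; /* usually seeded from the LBA */
};

static bool pi_verify(uint64_t lba, const struct pi_tuple *want,
                      const struct pi_tuple *got)
{
        if (want->guard != got->guard) {
                fprintf(stderr, "Failed to compare Guard: LBA=%ju\n", (uintmax_t)lba);
                return false;
        }
        if (want->app_tag != got->app_tag) {
                fprintf(stderr, "Failed to compare App Tag: LBA=%ju\n", (uintmax_t)lba);
                return false;
        }
        if (want->ref_tag != got->ref_tag) {
                fprintf(stderr, "Failed to compare Ref Tag: LBA=%ju\n", (uintmax_t)lba);
                return false;
        }
        return true;
}

int main(void)
{
        struct pi_tuple want = { 0xfd4c, 0x88, 0x59 };
        struct pi_tuple got  = { 0xfd4c, 0xc8, 0x59 }; /* corrupted app tag, as at LBA=89 above */

        return pi_verify(89, &want, &got) ? 0 : 1;
}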
00:08:09.537 passed 00:08:09.537 Test: dif_generate_split_test ...passed 00:08:09.537 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:09.537 Test: dif_verify_split_test ...passed 00:08:09.537 Test: dif_verify_stream_multi_segments_test ...passed 00:08:09.537 Test: update_crc32c_pi_16_test ...passed 00:08:09.537 Test: update_crc32c_test ...passed 00:08:09.537 Test: dif_update_crc32c_split_test ...passed 00:08:09.537 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:09.537 Test: get_range_with_md_test ...passed 00:08:09.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:09.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:09.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:09.537 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:09.537 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:09.537 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:09.537 Test: dif_generate_and_verify_unmap_test ...passed 00:08:09.537 00:08:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.537 suites 1 1 n/a 0 0 00:08:09.537 tests 79 79 79 0 0 00:08:09.537 asserts 3584 3584 3584 0 n/a 00:08:09.537 00:08:09.537 Elapsed time = 0.347 seconds 00:08:09.537 02:31:34 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:09.537 00:08:09.537 00:08:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.537 http://cunit.sourceforge.net/ 00:08:09.537 00:08:09.537 00:08:09.537 Suite: iov 00:08:09.537 Test: test_single_iov ...passed 00:08:09.537 Test: test_simple_iov ...passed 00:08:09.537 Test: test_complex_iov ...passed 00:08:09.537 Test: test_iovs_to_buf ...passed 00:08:09.537 Test: test_buf_to_iovs ...passed 00:08:09.537 Test: test_memset ...passed 00:08:09.537 Test: test_iov_one ...passed 00:08:09.537 Test: test_iov_xfer ...passed 00:08:09.537 00:08:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.537 suites 1 1 n/a 0 0 00:08:09.537 tests 8 8 8 0 0 00:08:09.537 asserts 156 156 156 0 n/a 00:08:09.537 00:08:09.537 Elapsed time = 0.000 seconds 00:08:09.537 02:31:34 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:09.537 00:08:09.537 00:08:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.537 http://cunit.sourceforge.net/ 00:08:09.537 00:08:09.537 00:08:09.537 Suite: math 00:08:09.537 Test: test_serial_number_arithmetic ...passed 00:08:09.537 Suite: erase 00:08:09.537 Test: test_memset_s ...passed 00:08:09.537 00:08:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.537 suites 2 2 n/a 0 0 00:08:09.537 tests 2 2 2 0 0 00:08:09.537 asserts 18 18 18 0 n/a 00:08:09.537 00:08:09.537 Elapsed time = 0.000 seconds 00:08:09.537 02:31:34 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:09.537 00:08:09.537 00:08:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.537 http://cunit.sourceforge.net/ 00:08:09.537 00:08:09.537 00:08:09.537 Suite: pipe 00:08:09.537 Test: test_create_destroy ...passed 00:08:09.537 Test: test_write_get_buffer ...passed 00:08:09.537 Test: test_write_advance ...passed 00:08:09.537 Test: test_read_get_buffer ...passed 00:08:09.537 Test: test_read_advance ...passed 00:08:09.537 Test: test_data ...passed 00:08:09.537 00:08:09.537 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:09.537 suites 1 1 n/a 0 0 00:08:09.537 tests 6 6 6 0 0 00:08:09.537 asserts 250 250 250 0 n/a 00:08:09.537 00:08:09.537 Elapsed time = 0.000 seconds 00:08:09.537 02:31:34 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:09.537 00:08:09.537 00:08:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.537 http://cunit.sourceforge.net/ 00:08:09.537 00:08:09.537 00:08:09.537 Suite: xor 00:08:09.537 Test: test_xor_gen ...passed 00:08:09.537 00:08:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.537 suites 1 1 n/a 0 0 00:08:09.537 tests 1 1 1 0 0 00:08:09.537 asserts 17 17 17 0 n/a 00:08:09.537 00:08:09.537 Elapsed time = 0.007 seconds 00:08:09.537 00:08:09.537 real 0m0.749s 00:08:09.537 user 0m0.598s 00:08:09.537 sys 0m0.156s 00:08:09.537 02:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.537 ************************************ 00:08:09.537 END TEST unittest_util 00:08:09.537 ************************************ 00:08:09.537 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.796 02:31:34 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:09.796 02:31:34 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:09.796 02:31:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.796 02:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.796 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.796 ************************************ 00:08:09.796 START TEST unittest_vhost 00:08:09.796 ************************************ 00:08:09.796 02:31:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:09.796 00:08:09.796 00:08:09.796 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.796 http://cunit.sourceforge.net/ 00:08:09.796 00:08:09.796 00:08:09.796 Suite: vhost_suite 00:08:09.796 Test: desc_to_iov_test ...[2024-07-11 02:31:34.701528] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:09.796 passed 00:08:09.796 Test: create_controller_test ...[2024-07-11 02:31:34.705982] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:09.796 [2024-07-11 02:31:34.706100] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:09.796 [2024-07-11 02:31:34.706220] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:09.797 [2024-07-11 02:31:34.706310] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:09.797 [2024-07-11 02:31:34.706378] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:09.797 [2024-07-11 02:31:34.706532] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-11 02:31:34.708062] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:09.797 passed 00:08:09.797 Test: session_find_by_vid_test ...passed 00:08:09.797 Test: remove_controller_test ...[2024-07-11 02:31:34.710651] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:09.797 passed 00:08:09.797 Test: vq_avail_ring_get_test ...passed 00:08:09.797 Test: vq_packed_ring_test ...passed 00:08:09.797 Test: vhost_blk_construct_test ...passed 00:08:09.797 00:08:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.797 suites 1 1 n/a 0 0 00:08:09.797 tests 7 7 7 0 0 00:08:09.797 asserts 145 145 145 0 n/a 00:08:09.797 00:08:09.797 Elapsed time = 0.014 seconds 00:08:09.797 00:08:09.797 real 0m0.052s 00:08:09.797 user 0m0.035s 00:08:09.797 sys 0m0.016s 00:08:09.797 02:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.797 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 ************************************ 00:08:09.797 END TEST unittest_vhost 00:08:09.797 ************************************ 00:08:09.797 02:31:34 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:09.797 02:31:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.797 02:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.797 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 ************************************ 00:08:09.797 START TEST unittest_dma 00:08:09.797 ************************************ 00:08:09.797 02:31:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:09.797 00:08:09.797 00:08:09.797 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.797 http://cunit.sourceforge.net/ 00:08:09.797 00:08:09.797 00:08:09.797 Suite: dma_suite 00:08:09.797 Test: test_dma ...[2024-07-11 02:31:34.800169] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:09.797 passed 00:08:09.797 00:08:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.797 suites 1 1 n/a 0 0 00:08:09.797 tests 1 1 1 0 0 00:08:09.797 asserts 50 50 50 0 n/a 00:08:09.797 00:08:09.797 Elapsed time = 0.001 seconds 00:08:09.797 00:08:09.797 real 0m0.030s 00:08:09.797 user 0m0.017s 00:08:09.797 sys 0m0.012s 00:08:09.797 02:31:34 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.797 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 ************************************ 00:08:09.797 END TEST unittest_dma 00:08:09.797 ************************************ 00:08:09.797 02:31:34 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:08:09.797 02:31:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.797 02:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.797 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 ************************************ 00:08:09.797 START TEST unittest_init 00:08:09.797 ************************************ 00:08:09.797 02:31:34 -- common/autotest_common.sh@1104 -- # unittest_init 00:08:09.797 02:31:34 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:09.797 00:08:09.797 00:08:09.797 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.797 http://cunit.sourceforge.net/ 00:08:09.797 00:08:09.797 00:08:09.797 Suite: subsystem_suite 00:08:09.797 Test: subsystem_sort_test_depends_on_single ...passed 00:08:09.797 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:09.797 Test: subsystem_sort_test_missing_dependency ...[2024-07-11 02:31:34.885246] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:09.797 [2024-07-11 02:31:34.885741] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:09.797 passed 00:08:09.797 00:08:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.797 suites 1 1 n/a 0 0 00:08:09.797 tests 3 3 3 0 0 00:08:09.797 asserts 20 20 20 0 n/a 00:08:09.797 00:08:09.797 Elapsed time = 0.001 seconds 00:08:10.055 00:08:10.055 real 0m0.036s 00:08:10.055 user 0m0.019s 00:08:10.055 sys 0m0.016s 00:08:10.055 02:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.055 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.055 ************************************ 00:08:10.055 END TEST unittest_init 00:08:10.055 ************************************ 00:08:10.055 02:31:34 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:08:10.055 02:31:34 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:10.055 02:31:34 -- unit/unittest.sh@290 -- # hostname 00:08:10.056 02:31:34 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:10.314 geninfo: WARNING: invalid characters removed from testname! 
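Each binary invoked above (dif_ut, iov_ut, math_ut, pipe_ut, xor_ut, vhost_ut, dma_ut, subsystem_ut) is a CUnit 2.1-3 program, which is where the repeated "Run Summary" tables come from. A minimal sketch of how such a suite is registered and run, illustrative rather than SPDK's actual harness code:

/* build with: cc cunit_sketch.c -lcunit */
#include <CUnit/Basic.h>

static void test_xor_gen(void)
{
        /* Stand-in assertion; the real test exercises the xor helper
         * in lib/util/xor.c. */
        CU_ASSERT_EQUAL(0x5a ^ 0x5a, 0);
}

int main(void)
{
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
                return CU_get_error();

        suite = CU_add_suite("xor", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_xor_gen", test_xor_gen) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* prints a Run Summary like the ones above */
        CU_cleanup_registry();
        return CU_get_error();
}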
00:08:36.869 02:32:00 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:41.051 02:32:05 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:44.337 02:32:08 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:46.872 02:32:11 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:49.405 02:32:14 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:52.686 02:32:17 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:55.218 02:32:19 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.120 02:32:21 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:57.120 02:32:21 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:57.687 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.687 Found 309 entries. 
00:08:57.687 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:57.687 Writing .css and .png files. 00:08:57.687 Generating output. 00:08:57.687 Processing file include/linux/virtio_ring.h 00:08:57.946 Processing file include/spdk/nvme.h 00:08:57.946 Processing file include/spdk/endian.h 00:08:57.946 Processing file include/spdk/util.h 00:08:57.946 Processing file include/spdk/nvmf_transport.h 00:08:57.946 Processing file include/spdk/nvme_spec.h 00:08:57.946 Processing file include/spdk/base64.h 00:08:57.946 Processing file include/spdk/thread.h 00:08:57.946 Processing file include/spdk/bdev_module.h 00:08:57.946 Processing file include/spdk/histogram_data.h 00:08:57.946 Processing file include/spdk/mmio.h 00:08:57.946 Processing file include/spdk/trace.h 00:08:58.205 Processing file include/spdk_internal/virtio.h 00:08:58.205 Processing file include/spdk_internal/sock.h 00:08:58.205 Processing file include/spdk_internal/sgl.h 00:08:58.205 Processing file include/spdk_internal/utf.h 00:08:58.205 Processing file include/spdk_internal/nvme_tcp.h 00:08:58.205 Processing file include/spdk_internal/rdma.h 00:08:58.205 Processing file lib/accel/accel_sw.c 00:08:58.205 Processing file lib/accel/accel.c 00:08:58.205 Processing file lib/accel/accel_rpc.c 00:08:58.463 Processing file lib/bdev/part.c 00:08:58.463 Processing file lib/bdev/scsi_nvme.c 00:08:58.463 Processing file lib/bdev/bdev_zone.c 00:08:58.463 Processing file lib/bdev/bdev.c 00:08:58.463 Processing file lib/bdev/bdev_rpc.c 00:08:58.721 Processing file lib/blob/blob_bs_dev.c 00:08:58.721 Processing file lib/blob/zeroes.c 00:08:58.721 Processing file lib/blob/blobstore.h 00:08:58.721 Processing file lib/blob/request.c 00:08:58.721 Processing file lib/blob/blobstore.c 00:08:58.979 Processing file lib/blobfs/tree.c 00:08:58.979 Processing file lib/blobfs/blobfs.c 00:08:58.979 Processing file lib/conf/conf.c 00:08:58.979 Processing file lib/dma/dma.c 00:08:59.237 Processing file lib/env_dpdk/threads.c 00:08:59.237 Processing file lib/env_dpdk/memory.c 00:08:59.237 Processing file lib/env_dpdk/init.c 00:08:59.237 Processing file lib/env_dpdk/pci_ioat.c 00:08:59.237 Processing file lib/env_dpdk/pci.c 00:08:59.237 Processing file lib/env_dpdk/pci_dpdk.c 00:08:59.237 Processing file lib/env_dpdk/pci_vmd.c 00:08:59.237 Processing file lib/env_dpdk/pci_virtio.c 00:08:59.237 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:59.237 Processing file lib/env_dpdk/pci_idxd.c 00:08:59.237 Processing file lib/env_dpdk/pci_event.c 00:08:59.237 Processing file lib/env_dpdk/sigbus_handler.c 00:08:59.237 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:59.237 Processing file lib/env_dpdk/env.c 00:08:59.495 Processing file lib/event/app_rpc.c 00:08:59.495 Processing file lib/event/app.c 00:08:59.495 Processing file lib/event/log_rpc.c 00:08:59.495 Processing file lib/event/scheduler_static.c 00:08:59.495 Processing file lib/event/reactor.c 00:09:00.062 Processing file lib/ftl/ftl_writer.c 00:09:00.062 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:00.062 Processing file lib/ftl/ftl_sb.c 00:09:00.062 Processing file lib/ftl/ftl_writer.h 00:09:00.062 Processing file lib/ftl/ftl_band.c 00:09:00.062 Processing file lib/ftl/ftl_l2p_cache.c 00:09:00.062 Processing file lib/ftl/ftl_nv_cache.c 00:09:00.062 Processing file lib/ftl/ftl_trace.c 00:09:00.062 Processing file lib/ftl/ftl_debug.h 00:09:00.062 Processing file lib/ftl/ftl_core.c 00:09:00.062 Processing file lib/ftl/ftl_nv_cache.h 00:09:00.062 Processing file lib/ftl/ftl_init.c 
00:09:00.062 Processing file lib/ftl/ftl_l2p_flat.c 00:09:00.062 Processing file lib/ftl/ftl_l2p.c 00:09:00.062 Processing file lib/ftl/ftl_core.h 00:09:00.062 Processing file lib/ftl/ftl_rq.c 00:09:00.062 Processing file lib/ftl/ftl_band_ops.c 00:09:00.062 Processing file lib/ftl/ftl_io.h 00:09:00.062 Processing file lib/ftl/ftl_p2l.c 00:09:00.062 Processing file lib/ftl/ftl_layout.c 00:09:00.062 Processing file lib/ftl/ftl_reloc.c 00:09:00.062 Processing file lib/ftl/ftl_debug.c 00:09:00.062 Processing file lib/ftl/ftl_io.c 00:09:00.062 Processing file lib/ftl/ftl_band.h 00:09:00.062 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:00.062 Processing file lib/ftl/base/ftl_base_dev.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:00.320 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:00.321 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:00.321 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:00.321 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:00.321 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:00.321 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:00.579 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:00.579 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:00.579 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:00.579 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:00.579 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:00.579 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:00.837 Processing file lib/ftl/utils/ftl_df.h 00:09:00.837 Processing file lib/ftl/utils/ftl_property.c 00:09:00.837 Processing file lib/ftl/utils/ftl_conf.c 00:09:00.837 Processing file lib/ftl/utils/ftl_md.c 00:09:00.837 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:00.837 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:00.837 Processing file lib/ftl/utils/ftl_property.h 00:09:00.837 Processing file lib/ftl/utils/ftl_mempool.c 00:09:00.837 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:00.837 Processing file lib/idxd/idxd.c 00:09:00.837 Processing file lib/idxd/idxd_internal.h 00:09:00.837 Processing file lib/idxd/idxd_user.c 00:09:01.096 Processing file lib/init/subsystem.c 00:09:01.096 Processing file lib/init/rpc.c 00:09:01.096 Processing file lib/init/json_config.c 00:09:01.096 Processing file lib/init/subsystem_rpc.c 00:09:01.096 Processing file lib/ioat/ioat_internal.h 00:09:01.096 Processing file lib/ioat/ioat.c 00:09:01.661 Processing file lib/iscsi/iscsi_subsystem.c 00:09:01.661 Processing file lib/iscsi/conn.c 00:09:01.661 Processing file lib/iscsi/tgt_node.c 00:09:01.661 Processing file lib/iscsi/init_grp.c 00:09:01.661 Processing file lib/iscsi/iscsi.h 00:09:01.661 Processing file lib/iscsi/iscsi.c 00:09:01.661 Processing file lib/iscsi/portal_grp.c 00:09:01.661 Processing file lib/iscsi/param.c 00:09:01.661 Processing file lib/iscsi/task.h 00:09:01.661 Processing file lib/iscsi/task.c 00:09:01.661 Processing file lib/iscsi/iscsi_rpc.c 00:09:01.661 Processing file lib/iscsi/md5.c 00:09:01.661 Processing file lib/json/json_util.c 00:09:01.661 Processing file lib/json/json_write.c 00:09:01.661 Processing file lib/json/json_parse.c 00:09:01.661 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:09:01.661 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:01.661 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:01.661 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:01.661 Processing file lib/log/log_deprecated.c 00:09:01.661 Processing file lib/log/log_flags.c 00:09:01.661 Processing file lib/log/log.c 00:09:01.919 Processing file lib/lvol/lvol.c 00:09:01.919 Processing file lib/nbd/nbd_rpc.c 00:09:01.919 Processing file lib/nbd/nbd.c 00:09:01.919 Processing file lib/notify/notify.c 00:09:01.919 Processing file lib/notify/notify_rpc.c 00:09:02.853 Processing file lib/nvme/nvme_pcie_internal.h 00:09:02.853 Processing file lib/nvme/nvme_vfio_user.c 00:09:02.853 Processing file lib/nvme/nvme_zns.c 00:09:02.853 Processing file lib/nvme/nvme_pcie_common.c 00:09:02.853 Processing file lib/nvme/nvme_pcie.c 00:09:02.853 Processing file lib/nvme/nvme_io_msg.c 00:09:02.853 Processing file lib/nvme/nvme_fabric.c 00:09:02.853 Processing file lib/nvme/nvme_internal.h 00:09:02.853 Processing file lib/nvme/nvme_ns_cmd.c 00:09:02.853 Processing file lib/nvme/nvme_opal.c 00:09:02.853 Processing file lib/nvme/nvme_ns.c 00:09:02.853 Processing file lib/nvme/nvme_discovery.c 00:09:02.853 Processing file lib/nvme/nvme_poll_group.c 00:09:02.853 Processing file lib/nvme/nvme_tcp.c 00:09:02.853 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:02.853 Processing file lib/nvme/nvme_quirks.c 00:09:02.853 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:02.853 Processing file lib/nvme/nvme_ctrlr.c 00:09:02.853 Processing file lib/nvme/nvme_qpair.c 00:09:02.853 Processing file lib/nvme/nvme_rdma.c 00:09:02.853 Processing file lib/nvme/nvme_transport.c 00:09:02.853 Processing file lib/nvme/nvme.c 00:09:02.853 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:02.853 Processing file lib/nvme/nvme_cuse.c 00:09:03.423 Processing file lib/nvmf/tcp.c 00:09:03.423 Processing file lib/nvmf/ctrlr.c 00:09:03.423 Processing file lib/nvmf/transport.c 00:09:03.423 Processing file lib/nvmf/nvmf.c 00:09:03.423 Processing file lib/nvmf/ctrlr_bdev.c 00:09:03.423 Processing file lib/nvmf/subsystem.c 00:09:03.423 Processing file lib/nvmf/nvmf_rpc.c 00:09:03.423 Processing file lib/nvmf/rdma.c 00:09:03.423 Processing file lib/nvmf/nvmf_internal.h 00:09:03.423 Processing file lib/nvmf/ctrlr_discovery.c 00:09:03.423 Processing file lib/rdma/common.c 00:09:03.423 Processing file lib/rdma/rdma_verbs.c 00:09:03.423 Processing file lib/rpc/rpc.c 00:09:03.682 Processing file lib/scsi/scsi_rpc.c 00:09:03.682 Processing file lib/scsi/lun.c 00:09:03.682 Processing file lib/scsi/task.c 00:09:03.682 Processing file lib/scsi/dev.c 00:09:03.682 Processing file lib/scsi/scsi_pr.c 00:09:03.682 Processing file lib/scsi/scsi.c 00:09:03.682 Processing file lib/scsi/scsi_bdev.c 00:09:03.682 Processing file lib/scsi/port.c 00:09:03.940 Processing file lib/sock/sock.c 00:09:03.940 Processing file lib/sock/sock_rpc.c 00:09:03.940 Processing file lib/thread/thread.c 00:09:03.940 Processing file lib/thread/iobuf.c 00:09:04.199 Processing file lib/trace/trace_rpc.c 00:09:04.199 Processing file lib/trace/trace.c 00:09:04.199 Processing file lib/trace/trace_flags.c 00:09:04.199 Processing file lib/trace_parser/trace.cpp 00:09:04.199 Processing file lib/ut/ut.c 00:09:04.457 Processing file lib/ut_mock/mock.c 00:09:04.715 Processing file lib/util/fd_group.c 00:09:04.715 Processing file lib/util/string.c 00:09:04.715 Processing file lib/util/iov.c 00:09:04.715 Processing file lib/util/dif.c 00:09:04.715 
Processing file lib/util/crc32.c 00:09:04.715 Processing file lib/util/pipe.c 00:09:04.715 Processing file lib/util/xor.c 00:09:04.715 Processing file lib/util/bit_array.c 00:09:04.715 Processing file lib/util/uuid.c 00:09:04.715 Processing file lib/util/fd.c 00:09:04.715 Processing file lib/util/zipf.c 00:09:04.715 Processing file lib/util/hexlify.c 00:09:04.715 Processing file lib/util/file.c 00:09:04.715 Processing file lib/util/math.c 00:09:04.715 Processing file lib/util/cpuset.c 00:09:04.715 Processing file lib/util/crc32c.c 00:09:04.715 Processing file lib/util/strerror_tls.c 00:09:04.715 Processing file lib/util/crc64.c 00:09:04.715 Processing file lib/util/crc16.c 00:09:04.715 Processing file lib/util/base64.c 00:09:04.715 Processing file lib/util/crc32_ieee.c 00:09:04.974 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:04.974 Processing file lib/vfio_user/host/vfio_user.c 00:09:04.974 Processing file lib/vhost/vhost_rpc.c 00:09:04.974 Processing file lib/vhost/vhost_blk.c 00:09:04.974 Processing file lib/vhost/rte_vhost_user.c 00:09:04.974 Processing file lib/vhost/vhost.c 00:09:04.974 Processing file lib/vhost/vhost_internal.h 00:09:04.974 Processing file lib/vhost/vhost_scsi.c 00:09:05.232 Processing file lib/virtio/virtio_vhost_user.c 00:09:05.232 Processing file lib/virtio/virtio.c 00:09:05.232 Processing file lib/virtio/virtio_pci.c 00:09:05.232 Processing file lib/virtio/virtio_vfio_user.c 00:09:05.232 Processing file lib/vmd/vmd.c 00:09:05.232 Processing file lib/vmd/led.c 00:09:05.489 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:05.490 Processing file module/accel/dsa/accel_dsa.c 00:09:05.490 Processing file module/accel/error/accel_error_rpc.c 00:09:05.490 Processing file module/accel/error/accel_error.c 00:09:05.490 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:05.490 Processing file module/accel/iaa/accel_iaa.c 00:09:05.747 Processing file module/accel/ioat/accel_ioat.c 00:09:05.747 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:05.747 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:05.747 Processing file module/bdev/aio/bdev_aio.c 00:09:05.747 Processing file module/bdev/delay/vbdev_delay.c 00:09:05.747 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:06.055 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:06.055 Processing file module/bdev/error/vbdev_error.c 00:09:06.055 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:06.055 Processing file module/bdev/ftl/bdev_ftl.c 00:09:06.055 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:06.055 Processing file module/bdev/gpt/gpt.h 00:09:06.055 Processing file module/bdev/gpt/gpt.c 00:09:06.333 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:06.333 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:06.333 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:06.333 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:06.333 Processing file module/bdev/malloc/bdev_malloc.c 00:09:06.333 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:06.591 Processing file module/bdev/null/bdev_null_rpc.c 00:09:06.591 Processing file module/bdev/null/bdev_null.c 00:09:06.850 Processing file module/bdev/nvme/vbdev_opal.c 00:09:06.850 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:06.850 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:06.850 Processing file module/bdev/nvme/nvme_rpc.c 00:09:06.850 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:06.850 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:06.850 Processing file 
module/bdev/nvme/bdev_nvme.c 00:09:06.850 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:06.850 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:07.417 Processing file module/bdev/raid/raid0.c 00:09:07.417 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:07.417 Processing file module/bdev/raid/raid5f.c 00:09:07.417 Processing file module/bdev/raid/bdev_raid.c 00:09:07.417 Processing file module/bdev/raid/bdev_raid.h 00:09:07.417 Processing file module/bdev/raid/concat.c 00:09:07.417 Processing file module/bdev/raid/raid1.c 00:09:07.417 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:07.417 Processing file module/bdev/split/vbdev_split.c 00:09:07.417 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:07.417 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:07.417 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:07.417 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:07.675 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:07.675 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:07.675 Processing file module/blob/bdev/blob_bdev.c 00:09:07.675 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:07.675 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:07.675 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:07.931 Processing file module/event/subsystems/accel/accel.c 00:09:07.931 Processing file module/event/subsystems/bdev/bdev.c 00:09:07.931 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:07.931 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:07.931 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:08.189 Processing file module/event/subsystems/nbd/nbd.c 00:09:08.189 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:08.189 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:08.189 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:08.189 Processing file module/event/subsystems/scsi/scsi.c 00:09:08.448 Processing file module/event/subsystems/sock/sock.c 00:09:08.448 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:08.448 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:08.448 Processing file module/event/subsystems/vmd/vmd.c 00:09:08.448 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:08.708 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:08.708 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:08.708 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:08.708 Processing file module/sock/sock_kernel.h 00:09:08.966 Processing file module/sock/posix/posix.c 00:09:08.966 Writing directory view page. 
00:09:08.966 Overall coverage rate: 00:09:08.966 lines......: 39.1% (39263 of 100392 lines) 00:09:08.966 functions..: 42.8% (3587 of 8384 functions) 00:09:08.966 00:09:08.966 00:09:08.966 ===================== 00:09:08.966 All unit tests passed 00:09:08.966 ===================== 00:09:08.966 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:08.966 02:32:33 -- unit/unittest.sh@302 -- # set +x 00:09:08.966 00:09:08.966 00:09:08.966 ************************************ 00:09:08.966 END TEST unittest 00:09:08.966 ************************************ 00:09:08.966 00:09:08.966 real 3m0.707s 00:09:08.966 user 2m35.918s 00:09:08.966 sys 0m13.510s 00:09:08.966 02:32:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.966 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:09:08.966 02:32:33 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:09:08.966 02:32:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:08.966 02:32:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:08.966 02:32:33 -- spdk/autotest.sh@173 -- # timing_enter lib 00:09:08.966 02:32:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:08.966 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:09:08.966 02:32:33 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:08.966 02:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.966 02:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.966 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:09:08.966 ************************************ 00:09:08.966 START TEST env 00:09:08.966 ************************************ 00:09:08.966 02:32:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:08.966 * Looking for test storage... 
00:09:08.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:08.966 02:32:33 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:08.966 02:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.966 02:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.966 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:09:08.966 ************************************ 00:09:08.966 START TEST env_memory 00:09:08.966 ************************************ 00:09:08.966 02:32:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:08.966 00:09:08.966 00:09:08.966 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.966 http://cunit.sourceforge.net/ 00:09:08.966 00:09:08.966 00:09:08.966 Suite: memory 00:09:08.966 Test: alloc and free memory map ...[2024-07-11 02:32:34.033902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:08.966 passed 00:09:09.224 Test: mem map translation ...[2024-07-11 02:32:34.067376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:09.224 [2024-07-11 02:32:34.067603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:09.224 [2024-07-11 02:32:34.067738] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:09.224 [2024-07-11 02:32:34.067849] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:09.224 passed 00:09:09.224 Test: mem map registration ...[2024-07-11 02:32:34.127881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:09.224 [2024-07-11 02:32:34.128090] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:09.224 passed 00:09:09.224 Test: mem map adjacent registrations ...passed 00:09:09.224 00:09:09.224 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.224 suites 1 1 n/a 0 0 00:09:09.224 tests 4 4 4 0 0 00:09:09.224 asserts 152 152 152 0 n/a 00:09:09.224 00:09:09.224 Elapsed time = 0.209 seconds 00:09:09.224 00:09:09.224 real 0m0.240s 00:09:09.224 user 0m0.231s 00:09:09.224 sys 0m0.008s 00:09:09.224 02:32:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.224 02:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:09.224 ************************************ 00:09:09.224 END TEST env_memory 00:09:09.224 ************************************ 00:09:09.224 02:32:34 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:09.224 02:32:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.224 02:32:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.224 02:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:09.224 ************************************ 00:09:09.224 START TEST env_vtophys 00:09:09.224 ************************************ 00:09:09.224 02:32:34 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:09.224 EAL: lib.eal log level changed from notice to debug 00:09:09.224 EAL: Detected lcore 0 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 1 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 2 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 3 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 4 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 5 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 6 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 7 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 8 as core 0 on socket 0 00:09:09.224 EAL: Detected lcore 9 as core 0 on socket 0 00:09:09.483 EAL: Maximum logical cores by configuration: 128 00:09:09.483 EAL: Detected CPU lcores: 10 00:09:09.483 EAL: Detected NUMA nodes: 1 00:09:09.483 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:09.483 EAL: Checking presence of .so 'librte_eal.so.23' 00:09:09.483 EAL: Checking presence of .so 'librte_eal.so' 00:09:09.483 EAL: Detected static linkage of DPDK 00:09:09.483 EAL: No shared files mode enabled, IPC will be disabled 00:09:09.483 EAL: Selected IOVA mode 'PA' 00:09:09.483 EAL: Probing VFIO support... 00:09:09.483 EAL: IOMMU type 1 (Type 1) is supported 00:09:09.483 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:09.483 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:09.483 EAL: VFIO support initialized 00:09:09.483 EAL: Ask a virtual area of 0x2e000 bytes 00:09:09.483 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:09.483 EAL: Setting up physically contiguous memory... 00:09:09.483 EAL: Setting maximum number of open files to 1048576 00:09:09.483 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:09.483 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:09.483 EAL: Ask a virtual area of 0x61000 bytes 00:09:09.483 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:09.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:09.483 EAL: Ask a virtual area of 0x400000000 bytes 00:09:09.483 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:09.483 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:09.483 EAL: Ask a virtual area of 0x61000 bytes 00:09:09.483 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:09.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:09.483 EAL: Ask a virtual area of 0x400000000 bytes 00:09:09.483 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:09.483 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:09.483 EAL: Ask a virtual area of 0x61000 bytes 00:09:09.483 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:09.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:09.483 EAL: Ask a virtual area of 0x400000000 bytes 00:09:09.483 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:09.483 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:09.483 EAL: Ask a virtual area of 0x61000 bytes 00:09:09.483 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:09.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:09.483 EAL: Ask a virtual area of 0x400000000 bytes 00:09:09.483 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:09.483 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:09.483 EAL: Hugepages will be freed exactly as allocated. 
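The EAL initialization above reserves fixed virtual areas and memseg lists up front, so address translation later reduces to a range lookup. A self-contained toy of that virtual-to-physical lookup follows; the virtual addresses mirror the trace, the physical bases are made up, and SPDK's real path consults its registered memory maps rather than a linear scan like this:

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct region { uint64_t va; uint64_t pa; uint64_t len; };

/* Virtual bases taken from the memseg lists above; physical bases are
 * hypothetical. */
static const struct region regions[] = {
        { 0x200000200000ULL, 0x40000000ULL, 0x400000000ULL },
        { 0x200400400000ULL, 0x80000000ULL, 0x400000000ULL },
};

static uint64_t toy_vtophys(uint64_t va)
{
        for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
                if (va >= regions[i].va && va - regions[i].va < regions[i].len)
                        return regions[i].pa + (va - regions[i].va);
        }
        return UINT64_MAX; /* no translation; SPDK signals this with a sentinel too */
}

int main(void)
{
        printf("0x%" PRIx64 "\n", toy_vtophys(0x200000201000ULL));
        return 0;
}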
00:09:09.483 EAL: No shared files mode enabled, IPC is disabled 00:09:09.483 EAL: No shared files mode enabled, IPC is disabled 00:09:09.483 EAL: TSC frequency is ~2200000 KHz 00:09:09.483 EAL: Main lcore 0 is ready (tid=7fb7daa66a40;cpuset=[0]) 00:09:09.483 EAL: Trying to obtain current memory policy. 00:09:09.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.483 EAL: Restoring previous memory policy: 0 00:09:09.483 EAL: request: mp_malloc_sync 00:09:09.483 EAL: No shared files mode enabled, IPC is disabled 00:09:09.483 EAL: Heap on socket 0 was expanded by 2MB 00:09:09.483 EAL: No shared files mode enabled, IPC is disabled 00:09:09.483 EAL: Mem event callback 'spdk:(nil)' registered 00:09:09.483 00:09:09.483 00:09:09.483 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.483 http://cunit.sourceforge.net/ 00:09:09.483 00:09:09.483 00:09:09.483 Suite: components_suite 00:09:10.049 Test: vtophys_malloc_test ...passed 00:09:10.049 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:10.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.049 EAL: Restoring previous memory policy: 0 00:09:10.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 4MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 4MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 6MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 6MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 10MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 10MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 18MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 18MB 00:09:10.050 EAL: Trying to obtain current memory policy. 
00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 34MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 34MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 66MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 66MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 130MB 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was shrunk by 130MB 00:09:10.050 EAL: Trying to obtain current memory policy. 00:09:10.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.050 EAL: Restoring previous memory policy: 0 00:09:10.050 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.050 EAL: request: mp_malloc_sync 00:09:10.050 EAL: No shared files mode enabled, IPC is disabled 00:09:10.050 EAL: Heap on socket 0 was expanded by 258MB 00:09:10.309 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.309 EAL: request: mp_malloc_sync 00:09:10.309 EAL: No shared files mode enabled, IPC is disabled 00:09:10.309 EAL: Heap on socket 0 was shrunk by 258MB 00:09:10.309 EAL: Trying to obtain current memory policy. 00:09:10.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.309 EAL: Restoring previous memory policy: 0 00:09:10.309 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.309 EAL: request: mp_malloc_sync 00:09:10.309 EAL: No shared files mode enabled, IPC is disabled 00:09:10.309 EAL: Heap on socket 0 was expanded by 514MB 00:09:10.567 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.567 EAL: request: mp_malloc_sync 00:09:10.567 EAL: No shared files mode enabled, IPC is disabled 00:09:10.567 EAL: Heap on socket 0 was shrunk by 514MB 00:09:10.567 EAL: Trying to obtain current memory policy. 
00:09:10.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.826 EAL: Restoring previous memory policy: 0 00:09:10.826 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.826 EAL: request: mp_malloc_sync 00:09:10.826 EAL: No shared files mode enabled, IPC is disabled 00:09:10.826 EAL: Heap on socket 0 was expanded by 1026MB 00:09:11.084 EAL: Calling mem event callback 'spdk:(nil)' 00:09:11.343 EAL: request: mp_malloc_sync 00:09:11.343 EAL: No shared files mode enabled, IPC is disabled 00:09:11.343 passedEAL: Heap on socket 0 was shrunk by 1026MB 00:09:11.343 00:09:11.343 00:09:11.343 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.343 suites 1 1 n/a 0 0 00:09:11.343 tests 2 2 2 0 0 00:09:11.343 asserts 6466 6466 6466 0 n/a 00:09:11.343 00:09:11.343 Elapsed time = 1.694 seconds 00:09:11.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:11.343 EAL: request: mp_malloc_sync 00:09:11.343 EAL: No shared files mode enabled, IPC is disabled 00:09:11.343 EAL: Heap on socket 0 was shrunk by 2MB 00:09:11.343 EAL: No shared files mode enabled, IPC is disabled 00:09:11.343 EAL: No shared files mode enabled, IPC is disabled 00:09:11.343 EAL: No shared files mode enabled, IPC is disabled 00:09:11.343 ************************************ 00:09:11.343 END TEST env_vtophys 00:09:11.343 ************************************ 00:09:11.343 00:09:11.343 real 0m1.977s 00:09:11.343 user 0m0.952s 00:09:11.343 sys 0m0.856s 00:09:11.343 02:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.343 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 02:32:36 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:11.343 02:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.343 02:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.343 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 ************************************ 00:09:11.343 START TEST env_pci 00:09:11.343 ************************************ 00:09:11.343 02:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:11.343 00:09:11.343 00:09:11.343 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.343 http://cunit.sourceforge.net/ 00:09:11.343 00:09:11.343 00:09:11.343 Suite: pci 00:09:11.343 Test: pci_hook ...[2024-07-11 02:32:36.328308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 116314 has claimed it 00:09:11.343 EAL: Cannot find device (10000:00:01.0) 00:09:11.343 EAL: Failed to attach device on primary process 00:09:11.343 passed 00:09:11.343 00:09:11.343 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.343 suites 1 1 n/a 0 0 00:09:11.343 tests 1 1 1 0 0 00:09:11.343 asserts 25 25 25 0 n/a 00:09:11.343 00:09:11.343 Elapsed time = 0.006 seconds 00:09:11.343 ************************************ 00:09:11.343 END TEST env_pci 00:09:11.343 ************************************ 00:09:11.343 00:09:11.343 real 0m0.066s 00:09:11.343 user 0m0.033s 00:09:11.343 sys 0m0.033s 00:09:11.343 02:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.343 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 02:32:36 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:11.343 02:32:36 -- env/env.sh@15 -- # uname 00:09:11.343 02:32:36 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:11.343 02:32:36 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:11.343 02:32:36 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:11.343 02:32:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:11.343 02:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.343 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 ************************************ 00:09:11.343 START TEST env_dpdk_post_init 00:09:11.343 ************************************ 00:09:11.343 02:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:11.601 EAL: Detected CPU lcores: 10 00:09:11.601 EAL: Detected NUMA nodes: 1 00:09:11.601 EAL: Detected static linkage of DPDK 00:09:11.601 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:11.601 EAL: Selected IOVA mode 'PA' 00:09:11.601 EAL: VFIO support initialized 00:09:11.601 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:11.601 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:09:11.601 Starting DPDK initialization... 00:09:11.601 Starting SPDK post initialization... 00:09:11.601 SPDK NVMe probe 00:09:11.601 Attaching to 0000:00:06.0 00:09:11.601 Attached to 0000:00:06.0 00:09:11.601 Cleaning up... 00:09:11.601 00:09:11.601 real 0m0.227s 00:09:11.601 user 0m0.062s 00:09:11.601 sys 0m0.066s 00:09:11.601 02:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.601 ************************************ 00:09:11.601 END TEST env_dpdk_post_init 00:09:11.601 ************************************ 00:09:11.601 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.859 02:32:36 -- env/env.sh@26 -- # uname 00:09:11.860 02:32:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:11.860 02:32:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:11.860 02:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.860 02:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.860 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 ************************************ 00:09:11.860 START TEST env_mem_callbacks 00:09:11.860 ************************************ 00:09:11.860 02:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:11.860 EAL: Detected CPU lcores: 10 00:09:11.860 EAL: Detected NUMA nodes: 1 00:09:11.860 EAL: Detected static linkage of DPDK 00:09:11.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:11.860 EAL: Selected IOVA mode 'PA' 00:09:11.860 EAL: VFIO support initialized 00:09:11.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:11.860 00:09:11.860 00:09:11.860 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.860 http://cunit.sourceforge.net/ 00:09:11.860 00:09:11.860 00:09:11.860 Suite: memory 00:09:11.860 Test: test ... 
00:09:11.860 register 0x200000200000 2097152 00:09:11.860 malloc 3145728 00:09:11.860 register 0x200000400000 4194304 00:09:11.860 buf 0x200000500000 len 3145728 PASSED 00:09:11.860 malloc 64 00:09:11.860 buf 0x2000004fff40 len 64 PASSED 00:09:11.860 malloc 4194304 00:09:11.860 register 0x200000800000 6291456 00:09:11.860 buf 0x200000a00000 len 4194304 PASSED 00:09:11.860 free 0x200000500000 3145728 00:09:11.860 free 0x2000004fff40 64 00:09:11.860 unregister 0x200000400000 4194304 PASSED 00:09:11.860 free 0x200000a00000 4194304 00:09:11.860 unregister 0x200000800000 6291456 PASSED 00:09:11.860 malloc 8388608 00:09:11.860 register 0x200000400000 10485760 00:09:11.860 buf 0x200000600000 len 8388608 PASSED 00:09:11.860 free 0x200000600000 8388608 00:09:11.860 unregister 0x200000400000 10485760 PASSED 00:09:11.860 passed 00:09:11.860 00:09:11.860 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.860 suites 1 1 n/a 0 0 00:09:11.860 tests 1 1 1 0 0 00:09:11.860 asserts 15 15 15 0 n/a 00:09:11.860 00:09:11.860 Elapsed time = 0.007 seconds 00:09:11.860 00:09:11.860 real 0m0.190s 00:09:11.860 user 0m0.037s 00:09:11.860 sys 0m0.053s 00:09:11.860 02:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.860 ************************************ 00:09:11.860 END TEST env_mem_callbacks 00:09:11.860 ************************************ 00:09:11.860 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 00:09:11.860 real 0m3.042s 00:09:11.860 user 0m1.509s 00:09:11.860 sys 0m1.147s 00:09:11.860 02:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.860 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:11.860 ************************************ 00:09:11.860 END TEST env 00:09:11.860 ************************************ 00:09:12.118 02:32:36 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:12.118 02:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.118 02:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.118 02:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:12.118 ************************************ 00:09:12.118 START TEST rpc 00:09:12.118 ************************************ 00:09:12.118 02:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:12.118 * Looking for test storage... 00:09:12.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:12.118 02:32:37 -- rpc/rpc.sh@65 -- # spdk_pid=116437 00:09:12.118 02:32:37 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:12.118 02:32:37 -- rpc/rpc.sh@67 -- # waitforlisten 116437 00:09:12.118 02:32:37 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:12.118 02:32:37 -- common/autotest_common.sh@819 -- # '[' -z 116437 ']' 00:09:12.118 02:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.118 02:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:12.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.118 02:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
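The "Waiting for process..." gate above is implemented by waitforlisten (invoked a few lines earlier as `waitforlisten 116437`), which simply polls the target's RPC socket until a trivial RPC answers. A minimal sketch of that loop, assuming spdk_tgt was just launched in the background as in this run and that $rootdir points at the SPDK checkout; the real helper in test/common/autotest_common.sh additionally enforces a retry budget:

    # Minimal sketch of the waitforlisten idea; rpc_get_methods is used only
    # because it is the cheapest RPC to probe with.
    spdk_pid=$!
    rpc_addr=/var/tmp/spdk.sock
    while ! "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_pid" || exit 1   # stop waiting if the target died during startup
        sleep 0.5
    done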
00:09:12.118 02:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:12.118 02:32:37 -- common/autotest_common.sh@10 -- # set +x 00:09:12.118 [2024-07-11 02:32:37.156381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:12.118 [2024-07-11 02:32:37.156654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116437 ] 00:09:12.377 [2024-07-11 02:32:37.299737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.377 [2024-07-11 02:32:37.363635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.377 [2024-07-11 02:32:37.363902] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:12.377 [2024-07-11 02:32:37.363937] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 116437' to capture a snapshot of events at runtime. 00:09:12.377 [2024-07-11 02:32:37.363956] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid116437 for offline analysis/debug. 00:09:12.377 [2024-07-11 02:32:37.364024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.313 02:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.313 02:32:38 -- common/autotest_common.sh@852 -- # return 0 00:09:13.313 02:32:38 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:13.313 02:32:38 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:13.313 02:32:38 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:13.313 02:32:38 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:13.313 02:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.313 02:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.313 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.313 ************************************ 00:09:13.313 START TEST rpc_integrity 00:09:13.313 ************************************ 00:09:13.313 02:32:38 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:13.313 02:32:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:13.313 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.313 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.313 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.313 02:32:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:13.313 02:32:38 -- rpc/rpc.sh@13 -- # jq length 00:09:13.313 02:32:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:13.313 02:32:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:13.313 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.313 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.313 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.313 02:32:38 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:13.313 02:32:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:13.313 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.313 02:32:38 -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.313 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.313 02:32:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:13.313 { 00:09:13.313 "name": "Malloc0", 00:09:13.313 "aliases": [ 00:09:13.313 "47a3de09-e8ed-430d-be7e-5ee891e06e30" 00:09:13.313 ], 00:09:13.313 "product_name": "Malloc disk", 00:09:13.313 "block_size": 512, 00:09:13.313 "num_blocks": 16384, 00:09:13.313 "uuid": "47a3de09-e8ed-430d-be7e-5ee891e06e30", 00:09:13.313 "assigned_rate_limits": { 00:09:13.313 "rw_ios_per_sec": 0, 00:09:13.313 "rw_mbytes_per_sec": 0, 00:09:13.313 "r_mbytes_per_sec": 0, 00:09:13.313 "w_mbytes_per_sec": 0 00:09:13.313 }, 00:09:13.313 "claimed": false, 00:09:13.313 "zoned": false, 00:09:13.313 "supported_io_types": { 00:09:13.313 "read": true, 00:09:13.313 "write": true, 00:09:13.313 "unmap": true, 00:09:13.313 "write_zeroes": true, 00:09:13.313 "flush": true, 00:09:13.313 "reset": true, 00:09:13.313 "compare": false, 00:09:13.313 "compare_and_write": false, 00:09:13.313 "abort": true, 00:09:13.313 "nvme_admin": false, 00:09:13.313 "nvme_io": false 00:09:13.313 }, 00:09:13.313 "memory_domains": [ 00:09:13.313 { 00:09:13.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.313 "dma_device_type": 2 00:09:13.313 } 00:09:13.313 ], 00:09:13.313 "driver_specific": {} 00:09:13.313 } 00:09:13.313 ]' 00:09:13.313 02:32:38 -- rpc/rpc.sh@17 -- # jq length 00:09:13.313 02:32:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:13.313 02:32:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:13.313 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.313 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.314 [2024-07-11 02:32:38.261521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:13.314 [2024-07-11 02:32:38.261696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.314 [2024-07-11 02:32:38.261745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006380 00:09:13.314 [2024-07-11 02:32:38.261774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.314 [2024-07-11 02:32:38.264276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.314 [2024-07-11 02:32:38.264489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:13.314 Passthru0 00:09:13.314 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.314 02:32:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:13.314 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.314 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.314 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.314 02:32:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:13.314 { 00:09:13.314 "name": "Malloc0", 00:09:13.314 "aliases": [ 00:09:13.314 "47a3de09-e8ed-430d-be7e-5ee891e06e30" 00:09:13.314 ], 00:09:13.314 "product_name": "Malloc disk", 00:09:13.314 "block_size": 512, 00:09:13.314 "num_blocks": 16384, 00:09:13.314 "uuid": "47a3de09-e8ed-430d-be7e-5ee891e06e30", 00:09:13.314 "assigned_rate_limits": { 00:09:13.314 "rw_ios_per_sec": 0, 00:09:13.314 "rw_mbytes_per_sec": 0, 00:09:13.314 "r_mbytes_per_sec": 0, 00:09:13.314 "w_mbytes_per_sec": 0 00:09:13.314 }, 00:09:13.314 "claimed": true, 00:09:13.314 "claim_type": "exclusive_write", 00:09:13.314 "zoned": false, 00:09:13.314 "supported_io_types": { 00:09:13.314 "read": true, 
00:09:13.314 "write": true, 00:09:13.314 "unmap": true, 00:09:13.314 "write_zeroes": true, 00:09:13.314 "flush": true, 00:09:13.314 "reset": true, 00:09:13.314 "compare": false, 00:09:13.314 "compare_and_write": false, 00:09:13.314 "abort": true, 00:09:13.314 "nvme_admin": false, 00:09:13.314 "nvme_io": false 00:09:13.314 }, 00:09:13.314 "memory_domains": [ 00:09:13.314 { 00:09:13.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.314 "dma_device_type": 2 00:09:13.314 } 00:09:13.314 ], 00:09:13.314 "driver_specific": {} 00:09:13.314 }, 00:09:13.314 { 00:09:13.314 "name": "Passthru0", 00:09:13.314 "aliases": [ 00:09:13.314 "5ddbc2bc-0c15-53bd-a7f1-322f74fa8a02" 00:09:13.314 ], 00:09:13.314 "product_name": "passthru", 00:09:13.314 "block_size": 512, 00:09:13.314 "num_blocks": 16384, 00:09:13.314 "uuid": "5ddbc2bc-0c15-53bd-a7f1-322f74fa8a02", 00:09:13.314 "assigned_rate_limits": { 00:09:13.314 "rw_ios_per_sec": 0, 00:09:13.314 "rw_mbytes_per_sec": 0, 00:09:13.314 "r_mbytes_per_sec": 0, 00:09:13.314 "w_mbytes_per_sec": 0 00:09:13.314 }, 00:09:13.314 "claimed": false, 00:09:13.314 "zoned": false, 00:09:13.314 "supported_io_types": { 00:09:13.314 "read": true, 00:09:13.314 "write": true, 00:09:13.314 "unmap": true, 00:09:13.314 "write_zeroes": true, 00:09:13.314 "flush": true, 00:09:13.314 "reset": true, 00:09:13.314 "compare": false, 00:09:13.314 "compare_and_write": false, 00:09:13.314 "abort": true, 00:09:13.314 "nvme_admin": false, 00:09:13.314 "nvme_io": false 00:09:13.314 }, 00:09:13.314 "memory_domains": [ 00:09:13.314 { 00:09:13.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.314 "dma_device_type": 2 00:09:13.314 } 00:09:13.314 ], 00:09:13.314 "driver_specific": { 00:09:13.314 "passthru": { 00:09:13.314 "name": "Passthru0", 00:09:13.314 "base_bdev_name": "Malloc0" 00:09:13.314 } 00:09:13.314 } 00:09:13.314 } 00:09:13.314 ]' 00:09:13.314 02:32:38 -- rpc/rpc.sh@21 -- # jq length 00:09:13.314 02:32:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:13.314 02:32:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:13.314 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.314 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.314 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.314 02:32:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:13.314 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.314 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.314 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.314 02:32:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:13.314 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.314 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.314 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.314 02:32:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:13.314 02:32:38 -- rpc/rpc.sh@26 -- # jq length 00:09:13.573 02:32:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:13.573 ************************************ 00:09:13.573 END TEST rpc_integrity 00:09:13.573 ************************************ 00:09:13.573 00:09:13.573 real 0m0.309s 00:09:13.573 user 0m0.242s 00:09:13.573 sys 0m0.014s 00:09:13.573 02:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:13.573 02:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:09:13.573 02:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 ************************************ 00:09:13.573 START TEST rpc_plugins 00:09:13.573 ************************************ 00:09:13.573 02:32:38 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:09:13.573 02:32:38 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:13.573 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.573 02:32:38 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:13.573 02:32:38 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:13.573 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.573 02:32:38 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:13.573 { 00:09:13.573 "name": "Malloc1", 00:09:13.573 "aliases": [ 00:09:13.573 "1a13847a-aedd-4e1c-944a-333e15898977" 00:09:13.573 ], 00:09:13.573 "product_name": "Malloc disk", 00:09:13.573 "block_size": 4096, 00:09:13.573 "num_blocks": 256, 00:09:13.573 "uuid": "1a13847a-aedd-4e1c-944a-333e15898977", 00:09:13.573 "assigned_rate_limits": { 00:09:13.573 "rw_ios_per_sec": 0, 00:09:13.573 "rw_mbytes_per_sec": 0, 00:09:13.573 "r_mbytes_per_sec": 0, 00:09:13.573 "w_mbytes_per_sec": 0 00:09:13.573 }, 00:09:13.573 "claimed": false, 00:09:13.573 "zoned": false, 00:09:13.573 "supported_io_types": { 00:09:13.573 "read": true, 00:09:13.573 "write": true, 00:09:13.573 "unmap": true, 00:09:13.573 "write_zeroes": true, 00:09:13.573 "flush": true, 00:09:13.573 "reset": true, 00:09:13.573 "compare": false, 00:09:13.573 "compare_and_write": false, 00:09:13.573 "abort": true, 00:09:13.573 "nvme_admin": false, 00:09:13.573 "nvme_io": false 00:09:13.573 }, 00:09:13.573 "memory_domains": [ 00:09:13.573 { 00:09:13.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.573 "dma_device_type": 2 00:09:13.573 } 00:09:13.573 ], 00:09:13.573 "driver_specific": {} 00:09:13.573 } 00:09:13.573 ]' 00:09:13.573 02:32:38 -- rpc/rpc.sh@32 -- # jq length 00:09:13.573 02:32:38 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:13.573 02:32:38 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:13.573 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.573 02:32:38 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:13.573 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.573 02:32:38 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:13.573 02:32:38 -- rpc/rpc.sh@36 -- # jq length 00:09:13.573 02:32:38 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:13.573 00:09:13.573 real 0m0.149s 00:09:13.573 user 0m0.116s 00:09:13.573 sys 0m0.010s 00:09:13.573 02:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.573 ************************************ 00:09:13.573 END TEST rpc_plugins 00:09:13.573 ************************************ 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 02:32:38 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:09:13.573 02:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.573 02:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.573 ************************************ 00:09:13.573 START TEST rpc_trace_cmd_test 00:09:13.573 ************************************ 00:09:13.573 02:32:38 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:09:13.573 02:32:38 -- rpc/rpc.sh@40 -- # local info 00:09:13.573 02:32:38 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:13.573 02:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.573 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:13.833 02:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.833 02:32:38 -- rpc/rpc.sh@42 -- # info='{ 00:09:13.833 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid116437", 00:09:13.833 "tpoint_group_mask": "0x8", 00:09:13.833 "iscsi_conn": { 00:09:13.833 "mask": "0x2", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "scsi": { 00:09:13.833 "mask": "0x4", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "bdev": { 00:09:13.833 "mask": "0x8", 00:09:13.833 "tpoint_mask": "0xffffffffffffffff" 00:09:13.833 }, 00:09:13.833 "nvmf_rdma": { 00:09:13.833 "mask": "0x10", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "nvmf_tcp": { 00:09:13.833 "mask": "0x20", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "ftl": { 00:09:13.833 "mask": "0x40", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "blobfs": { 00:09:13.833 "mask": "0x80", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "dsa": { 00:09:13.833 "mask": "0x200", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "thread": { 00:09:13.833 "mask": "0x400", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "nvme_pcie": { 00:09:13.833 "mask": "0x800", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "iaa": { 00:09:13.833 "mask": "0x1000", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "nvme_tcp": { 00:09:13.833 "mask": "0x2000", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 }, 00:09:13.833 "bdev_nvme": { 00:09:13.833 "mask": "0x4000", 00:09:13.833 "tpoint_mask": "0x0" 00:09:13.833 } 00:09:13.833 }' 00:09:13.833 02:32:38 -- rpc/rpc.sh@43 -- # jq length 00:09:13.833 02:32:38 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:13.833 02:32:38 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:13.833 02:32:38 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:13.833 02:32:38 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:13.833 02:32:38 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:13.833 02:32:38 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:13.833 02:32:38 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:13.833 02:32:38 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:14.093 02:32:38 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:14.093 00:09:14.093 real 0m0.302s 00:09:14.093 user 0m0.263s 00:09:14.093 sys 0m0.025s 00:09:14.093 ************************************ 00:09:14.093 END TEST rpc_trace_cmd_test 00:09:14.093 ************************************ 00:09:14.093 02:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.093 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 02:32:38 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:14.094 02:32:38 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:14.094 02:32:38 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:09:14.094 02:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.094 02:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.094 02:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 ************************************ 00:09:14.094 START TEST rpc_daemon_integrity 00:09:14.094 ************************************ 00:09:14.094 02:32:39 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:14.094 02:32:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:14.094 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.094 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.094 02:32:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:14.094 02:32:39 -- rpc/rpc.sh@13 -- # jq length 00:09:14.094 02:32:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:14.094 02:32:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:14.094 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.094 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.094 02:32:39 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:14.094 02:32:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:14.094 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.094 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.094 02:32:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:14.094 { 00:09:14.094 "name": "Malloc2", 00:09:14.094 "aliases": [ 00:09:14.094 "b37f224d-6c42-45d1-852f-955b60fa3b9b" 00:09:14.094 ], 00:09:14.094 "product_name": "Malloc disk", 00:09:14.094 "block_size": 512, 00:09:14.094 "num_blocks": 16384, 00:09:14.094 "uuid": "b37f224d-6c42-45d1-852f-955b60fa3b9b", 00:09:14.094 "assigned_rate_limits": { 00:09:14.094 "rw_ios_per_sec": 0, 00:09:14.094 "rw_mbytes_per_sec": 0, 00:09:14.094 "r_mbytes_per_sec": 0, 00:09:14.094 "w_mbytes_per_sec": 0 00:09:14.094 }, 00:09:14.094 "claimed": false, 00:09:14.094 "zoned": false, 00:09:14.094 "supported_io_types": { 00:09:14.094 "read": true, 00:09:14.094 "write": true, 00:09:14.094 "unmap": true, 00:09:14.094 "write_zeroes": true, 00:09:14.094 "flush": true, 00:09:14.094 "reset": true, 00:09:14.094 "compare": false, 00:09:14.094 "compare_and_write": false, 00:09:14.094 "abort": true, 00:09:14.094 "nvme_admin": false, 00:09:14.094 "nvme_io": false 00:09:14.094 }, 00:09:14.094 "memory_domains": [ 00:09:14.094 { 00:09:14.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.094 "dma_device_type": 2 00:09:14.094 } 00:09:14.094 ], 00:09:14.094 "driver_specific": {} 00:09:14.094 } 00:09:14.094 ]' 00:09:14.094 02:32:39 -- rpc/rpc.sh@17 -- # jq length 00:09:14.094 02:32:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:14.094 02:32:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:14.094 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.094 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 [2024-07-11 02:32:39.149968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:14.094 [2024-07-11 02:32:39.150111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.094 [2024-07-11 02:32:39.150151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.094 
[2024-07-11 02:32:39.150173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.094 [2024-07-11 02:32:39.152603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.094 [2024-07-11 02:32:39.152689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:14.094 Passthru0 00:09:14.094 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.094 02:32:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:14.094 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.094 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.094 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.094 02:32:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:14.094 { 00:09:14.094 "name": "Malloc2", 00:09:14.094 "aliases": [ 00:09:14.094 "b37f224d-6c42-45d1-852f-955b60fa3b9b" 00:09:14.094 ], 00:09:14.094 "product_name": "Malloc disk", 00:09:14.094 "block_size": 512, 00:09:14.094 "num_blocks": 16384, 00:09:14.094 "uuid": "b37f224d-6c42-45d1-852f-955b60fa3b9b", 00:09:14.094 "assigned_rate_limits": { 00:09:14.094 "rw_ios_per_sec": 0, 00:09:14.094 "rw_mbytes_per_sec": 0, 00:09:14.094 "r_mbytes_per_sec": 0, 00:09:14.094 "w_mbytes_per_sec": 0 00:09:14.094 }, 00:09:14.094 "claimed": true, 00:09:14.094 "claim_type": "exclusive_write", 00:09:14.094 "zoned": false, 00:09:14.094 "supported_io_types": { 00:09:14.094 "read": true, 00:09:14.094 "write": true, 00:09:14.094 "unmap": true, 00:09:14.094 "write_zeroes": true, 00:09:14.094 "flush": true, 00:09:14.094 "reset": true, 00:09:14.094 "compare": false, 00:09:14.094 "compare_and_write": false, 00:09:14.094 "abort": true, 00:09:14.094 "nvme_admin": false, 00:09:14.094 "nvme_io": false 00:09:14.094 }, 00:09:14.094 "memory_domains": [ 00:09:14.094 { 00:09:14.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.094 "dma_device_type": 2 00:09:14.094 } 00:09:14.094 ], 00:09:14.094 "driver_specific": {} 00:09:14.094 }, 00:09:14.094 { 00:09:14.094 "name": "Passthru0", 00:09:14.094 "aliases": [ 00:09:14.094 "b8901dab-8786-5245-9ac2-7de673902e96" 00:09:14.094 ], 00:09:14.094 "product_name": "passthru", 00:09:14.094 "block_size": 512, 00:09:14.094 "num_blocks": 16384, 00:09:14.094 "uuid": "b8901dab-8786-5245-9ac2-7de673902e96", 00:09:14.094 "assigned_rate_limits": { 00:09:14.094 "rw_ios_per_sec": 0, 00:09:14.094 "rw_mbytes_per_sec": 0, 00:09:14.094 "r_mbytes_per_sec": 0, 00:09:14.094 "w_mbytes_per_sec": 0 00:09:14.094 }, 00:09:14.094 "claimed": false, 00:09:14.094 "zoned": false, 00:09:14.094 "supported_io_types": { 00:09:14.094 "read": true, 00:09:14.094 "write": true, 00:09:14.094 "unmap": true, 00:09:14.094 "write_zeroes": true, 00:09:14.094 "flush": true, 00:09:14.094 "reset": true, 00:09:14.094 "compare": false, 00:09:14.094 "compare_and_write": false, 00:09:14.094 "abort": true, 00:09:14.094 "nvme_admin": false, 00:09:14.094 "nvme_io": false 00:09:14.094 }, 00:09:14.094 "memory_domains": [ 00:09:14.094 { 00:09:14.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.094 "dma_device_type": 2 00:09:14.094 } 00:09:14.094 ], 00:09:14.094 "driver_specific": { 00:09:14.094 "passthru": { 00:09:14.094 "name": "Passthru0", 00:09:14.094 "base_bdev_name": "Malloc2" 00:09:14.094 } 00:09:14.094 } 00:09:14.094 } 00:09:14.094 ]' 00:09:14.094 02:32:39 -- rpc/rpc.sh@21 -- # jq length 00:09:14.353 02:32:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:14.353 02:32:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:14.353 02:32:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.353 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.353 02:32:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:14.353 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.353 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.353 02:32:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:14.353 02:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.353 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 02:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.353 02:32:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:14.353 02:32:39 -- rpc/rpc.sh@26 -- # jq length 00:09:14.353 02:32:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:14.353 00:09:14.353 real 0m0.302s 00:09:14.353 user 0m0.228s 00:09:14.353 sys 0m0.024s 00:09:14.353 02:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.353 ************************************ 00:09:14.353 END TEST rpc_daemon_integrity 00:09:14.353 ************************************ 00:09:14.353 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 02:32:39 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:14.353 02:32:39 -- rpc/rpc.sh@84 -- # killprocess 116437 00:09:14.353 02:32:39 -- common/autotest_common.sh@926 -- # '[' -z 116437 ']' 00:09:14.353 02:32:39 -- common/autotest_common.sh@930 -- # kill -0 116437 00:09:14.353 02:32:39 -- common/autotest_common.sh@931 -- # uname 00:09:14.353 02:32:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:14.354 02:32:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116437 00:09:14.354 02:32:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:14.354 02:32:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:14.354 02:32:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116437' 00:09:14.354 killing process with pid 116437 00:09:14.354 02:32:39 -- common/autotest_common.sh@945 -- # kill 116437 00:09:14.354 02:32:39 -- common/autotest_common.sh@950 -- # wait 116437 00:09:14.921 ************************************ 00:09:14.921 END TEST rpc 00:09:14.921 ************************************ 00:09:14.921 00:09:14.921 real 0m2.788s 00:09:14.921 user 0m3.672s 00:09:14.921 sys 0m0.585s 00:09:14.921 02:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.921 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.921 02:32:39 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.921 02:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.921 02:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.921 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.921 ************************************ 00:09:14.921 START TEST rpc_client 00:09:14.921 ************************************ 00:09:14.921 02:32:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.921 * Looking for test storage... 
00:09:14.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:14.921 02:32:39 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:14.921 OK 00:09:14.921 02:32:39 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:14.921 00:09:14.921 real 0m0.124s 00:09:14.921 user 0m0.086s 00:09:14.922 sys 0m0.050s 00:09:14.922 02:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.922 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.922 ************************************ 00:09:14.922 END TEST rpc_client 00:09:14.922 ************************************ 00:09:14.922 02:32:39 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:14.922 02:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.922 02:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.922 02:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.922 ************************************ 00:09:14.922 START TEST json_config 00:09:14.922 ************************************ 00:09:14.922 02:32:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:15.180 02:32:40 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.180 02:32:40 -- nvmf/common.sh@7 -- # uname -s 00:09:15.180 02:32:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.180 02:32:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.180 02:32:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.180 02:32:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.180 02:32:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.180 02:32:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.180 02:32:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.180 02:32:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.180 02:32:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.180 02:32:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.180 02:32:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331977d4-085d-457e-baee-ade30535d655 00:09:15.180 02:32:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=331977d4-085d-457e-baee-ade30535d655 00:09:15.180 02:32:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.180 02:32:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.180 02:32:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:15.180 02:32:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.180 02:32:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.180 02:32:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.180 02:32:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.180 02:32:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:15.180 02:32:40 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:15.180 02:32:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:15.180 02:32:40 -- paths/export.sh@5 -- # export PATH 00:09:15.180 02:32:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:15.180 02:32:40 -- nvmf/common.sh@46 -- # : 0 00:09:15.180 02:32:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:15.180 02:32:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:15.180 02:32:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:15.180 02:32:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.180 02:32:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.180 02:32:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:15.180 02:32:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:15.180 02:32:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:15.180 02:32:40 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:15.180 02:32:40 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:09:15.180 02:32:40 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:15.180 02:32:40 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:09:15.180 02:32:40 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:15.180 02:32:40 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:09:15.180 02:32:40 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:15.180 02:32:40 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:09:15.180 02:32:40 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:15.180 02:32:40 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:15.180 02:32:40 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:15.180 INFO: JSON configuration test init 00:09:15.180 02:32:40 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test 
init' 00:09:15.180 02:32:40 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:15.180 02:32:40 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:15.180 02:32:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:15.180 02:32:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.180 02:32:40 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:15.180 02:32:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:15.180 02:32:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.180 02:32:40 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:15.180 02:32:40 -- json_config/json_config.sh@98 -- # local app=target 00:09:15.180 02:32:40 -- json_config/json_config.sh@99 -- # shift 00:09:15.180 02:32:40 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:15.180 02:32:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:15.180 02:32:40 -- json_config/json_config.sh@111 -- # app_pid[$app]=116724 00:09:15.180 Waiting for target to run... 00:09:15.180 02:32:40 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:15.180 02:32:40 -- json_config/json_config.sh@114 -- # waitforlisten 116724 /var/tmp/spdk_tgt.sock 00:09:15.180 02:32:40 -- common/autotest_common.sh@819 -- # '[' -z 116724 ']' 00:09:15.180 02:32:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:15.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:15.180 02:32:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:15.180 02:32:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:15.180 02:32:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:15.180 02:32:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.180 02:32:40 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:15.180 [2024-07-11 02:32:40.159860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:15.181 [2024-07-11 02:32:40.160400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116724 ] 00:09:15.747 [2024-07-11 02:32:40.608634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.747 [2024-07-11 02:32:40.673125] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.747 [2024-07-11 02:32:40.673417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.006 02:32:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.006 02:32:41 -- common/autotest_common.sh@852 -- # return 0 00:09:16.006 02:32:41 -- json_config/json_config.sh@115 -- # echo '' 00:09:16.006 00:09:16.006 02:32:41 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:16.006 02:32:41 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:16.006 02:32:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:16.006 02:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:16.006 02:32:41 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:16.006 02:32:41 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:16.006 02:32:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:16.006 02:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:16.264 02:32:41 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:16.264 02:32:41 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:16.264 02:32:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:16.523 02:32:41 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:16.523 02:32:41 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:16.523 02:32:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:16.523 02:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:16.523 02:32:41 -- json_config/json_config.sh@48 -- # local ret=0 00:09:16.523 02:32:41 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:09:16.523 02:32:41 -- json_config/json_config.sh@49 -- # local enabled_types 00:09:16.523 02:32:41 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:09:16.523 02:32:41 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:16.523 02:32:41 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:16.523 02:32:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:16.782 02:32:41 -- json_config/json_config.sh@51 -- # local get_types 00:09:16.782 02:32:41 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:16.782 02:32:41 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:16.782 02:32:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:16.782 02:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:16.782 02:32:41 -- json_config/json_config.sh@58 -- # return 0 00:09:16.782 02:32:41 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:09:16.782 02:32:41 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:09:16.782 02:32:41 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:09:16.782 02:32:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:16.782 02:32:41 -- common/autotest_common.sh@10 -- # set +x 00:09:16.782 02:32:41 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:09:16.782 02:32:41 -- json_config/json_config.sh@160 -- # local expected_notifications 00:09:16.782 02:32:41 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:09:16.782 02:32:41 -- json_config/json_config.sh@164 -- # get_notifications 00:09:16.782 02:32:41 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:16.782 02:32:41 -- json_config/json_config.sh@64 -- # IFS=: 00:09:16.782 02:32:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:16.782 02:32:41 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:16.782 02:32:41 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:16.782 02:32:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:17.041 02:32:42 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:17.041 02:32:42 -- json_config/json_config.sh@64 -- # IFS=: 00:09:17.041 02:32:42 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:17.041 02:32:42 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:09:17.041 02:32:42 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:09:17.041 02:32:42 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:17.041 02:32:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:17.316 Nvme0n1p0 Nvme0n1p1 00:09:17.316 02:32:42 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:17.316 02:32:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:17.576 [2024-07-11 02:32:42.403624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:17.576 [2024-07-11 02:32:42.403755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:17.576 00:09:17.576 02:32:42 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:17.576 02:32:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:17.576 Malloc3 00:09:17.576 02:32:42 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:17.576 02:32:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:17.834 [2024-07-11 02:32:42.847802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:17.834 [2024-07-11 02:32:42.847938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.834 [2024-07-11 02:32:42.847981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.834 [2024-07-11 02:32:42.848007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:17.834 [2024-07-11 02:32:42.850229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.834 [2024-07-11 02:32:42.850310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:17.834 PTBdevFromMalloc3 00:09:17.834 02:32:42 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:17.834 02:32:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:18.092 Null0 00:09:18.092 02:32:43 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:18.092 02:32:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:18.351 Malloc0 00:09:18.351 02:32:43 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:18.351 02:32:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:18.351 Malloc1 00:09:18.619 02:32:43 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:18.620 02:32:43 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:18.880 102400+0 records in 00:09:18.880 102400+0 records out 00:09:18.880 104857600 bytes (105 MB, 100 MiB) copied, 0.26103 s, 402 MB/s 00:09:18.880 02:32:43 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:18.880 02:32:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:18.880 aio_disk 00:09:19.137 02:32:43 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:19.137 02:32:43 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:19.137 02:32:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:19.137 32d84ab8-6fd6-4f59-984f-c5586df8984a 00:09:19.137 02:32:44 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:19.137 02:32:44 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:19.137 02:32:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:19.394 02:32:44 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:19.394 02:32:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:19.652 02:32:44 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:19.652 02:32:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:19.910 02:32:44 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:19.910 02:32:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:20.168 02:32:45 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:09:20.168 02:32:45 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:09:20.168 02:32:45 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc 00:09:20.168 02:32:45 -- json_config/json_config.sh@70 -- # local events_to_check 00:09:20.168 02:32:45 -- json_config/json_config.sh@71 -- # local recorded_events 00:09:20.168 02:32:45 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:20.168 02:32:45 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc 00:09:20.168 02:32:45 -- json_config/json_config.sh@74 -- # sort 00:09:20.168 02:32:45 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:09:20.168 02:32:45 -- json_config/json_config.sh@75 -- # get_notifications 00:09:20.168 02:32:45 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:20.168 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.168 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.168 02:32:45 -- json_config/json_config.sh@75 -- # sort 00:09:20.168 02:32:45 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:20.168 02:32:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:20.168 02:32:45 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@65 -- # echo bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # IFS=: 00:09:20.427 02:32:45 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:20.427 02:32:45 -- json_config/json_config.sh@77 
-- # [[ bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc bdev_register:aio_disk bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\1\a\1\c\b\3\a\-\b\9\b\5\-\4\3\1\f\-\9\b\2\1\-\3\e\5\5\1\5\7\4\c\b\6\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\5\4\8\f\d\b\5\-\b\5\d\c\-\4\6\9\7\-\b\c\1\4\-\7\8\a\8\a\2\7\f\4\8\f\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\3\6\3\6\7\c\6\-\4\6\2\0\-\4\5\a\1\-\9\c\9\a\-\7\3\7\e\3\7\c\1\c\4\2\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\7\8\0\b\c\0\5\-\e\4\9\7\-\4\3\c\a\-\b\d\9\7\-\2\a\9\7\4\e\4\d\8\6\a\6 ]] 00:09:20.427 02:32:45 -- json_config/json_config.sh@89 -- # cat 00:09:20.427 02:32:45 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc bdev_register:aio_disk bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 00:09:20.427 Expected events matched: 00:09:20.427 bdev_register:81a1cb3a-b9b5-431f-9b21-3e551574cb60 00:09:20.427 bdev_register:Malloc0 00:09:20.427 bdev_register:Malloc0p0 00:09:20.427 bdev_register:Malloc0p1 00:09:20.427 bdev_register:Malloc0p2 00:09:20.427 bdev_register:Malloc1 00:09:20.427 bdev_register:Malloc3 00:09:20.427 bdev_register:Null0 00:09:20.427 bdev_register:Nvme0n1 00:09:20.427 bdev_register:Nvme0n1p0 00:09:20.427 bdev_register:Nvme0n1p1 00:09:20.427 bdev_register:PTBdevFromMalloc3 00:09:20.428 bdev_register:a548fdb5-b5dc-4697-bc14-78a8a27f48fc 00:09:20.428 bdev_register:aio_disk 00:09:20.428 bdev_register:e36367c6-4620-45a1-9c9a-737e37c1c428 00:09:20.428 bdev_register:e780bc05-e497-43ca-bd97-2a974e4d86a6 00:09:20.428 02:32:45 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:09:20.428 02:32:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:20.428 02:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 02:32:45 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:09:20.428 02:32:45 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:09:20.428 02:32:45 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:09:20.428 02:32:45 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:20.428 02:32:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:20.428 02:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:20.428 
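
The notification check that just passed (json_config.sh@70-77 above) works by sorting both the expected and the recorded bdev_register events and comparing the resulting word lists; with both sides sorted, registration order cannot cause a false mismatch. A minimal sketch of that pattern, with get_notifications standing in for the traced helper that prints one type:ctx:id line per event over the RPC socket:

    tgt_check_notifications() {
        local events_to_check recorded_events
        # Expected events arrive as arguments; print one per line and sort.
        events_to_check=($(printf '%s\n' "$@" | sort))
        # Events the target actually emitted, sorted the same way.
        recorded_events=($(get_notifications | sort))
        # With both lists sorted, a plain word-list comparison suffices;
        # the traced script tests the negation and fails the run on mismatch.
        [[ "${events_to_check[*]}" == "${recorded_events[*]}" ]]
    }
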
02:32:45 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:20.428 02:32:45 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:20.428 02:32:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:20.686 MallocBdevForConfigChangeCheck 00:09:20.686 02:32:45 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:20.686 02:32:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:20.686 02:32:45 -- common/autotest_common.sh@10 -- # set +x 00:09:20.686 02:32:45 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:20.686 02:32:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:20.945 INFO: shutting down applications... 00:09:20.945 02:32:45 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:09:20.945 02:32:45 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:20.945 02:32:45 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:20.945 02:32:45 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:20.945 02:32:45 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:21.203 [2024-07-11 02:32:46.112236] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:21.203 Calling clear_vhost_scsi_subsystem 00:09:21.203 Calling clear_iscsi_subsystem 00:09:21.203 Calling clear_vhost_blk_subsystem 00:09:21.203 Calling clear_nbd_subsystem 00:09:21.203 Calling clear_nvmf_subsystem 00:09:21.203 Calling clear_bdev_subsystem 00:09:21.203 Calling clear_accel_subsystem 00:09:21.203 Calling clear_iobuf_subsystem 00:09:21.203 Calling clear_sock_subsystem 00:09:21.203 Calling clear_vmd_subsystem 00:09:21.203 Calling clear_scheduler_subsystem 00:09:21.203 02:32:46 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:21.203 02:32:46 -- json_config/json_config.sh@396 -- # count=100 00:09:21.203 02:32:46 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:21.203 02:32:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:21.203 02:32:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:21.203 02:32:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:21.770 02:32:46 -- json_config/json_config.sh@398 -- # break 00:09:21.770 02:32:46 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:21.770 02:32:46 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:21.770 02:32:46 -- json_config/json_config.sh@120 -- # local app=target 00:09:21.770 02:32:46 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:21.770 02:32:46 -- json_config/json_config.sh@124 -- # [[ -n 116724 ]] 00:09:21.770 02:32:46 -- json_config/json_config.sh@127 -- # kill -SIGINT 116724 00:09:21.770 02:32:46 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:21.770 02:32:46 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:21.770 02:32:46 -- 
json_config/json_config.sh@130 -- # kill -0 116724 00:09:21.770 02:32:46 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:22.336 02:32:47 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:22.336 02:32:47 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:22.336 02:32:47 -- json_config/json_config.sh@130 -- # kill -0 116724 00:09:22.336 SPDK target shutdown done 00:09:22.336 02:32:47 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:22.336 02:32:47 -- json_config/json_config.sh@132 -- # break 00:09:22.336 02:32:47 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:22.336 02:32:47 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:22.336 INFO: relaunching applications... 00:09:22.336 02:32:47 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:09:22.336 02:32:47 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.336 02:32:47 -- json_config/json_config.sh@98 -- # local app=target 00:09:22.336 02:32:47 -- json_config/json_config.sh@99 -- # shift 00:09:22.336 02:32:47 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:22.336 02:32:47 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:22.336 02:32:47 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:22.336 02:32:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:22.336 02:32:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:22.336 02:32:47 -- json_config/json_config.sh@111 -- # app_pid[$app]=116968 00:09:22.336 02:32:47 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:22.336 Waiting for target to run... 00:09:22.336 02:32:47 -- json_config/json_config.sh@114 -- # waitforlisten 116968 /var/tmp/spdk_tgt.sock 00:09:22.336 02:32:47 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.336 02:32:47 -- common/autotest_common.sh@819 -- # '[' -z 116968 ']' 00:09:22.336 02:32:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:22.336 02:32:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.336 02:32:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:22.336 02:32:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.336 02:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:22.336 [2024-07-11 02:32:47.183357] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
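
The teardown-and-relaunch step traced above reduces to three commands: persist the live configuration over RPC, SIGINT the old target, and boot a fresh one from the saved JSON. A sketch using the exact paths and flags from the trace ($old_pid is hypothetical shorthand for the previous target's pid, 116724 above):

    # Persist the running configuration to the file the relaunch will consume.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    kill -SIGINT "$old_pid"    # graceful shutdown; reactors drain and exit
    # Fresh target, restoring the same bdev/lvol topology from the JSON file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
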
00:09:22.336 [2024-07-11 02:32:47.184258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116968 ] 00:09:22.594 [2024-07-11 02:32:47.596508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.594 [2024-07-11 02:32:47.660030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.594 [2024-07-11 02:32:47.660299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.851 [2024-07-11 02:32:47.808832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:22.851 [2024-07-11 02:32:47.808961] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:22.851 [2024-07-11 02:32:47.816803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:22.851 [2024-07-11 02:32:47.816893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:22.851 [2024-07-11 02:32:47.824843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:22.851 [2024-07-11 02:32:47.824918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:22.851 [2024-07-11 02:32:47.824955] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:22.851 [2024-07-11 02:32:47.909756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:22.851 [2024-07-11 02:32:47.909892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.851 [2024-07-11 02:32:47.909931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:22.851 [2024-07-11 02:32:47.909958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.851 [2024-07-11 02:32:47.910540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.851 [2024-07-11 02:32:47.910643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:23.109 02:32:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.109 02:32:48 -- common/autotest_common.sh@852 -- # return 0 00:09:23.109 00:09:23.109 INFO: Checking if target configuration is the same... 00:09:23.109 02:32:48 -- json_config/json_config.sh@115 -- # echo '' 00:09:23.109 02:32:48 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:23.109 02:32:48 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:23.109 02:32:48 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:23.109 02:32:48 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:23.109 02:32:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:23.109 + '[' 2 -ne 2 ']' 00:09:23.109 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:23.109 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:23.109 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:23.109 +++ basename /dev/fd/62 00:09:23.109 ++ mktemp /tmp/62.XXX 00:09:23.109 + tmp_file_1=/tmp/62.SYg 00:09:23.109 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:23.109 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:23.109 + tmp_file_2=/tmp/spdk_tgt_config.json.9DH 00:09:23.109 + ret=0 00:09:23.109 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:23.367 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:23.625 + diff -u /tmp/62.SYg /tmp/spdk_tgt_config.json.9DH 00:09:23.625 INFO: JSON config files are the same 00:09:23.625 + echo 'INFO: JSON config files are the same' 00:09:23.625 + rm /tmp/62.SYg /tmp/spdk_tgt_config.json.9DH 00:09:23.625 + exit 0 00:09:23.625 INFO: changing configuration and checking if this can be detected... 00:09:23.625 02:32:48 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:23.625 02:32:48 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:23.625 02:32:48 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:23.625 02:32:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:23.883 02:32:48 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:23.883 02:32:48 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:23.883 02:32:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:23.883 + '[' 2 -ne 2 ']' 00:09:23.883 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:23.883 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:23.883 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:23.883 +++ basename /dev/fd/62 00:09:23.883 ++ mktemp /tmp/62.XXX 00:09:23.883 + tmp_file_1=/tmp/62.e9B 00:09:23.883 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:23.883 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:23.883 + tmp_file_2=/tmp/spdk_tgt_config.json.9DI 00:09:23.883 + ret=0 00:09:23.883 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:24.142 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:24.142 + diff -u /tmp/62.e9B /tmp/spdk_tgt_config.json.9DI 00:09:24.142 + ret=1 00:09:24.142 + echo '=== Start of file: /tmp/62.e9B ===' 00:09:24.142 + cat /tmp/62.e9B 00:09:24.142 + echo '=== End of file: /tmp/62.e9B ===' 00:09:24.142 + echo '' 00:09:24.142 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9DI ===' 00:09:24.142 + cat /tmp/spdk_tgt_config.json.9DI 00:09:24.142 + echo '=== End of file: /tmp/spdk_tgt_config.json.9DI ===' 00:09:24.142 + echo '' 00:09:24.142 + rm /tmp/62.e9B /tmp/spdk_tgt_config.json.9DI 00:09:24.142 + exit 1 00:09:24.142 INFO: configuration change detected. 00:09:24.142 02:32:49 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
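
The diff check traced above normalizes both JSON configs before comparing them, so key ordering alone can neither mask nor fake a difference. A sketch of the json_diff.sh flow, with $1 and $2 the two configs to compare (assuming, as the bare invocations above suggest, that config_filter.py reads stdin and writes stdout):

    rootdir=/home/vagrant/spdk_repo/spdk
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Canonicalize both configs: -method sort orders keys deterministically.
    $rootdir/test/json_config/config_filter.py -method sort < "$1" > "$tmp_file_1"
    $rootdir/test/json_config/config_filter.py -method sort < "$2" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$tmp_file_1" "$tmp_file_2"
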
00:09:24.142 02:32:49 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:24.142 02:32:49 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:24.142 02:32:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:24.142 02:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:24.142 02:32:49 -- json_config/json_config.sh@360 -- # local ret=0 00:09:24.142 02:32:49 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:24.142 02:32:49 -- json_config/json_config.sh@370 -- # [[ -n 116968 ]] 00:09:24.142 02:32:49 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:24.142 02:32:49 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:24.142 02:32:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:24.142 02:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:24.142 02:32:49 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:09:24.142 02:32:49 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:24.142 02:32:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:24.400 02:32:49 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:24.400 02:32:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:24.658 02:32:49 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:24.658 02:32:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:24.916 02:32:49 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:24.916 02:32:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:25.174 02:32:50 -- json_config/json_config.sh@246 -- # uname -s 00:09:25.174 02:32:50 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:25.174 02:32:50 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:25.174 02:32:50 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:25.174 02:32:50 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:25.174 02:32:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:25.174 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:09:25.174 02:32:50 -- json_config/json_config.sh@376 -- # killprocess 116968 00:09:25.174 02:32:50 -- common/autotest_common.sh@926 -- # '[' -z 116968 ']' 00:09:25.174 02:32:50 -- common/autotest_common.sh@930 -- # kill -0 116968 00:09:25.174 02:32:50 -- common/autotest_common.sh@931 -- # uname 00:09:25.174 02:32:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:25.174 02:32:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116968 00:09:25.174 02:32:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:25.174 killing process with pid 116968 00:09:25.175 02:32:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:25.175 02:32:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116968' 00:09:25.175 02:32:50 -- common/autotest_common.sh@945 -- # kill 116968 00:09:25.175 02:32:50 -- common/autotest_common.sh@950 -- # wait 116968 00:09:25.432 02:32:50 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:25.432 02:32:50 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:25.432 02:32:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:25.433 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 02:32:50 -- json_config/json_config.sh@381 -- # return 0 00:09:25.433 INFO: Success 00:09:25.433 02:32:50 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:25.433 00:09:25.433 real 0m10.399s 00:09:25.433 user 0m15.795s 00:09:25.433 sys 0m2.054s 00:09:25.433 02:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.433 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 ************************************ 00:09:25.433 END TEST json_config 00:09:25.433 ************************************ 00:09:25.433 02:32:50 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:25.433 02:32:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:25.433 02:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.433 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 ************************************ 00:09:25.433 START TEST json_config_extra_key 00:09:25.433 ************************************ 00:09:25.433 02:32:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:25.433 02:32:50 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.433 02:32:50 -- nvmf/common.sh@7 -- # uname -s 00:09:25.433 02:32:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.433 02:32:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.433 02:32:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.433 02:32:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.433 02:32:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.433 02:32:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.433 02:32:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.433 02:32:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.433 02:32:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.433 02:32:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.433 02:32:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fde4e5d-1f9a-4a25-9676-a98b12d06940 00:09:25.433 02:32:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=7fde4e5d-1f9a-4a25-9676-a98b12d06940 00:09:25.433 02:32:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.433 02:32:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.433 02:32:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:25.433 02:32:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.691 02:32:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.691 02:32:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.691 02:32:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.691 02:32:50 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.691 02:32:50 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.691 02:32:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.691 02:32:50 -- paths/export.sh@5 -- # export PATH 00:09:25.691 02:32:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.691 02:32:50 -- nvmf/common.sh@46 -- # : 0 00:09:25.691 02:32:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:25.691 02:32:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:25.691 02:32:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:25.691 02:32:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.691 02:32:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.691 02:32:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:25.691 02:32:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:25.691 02:32:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:09:25.691 INFO: launching applications... 
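
All per-app bookkeeping in this test is keyed by app name in bash associative arrays, declared for 'target' at json_config_extra_key.sh@16-19 above, so the same launch and shutdown helpers serve any app. A sketch of how the launch side consumes them; the declarations are verbatim from the trace, while the backgrounded launch is an assumed composition mirroring the command line traced below:

    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    # app_params stays unquoted on purpose: it must word-split into flags.
    $rootdir/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!    # record the pid under the app's name
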
00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=117145 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:25.691 Waiting for target to run... 00:09:25.691 02:32:50 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 117145 /var/tmp/spdk_tgt.sock 00:09:25.691 02:32:50 -- common/autotest_common.sh@819 -- # '[' -z 117145 ']' 00:09:25.691 02:32:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:25.691 02:32:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:25.691 02:32:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:25.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:25.691 02:32:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:25.691 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:09:25.691 [2024-07-11 02:32:50.586234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:25.692 [2024-07-11 02:32:50.586947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117145 ] 00:09:25.950 [2024-07-11 02:32:51.021926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.208 [2024-07-11 02:32:51.082963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.208 [2024-07-11 02:32:51.083215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.467 00:09:26.467 INFO: shutting down applications... 00:09:26.467 02:32:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:26.467 02:32:51 -- common/autotest_common.sh@852 -- # return 0 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
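
The shutdown traced next follows a fixed pattern (json_config_extra_key.sh@47-54 below): deliver SIGINT, then poll for process exit in half-second steps, giving up after 30 tries. A sketch:

    kill -SIGINT "${app_pid[$app]}"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests that the pid still exists.
        if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
            app_pid[$app]=    # clear the slot so later checks skip this app
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
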
00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 117145 ]] 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 117145 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 117145 00:09:26.467 02:32:51 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:27.034 SPDK target shutdown done 00:09:27.034 Success 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 117145 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:27.034 02:32:51 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:27.034 00:09:27.034 real 0m1.525s 00:09:27.034 user 0m1.375s 00:09:27.034 sys 0m0.407s 00:09:27.034 02:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.034 ************************************ 00:09:27.034 02:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 END TEST json_config_extra_key 00:09:27.034 ************************************ 00:09:27.034 02:32:52 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:27.034 02:32:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.034 02:32:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.034 02:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 ************************************ 00:09:27.034 START TEST alias_rpc 00:09:27.034 ************************************ 00:09:27.034 02:32:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:27.034 * Looking for test storage... 00:09:27.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:27.034 02:32:52 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:27.034 02:32:52 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=117225 00:09:27.034 02:32:52 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:27.034 02:32:52 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 117225 00:09:27.034 02:32:52 -- common/autotest_common.sh@819 -- # '[' -z 117225 ']' 00:09:27.034 02:32:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.034 02:32:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.034 02:32:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
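
The alias_rpc teardown below runs through killprocess from common/autotest_common.sh, which refuses to signal anything it cannot positively identify. A simplified sketch of the checks it traces (@926-950; the sudo branch is reduced to a bail-out here, where the real helper does more):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # @926: require a pid argument
        kill -0 "$pid" || return 1         # @930: pid must still be alive
        if [ "$(uname)" = Linux ]; then    # @931
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")    # @932
            # @936: a 'sudo' wrapper would need different handling; the traced
            # run sees reactor_0 here and proceeds.
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"    # @944
        kill "$pid"                             # @945
        wait "$pid"                             # @950: reap and propagate rc
    }
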
00:09:27.034 02:32:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.034 02:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:27.293 [2024-07-11 02:32:52.178102] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:27.293 [2024-07-11 02:32:52.178352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117225 ] 00:09:27.293 [2024-07-11 02:32:52.318316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.293 [2024-07-11 02:32:52.384636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:27.293 [2024-07-11 02:32:52.384873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.228 02:32:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.228 02:32:53 -- common/autotest_common.sh@852 -- # return 0 00:09:28.228 02:32:53 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:28.487 02:32:53 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 117225 00:09:28.487 02:32:53 -- common/autotest_common.sh@926 -- # '[' -z 117225 ']' 00:09:28.487 02:32:53 -- common/autotest_common.sh@930 -- # kill -0 117225 00:09:28.487 02:32:53 -- common/autotest_common.sh@931 -- # uname 00:09:28.487 02:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:28.487 02:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117225 00:09:28.487 killing process with pid 117225 00:09:28.487 02:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:28.487 02:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:28.487 02:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117225' 00:09:28.487 02:32:53 -- common/autotest_common.sh@945 -- # kill 117225 00:09:28.487 02:32:53 -- common/autotest_common.sh@950 -- # wait 117225 00:09:28.744 00:09:28.744 real 0m1.761s 00:09:28.744 user 0m1.962s 00:09:28.744 sys 0m0.421s 00:09:28.744 ************************************ 00:09:28.744 END TEST alias_rpc 00:09:28.744 ************************************ 00:09:28.744 02:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.744 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.008 02:32:53 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:29.008 02:32:53 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:29.008 02:32:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:29.008 02:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.008 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.008 ************************************ 00:09:29.008 START TEST spdkcli_tcp 00:09:29.008 ************************************ 00:09:29.008 02:32:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:29.008 * Looking for test storage... 
00:09:29.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:29.008 02:32:53 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:29.008 02:32:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:29.009 02:32:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:29.009 02:32:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:29.009 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=117306 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@27 -- # waitforlisten 117306 00:09:29.009 02:32:53 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:29.009 02:32:53 -- common/autotest_common.sh@819 -- # '[' -z 117306 ']' 00:09:29.009 02:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.009 02:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.009 02:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.009 02:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.009 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.009 [2024-07-11 02:32:53.999442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
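
For the TCP leg of this test, the target keeps listening on its UNIX socket; socat bridges it to 127.0.0.1:9998 so rpc.py can connect over TCP (spdkcli/tcp.sh@30-33 below, flags verbatim from the trace; the final kill is an assumed cleanup step):

    # Forward TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Talk JSON-RPC over TCP; -r 100 -t 2 are the retry and timeout settings
    # the test uses while the bridge comes up.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"    # tear the bridge down afterwards
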
00:09:29.009 [2024-07-11 02:32:53.999674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117306 ] 00:09:29.298 [2024-07-11 02:32:54.150565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.298 [2024-07-11 02:32:54.225130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.298 [2024-07-11 02:32:54.225624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.298 [2024-07-11 02:32:54.225646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.864 02:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.864 02:32:54 -- common/autotest_common.sh@852 -- # return 0 00:09:29.864 02:32:54 -- spdkcli/tcp.sh@31 -- # socat_pid=117321 00:09:29.864 02:32:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:29.864 02:32:54 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:30.122 [ 00:09:30.122 "spdk_get_version", 00:09:30.122 "rpc_get_methods", 00:09:30.122 "trace_get_info", 00:09:30.122 "trace_get_tpoint_group_mask", 00:09:30.122 "trace_disable_tpoint_group", 00:09:30.122 "trace_enable_tpoint_group", 00:09:30.122 "trace_clear_tpoint_mask", 00:09:30.122 "trace_set_tpoint_mask", 00:09:30.122 "framework_get_pci_devices", 00:09:30.122 "framework_get_config", 00:09:30.122 "framework_get_subsystems", 00:09:30.122 "iobuf_get_stats", 00:09:30.122 "iobuf_set_options", 00:09:30.122 "sock_set_default_impl", 00:09:30.122 "sock_impl_set_options", 00:09:30.122 "sock_impl_get_options", 00:09:30.122 "vmd_rescan", 00:09:30.122 "vmd_remove_device", 00:09:30.122 "vmd_enable", 00:09:30.122 "accel_get_stats", 00:09:30.122 "accel_set_options", 00:09:30.122 "accel_set_driver", 00:09:30.122 "accel_crypto_key_destroy", 00:09:30.122 "accel_crypto_keys_get", 00:09:30.122 "accel_crypto_key_create", 00:09:30.122 "accel_assign_opc", 00:09:30.122 "accel_get_module_info", 00:09:30.122 "accel_get_opc_assignments", 00:09:30.122 "notify_get_notifications", 00:09:30.122 "notify_get_types", 00:09:30.122 "bdev_get_histogram", 00:09:30.122 "bdev_enable_histogram", 00:09:30.122 "bdev_set_qos_limit", 00:09:30.122 "bdev_set_qd_sampling_period", 00:09:30.122 "bdev_get_bdevs", 00:09:30.122 "bdev_reset_iostat", 00:09:30.122 "bdev_get_iostat", 00:09:30.122 "bdev_examine", 00:09:30.122 "bdev_wait_for_examine", 00:09:30.122 "bdev_set_options", 00:09:30.122 "scsi_get_devices", 00:09:30.122 "thread_set_cpumask", 00:09:30.122 "framework_get_scheduler", 00:09:30.122 "framework_set_scheduler", 00:09:30.122 "framework_get_reactors", 00:09:30.122 "thread_get_io_channels", 00:09:30.122 "thread_get_pollers", 00:09:30.122 "thread_get_stats", 00:09:30.122 "framework_monitor_context_switch", 00:09:30.122 "spdk_kill_instance", 00:09:30.122 "log_enable_timestamps", 00:09:30.122 "log_get_flags", 00:09:30.122 "log_clear_flag", 00:09:30.122 "log_set_flag", 00:09:30.122 "log_get_level", 00:09:30.122 "log_set_level", 00:09:30.122 "log_get_print_level", 00:09:30.122 "log_set_print_level", 00:09:30.122 "framework_enable_cpumask_locks", 00:09:30.122 "framework_disable_cpumask_locks", 00:09:30.122 "framework_wait_init", 00:09:30.122 "framework_start_init", 00:09:30.122 "virtio_blk_create_transport", 00:09:30.123 "virtio_blk_get_transports", 
00:09:30.123 "vhost_controller_set_coalescing", 00:09:30.123 "vhost_get_controllers", 00:09:30.123 "vhost_delete_controller", 00:09:30.123 "vhost_create_blk_controller", 00:09:30.123 "vhost_scsi_controller_remove_target", 00:09:30.123 "vhost_scsi_controller_add_target", 00:09:30.123 "vhost_start_scsi_controller", 00:09:30.123 "vhost_create_scsi_controller", 00:09:30.123 "nbd_get_disks", 00:09:30.123 "nbd_stop_disk", 00:09:30.123 "nbd_start_disk", 00:09:30.123 "env_dpdk_get_mem_stats", 00:09:30.123 "nvmf_subsystem_get_listeners", 00:09:30.123 "nvmf_subsystem_get_qpairs", 00:09:30.123 "nvmf_subsystem_get_controllers", 00:09:30.123 "nvmf_get_stats", 00:09:30.123 "nvmf_get_transports", 00:09:30.123 "nvmf_create_transport", 00:09:30.123 "nvmf_get_targets", 00:09:30.123 "nvmf_delete_target", 00:09:30.123 "nvmf_create_target", 00:09:30.123 "nvmf_subsystem_allow_any_host", 00:09:30.123 "nvmf_subsystem_remove_host", 00:09:30.123 "nvmf_subsystem_add_host", 00:09:30.123 "nvmf_subsystem_remove_ns", 00:09:30.123 "nvmf_subsystem_add_ns", 00:09:30.123 "nvmf_subsystem_listener_set_ana_state", 00:09:30.123 "nvmf_discovery_get_referrals", 00:09:30.123 "nvmf_discovery_remove_referral", 00:09:30.123 "nvmf_discovery_add_referral", 00:09:30.123 "nvmf_subsystem_remove_listener", 00:09:30.123 "nvmf_subsystem_add_listener", 00:09:30.123 "nvmf_delete_subsystem", 00:09:30.123 "nvmf_create_subsystem", 00:09:30.123 "nvmf_get_subsystems", 00:09:30.123 "nvmf_set_crdt", 00:09:30.123 "nvmf_set_config", 00:09:30.123 "nvmf_set_max_subsystems", 00:09:30.123 "iscsi_set_options", 00:09:30.123 "iscsi_get_auth_groups", 00:09:30.123 "iscsi_auth_group_remove_secret", 00:09:30.123 "iscsi_auth_group_add_secret", 00:09:30.123 "iscsi_delete_auth_group", 00:09:30.123 "iscsi_create_auth_group", 00:09:30.123 "iscsi_set_discovery_auth", 00:09:30.123 "iscsi_get_options", 00:09:30.123 "iscsi_target_node_request_logout", 00:09:30.123 "iscsi_target_node_set_redirect", 00:09:30.123 "iscsi_target_node_set_auth", 00:09:30.123 "iscsi_target_node_add_lun", 00:09:30.123 "iscsi_get_connections", 00:09:30.123 "iscsi_portal_group_set_auth", 00:09:30.123 "iscsi_start_portal_group", 00:09:30.123 "iscsi_delete_portal_group", 00:09:30.123 "iscsi_create_portal_group", 00:09:30.123 "iscsi_get_portal_groups", 00:09:30.123 "iscsi_delete_target_node", 00:09:30.123 "iscsi_target_node_remove_pg_ig_maps", 00:09:30.123 "iscsi_target_node_add_pg_ig_maps", 00:09:30.123 "iscsi_create_target_node", 00:09:30.123 "iscsi_get_target_nodes", 00:09:30.123 "iscsi_delete_initiator_group", 00:09:30.123 "iscsi_initiator_group_remove_initiators", 00:09:30.123 "iscsi_initiator_group_add_initiators", 00:09:30.123 "iscsi_create_initiator_group", 00:09:30.123 "iscsi_get_initiator_groups", 00:09:30.123 "iaa_scan_accel_module", 00:09:30.123 "dsa_scan_accel_module", 00:09:30.123 "ioat_scan_accel_module", 00:09:30.123 "accel_error_inject_error", 00:09:30.123 "bdev_iscsi_delete", 00:09:30.123 "bdev_iscsi_create", 00:09:30.123 "bdev_iscsi_set_options", 00:09:30.123 "bdev_virtio_attach_controller", 00:09:30.123 "bdev_virtio_scsi_get_devices", 00:09:30.123 "bdev_virtio_detach_controller", 00:09:30.123 "bdev_virtio_blk_set_hotplug", 00:09:30.123 "bdev_ftl_set_property", 00:09:30.123 "bdev_ftl_get_properties", 00:09:30.123 "bdev_ftl_get_stats", 00:09:30.123 "bdev_ftl_unmap", 00:09:30.123 "bdev_ftl_unload", 00:09:30.123 "bdev_ftl_delete", 00:09:30.123 "bdev_ftl_load", 00:09:30.123 "bdev_ftl_create", 00:09:30.123 "bdev_aio_delete", 00:09:30.123 "bdev_aio_rescan", 00:09:30.123 "bdev_aio_create", 
00:09:30.123 "blobfs_create", 00:09:30.123 "blobfs_detect", 00:09:30.123 "blobfs_set_cache_size", 00:09:30.123 "bdev_zone_block_delete", 00:09:30.123 "bdev_zone_block_create", 00:09:30.123 "bdev_delay_delete", 00:09:30.123 "bdev_delay_create", 00:09:30.123 "bdev_delay_update_latency", 00:09:30.123 "bdev_split_delete", 00:09:30.123 "bdev_split_create", 00:09:30.123 "bdev_error_inject_error", 00:09:30.123 "bdev_error_delete", 00:09:30.123 "bdev_error_create", 00:09:30.123 "bdev_raid_set_options", 00:09:30.123 "bdev_raid_remove_base_bdev", 00:09:30.123 "bdev_raid_add_base_bdev", 00:09:30.123 "bdev_raid_delete", 00:09:30.123 "bdev_raid_create", 00:09:30.123 "bdev_raid_get_bdevs", 00:09:30.123 "bdev_lvol_grow_lvstore", 00:09:30.123 "bdev_lvol_get_lvols", 00:09:30.123 "bdev_lvol_get_lvstores", 00:09:30.123 "bdev_lvol_delete", 00:09:30.123 "bdev_lvol_set_read_only", 00:09:30.123 "bdev_lvol_resize", 00:09:30.123 "bdev_lvol_decouple_parent", 00:09:30.123 "bdev_lvol_inflate", 00:09:30.123 "bdev_lvol_rename", 00:09:30.123 "bdev_lvol_clone_bdev", 00:09:30.123 "bdev_lvol_clone", 00:09:30.123 "bdev_lvol_snapshot", 00:09:30.123 "bdev_lvol_create", 00:09:30.123 "bdev_lvol_delete_lvstore", 00:09:30.123 "bdev_lvol_rename_lvstore", 00:09:30.123 "bdev_lvol_create_lvstore", 00:09:30.123 "bdev_passthru_delete", 00:09:30.123 "bdev_passthru_create", 00:09:30.123 "bdev_nvme_cuse_unregister", 00:09:30.123 "bdev_nvme_cuse_register", 00:09:30.123 "bdev_opal_new_user", 00:09:30.123 "bdev_opal_set_lock_state", 00:09:30.123 "bdev_opal_delete", 00:09:30.123 "bdev_opal_get_info", 00:09:30.123 "bdev_opal_create", 00:09:30.123 "bdev_nvme_opal_revert", 00:09:30.123 "bdev_nvme_opal_init", 00:09:30.123 "bdev_nvme_send_cmd", 00:09:30.123 "bdev_nvme_get_path_iostat", 00:09:30.123 "bdev_nvme_get_mdns_discovery_info", 00:09:30.123 "bdev_nvme_stop_mdns_discovery", 00:09:30.123 "bdev_nvme_start_mdns_discovery", 00:09:30.123 "bdev_nvme_set_multipath_policy", 00:09:30.123 "bdev_nvme_set_preferred_path", 00:09:30.123 "bdev_nvme_get_io_paths", 00:09:30.123 "bdev_nvme_remove_error_injection", 00:09:30.123 "bdev_nvme_add_error_injection", 00:09:30.123 "bdev_nvme_get_discovery_info", 00:09:30.123 "bdev_nvme_stop_discovery", 00:09:30.123 "bdev_nvme_start_discovery", 00:09:30.123 "bdev_nvme_get_controller_health_info", 00:09:30.123 "bdev_nvme_disable_controller", 00:09:30.123 "bdev_nvme_enable_controller", 00:09:30.123 "bdev_nvme_reset_controller", 00:09:30.123 "bdev_nvme_get_transport_statistics", 00:09:30.123 "bdev_nvme_apply_firmware", 00:09:30.123 "bdev_nvme_detach_controller", 00:09:30.123 "bdev_nvme_get_controllers", 00:09:30.123 "bdev_nvme_attach_controller", 00:09:30.123 "bdev_nvme_set_hotplug", 00:09:30.123 "bdev_nvme_set_options", 00:09:30.123 "bdev_null_resize", 00:09:30.123 "bdev_null_delete", 00:09:30.123 "bdev_null_create", 00:09:30.123 "bdev_malloc_delete", 00:09:30.123 "bdev_malloc_create" 00:09:30.123 ] 00:09:30.123 02:32:55 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:30.123 02:32:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:30.123 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:30.382 02:32:55 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:30.382 02:32:55 -- spdkcli/tcp.sh@38 -- # killprocess 117306 00:09:30.382 02:32:55 -- common/autotest_common.sh@926 -- # '[' -z 117306 ']' 00:09:30.382 02:32:55 -- common/autotest_common.sh@930 -- # kill -0 117306 00:09:30.382 02:32:55 -- common/autotest_common.sh@931 -- # uname 00:09:30.382 02:32:55 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:30.382 02:32:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117306 00:09:30.382 02:32:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.382 02:32:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.382 killing process with pid 117306 00:09:30.382 02:32:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117306' 00:09:30.382 02:32:55 -- common/autotest_common.sh@945 -- # kill 117306 00:09:30.382 02:32:55 -- common/autotest_common.sh@950 -- # wait 117306 00:09:30.640 ************************************ 00:09:30.640 END TEST spdkcli_tcp 00:09:30.640 ************************************ 00:09:30.640 00:09:30.640 real 0m1.828s 00:09:30.640 user 0m3.315s 00:09:30.640 sys 0m0.538s 00:09:30.640 02:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.640 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:30.640 02:32:55 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:30.640 02:32:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.640 02:32:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.640 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:30.640 ************************************ 00:09:30.640 START TEST dpdk_mem_utility 00:09:30.640 ************************************ 00:09:30.640 02:32:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:30.897 * Looking for test storage... 00:09:30.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:30.897 02:32:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:30.897 02:32:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=117397 00:09:30.897 02:32:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 117397 00:09:30.897 02:32:55 -- common/autotest_common.sh@819 -- # '[' -z 117397 ']' 00:09:30.897 02:32:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.897 02:32:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.897 02:32:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.897 02:32:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.897 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:30.897 02:32:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.897 [2024-07-11 02:32:55.856429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
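
The mem-utility test below exercises two pieces: an RPC that makes the target dump its DPDK memory state to a file, and a post-processor that summarizes that dump. A sketch with paths as traced (test_dpdk_mem_info.sh@19-23):

    # Ask the running target to write its DPDK memory dump.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    # Summarize heaps, mempools, and memzones from the dump...
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # ...then the detailed view of heap 0, as the element listing below shows.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
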
00:09:30.897 [2024-07-11 02:32:55.856675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117397 ] 00:09:31.156 [2024-07-11 02:32:56.004095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.156 [2024-07-11 02:32:56.071333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.156 [2024-07-11 02:32:56.071635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.722 02:32:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.722 02:32:56 -- common/autotest_common.sh@852 -- # return 0 00:09:31.722 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:31.722 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:31.722 02:32:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:31.722 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:09:31.722 { 00:09:31.722 "filename": "/tmp/spdk_mem_dump.txt" 00:09:31.722 } 00:09:31.722 02:32:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:31.722 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:31.982 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:31.982 1 heaps totaling size 814.000000 MiB 00:09:31.982 size: 814.000000 MiB heap id: 0 00:09:31.982 end heaps---------- 00:09:31.982 8 mempools totaling size 598.116089 MiB 00:09:31.982 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:31.982 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:31.982 size: 84.521057 MiB name: bdev_io_117397 00:09:31.982 size: 51.011292 MiB name: evtpool_117397 00:09:31.982 size: 50.003479 MiB name: msgpool_117397 00:09:31.982 size: 21.763794 MiB name: PDU_Pool 00:09:31.982 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:31.982 size: 0.026123 MiB name: Session_Pool 00:09:31.982 end mempools------- 00:09:31.982 6 memzones totaling size 4.142822 MiB 00:09:31.982 size: 1.000366 MiB name: RG_ring_0_117397 00:09:31.982 size: 1.000366 MiB name: RG_ring_1_117397 00:09:31.982 size: 1.000366 MiB name: RG_ring_4_117397 00:09:31.982 size: 1.000366 MiB name: RG_ring_5_117397 00:09:31.982 size: 0.125366 MiB name: RG_ring_2_117397 00:09:31.982 size: 0.015991 MiB name: RG_ring_3_117397 00:09:31.982 end memzones------- 00:09:31.982 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:31.982 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:09:31.982 list of free elements. 
size: 12.485474 MiB 00:09:31.982 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:31.982 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:31.982 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:31.982 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:31.982 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:31.982 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:31.982 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:31.982 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:31.982 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:31.982 element at address: 0x20001aa00000 with size: 0.567322 MiB 00:09:31.982 element at address: 0x20000b200000 with size: 0.489624 MiB 00:09:31.982 element at address: 0x200000800000 with size: 0.486511 MiB 00:09:31.982 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:31.982 element at address: 0x200027e00000 with size: 0.402527 MiB 00:09:31.982 element at address: 0x200003a00000 with size: 0.351501 MiB 00:09:31.982 list of standard malloc elements. size: 199.251953 MiB 00:09:31.982 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:31.982 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:31.982 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:31.982 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:31.982 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:31.982 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:31.982 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:31.982 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:31.982 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:31.982 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:09:31.982 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:31.982 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:31.983 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93040 
with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:31.983 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e670c0 with size: 0.000183 MiB 
00:09:31.983 element at address: 0x200027e67180 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6dd80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:31.983 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:31.983 list of memzone associated elements. 
size: 602.262573 MiB 00:09:31.983 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:31.983 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:31.983 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:31.983 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:31.983 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:31.983 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_117397_0 00:09:31.983 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:31.983 associated memzone info: size: 48.002930 MiB name: MP_evtpool_117397_0 00:09:31.984 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:31.984 associated memzone info: size: 48.002930 MiB name: MP_msgpool_117397_0 00:09:31.984 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:31.984 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:31.984 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:31.984 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:31.984 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:31.984 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_117397 00:09:31.984 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:31.984 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_117397 00:09:31.984 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:31.984 associated memzone info: size: 1.007996 MiB name: MP_evtpool_117397 00:09:31.984 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:31.984 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:31.984 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:31.984 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:31.984 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:31.984 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:31.984 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:31.984 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:31.984 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:31.984 associated memzone info: size: 1.000366 MiB name: RG_ring_0_117397 00:09:31.984 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:31.984 associated memzone info: size: 1.000366 MiB name: RG_ring_1_117397 00:09:31.984 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:31.984 associated memzone info: size: 1.000366 MiB name: RG_ring_4_117397 00:09:31.984 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:31.984 associated memzone info: size: 1.000366 MiB name: RG_ring_5_117397 00:09:31.984 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:31.984 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_117397 00:09:31.984 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:31.984 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:31.984 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:31.984 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:31.984 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:31.984 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:31.984 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:31.984 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_117397 00:09:31.984 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:31.984 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:31.984 element at address: 0x200027e67240 with size: 0.023743 MiB 00:09:31.984 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:31.984 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:31.984 associated memzone info: size: 0.015991 MiB name: RG_ring_3_117397 00:09:31.984 element at address: 0x200027e6d380 with size: 0.002441 MiB 00:09:31.984 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:31.984 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:31.984 associated memzone info: size: 0.000183 MiB name: MP_msgpool_117397 00:09:31.984 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:31.984 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_117397 00:09:31.984 element at address: 0x200027e6de40 with size: 0.000305 MiB 00:09:31.984 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:31.984 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:31.984 02:32:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 117397 00:09:31.984 02:32:56 -- common/autotest_common.sh@926 -- # '[' -z 117397 ']' 00:09:31.984 02:32:56 -- common/autotest_common.sh@930 -- # kill -0 117397 00:09:31.984 02:32:56 -- common/autotest_common.sh@931 -- # uname 00:09:31.984 02:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:31.984 02:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117397 00:09:31.984 02:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:31.984 killing process with pid 117397 00:09:31.984 02:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:31.984 02:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117397' 00:09:31.984 02:32:56 -- common/autotest_common.sh@945 -- # kill 117397 00:09:31.984 02:32:56 -- common/autotest_common.sh@950 -- # wait 117397 00:09:32.550 00:09:32.550 real 0m1.635s 00:09:32.550 user 0m1.698s 00:09:32.550 sys 0m0.457s 00:09:32.550 ************************************ 00:09:32.550 END TEST dpdk_mem_utility 00:09:32.550 ************************************ 00:09:32.550 02:32:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.550 02:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 02:32:57 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:32.550 02:32:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.550 02:32:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.550 02:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 ************************************ 00:09:32.550 START TEST event 00:09:32.550 ************************************ 00:09:32.550 02:32:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:32.550 * Looking for test storage... 
00:09:32.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:32.550 02:32:57 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:32.550 02:32:57 -- bdev/nbd_common.sh@6 -- # set -e 00:09:32.550 02:32:57 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:32.550 02:32:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:32.550 02:32:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.550 02:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 ************************************ 00:09:32.550 START TEST event_perf 00:09:32.550 ************************************ 00:09:32.550 02:32:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:32.550 Running I/O for 1 seconds...[2024-07-11 02:32:57.531278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:32.550 [2024-07-11 02:32:57.532022] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117481 ] 00:09:32.818 [2024-07-11 02:32:57.700763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.818 [2024-07-11 02:32:57.779542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.818 [2024-07-11 02:32:57.779631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.818 [2024-07-11 02:32:57.779789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.818 [2024-07-11 02:32:57.779788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.756 Running I/O for 1 seconds... 00:09:33.756 lcore 0: 135693 00:09:33.756 lcore 1: 135693 00:09:33.756 lcore 2: 135694 00:09:33.756 lcore 3: 135694 00:09:34.014 done. 00:09:34.014 00:09:34.014 real 0m1.380s 00:09:34.014 user 0m4.178s 00:09:34.014 sys 0m0.104s 00:09:34.014 02:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.014 02:32:58 -- common/autotest_common.sh@10 -- # set +x 00:09:34.014 ************************************ 00:09:34.014 END TEST event_perf 00:09:34.014 ************************************ 00:09:34.014 02:32:58 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:34.014 02:32:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:34.014 02:32:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.014 02:32:58 -- common/autotest_common.sh@10 -- # set +x 00:09:34.014 ************************************ 00:09:34.014 START TEST event_reactor 00:09:34.014 ************************************ 00:09:34.014 02:32:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:34.014 [2024-07-11 02:32:58.961418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:34.014 [2024-07-11 02:32:58.961689] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117526 ] 00:09:34.014 [2024-07-11 02:32:59.103946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.272 [2024-07-11 02:32:59.175611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.206 test_start 00:09:35.206 oneshot 00:09:35.206 tick 100 00:09:35.206 tick 100 00:09:35.206 tick 250 00:09:35.206 tick 100 00:09:35.206 tick 100 00:09:35.206 tick 100 00:09:35.206 tick 250 00:09:35.206 tick 500 00:09:35.206 tick 100 00:09:35.206 tick 100 00:09:35.206 tick 250 00:09:35.206 tick 100 00:09:35.206 tick 100 00:09:35.206 test_end 00:09:35.206 00:09:35.206 real 0m1.339s 00:09:35.206 user 0m1.138s 00:09:35.206 sys 0m0.096s 00:09:35.206 02:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.206 02:33:00 -- common/autotest_common.sh@10 -- # set +x 00:09:35.206 ************************************ 00:09:35.206 END TEST event_reactor 00:09:35.206 ************************************ 00:09:35.464 02:33:00 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:35.464 02:33:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:35.464 02:33:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.464 02:33:00 -- common/autotest_common.sh@10 -- # set +x 00:09:35.464 ************************************ 00:09:35.464 START TEST event_reactor_perf 00:09:35.464 ************************************ 00:09:35.464 02:33:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:35.464 [2024-07-11 02:33:00.359489] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:35.464 [2024-07-11 02:33:00.359753] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117594 ] 00:09:35.464 [2024-07-11 02:33:00.509896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.722 [2024-07-11 02:33:00.585688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.657 test_start 00:09:36.657 test_end 00:09:36.657 Performance: 314235 events per second 00:09:36.657 00:09:36.657 real 0m1.350s 00:09:36.657 user 0m1.170s 00:09:36.657 sys 0m0.080s 00:09:36.657 02:33:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.657 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:09:36.657 ************************************ 00:09:36.657 END TEST event_reactor_perf 00:09:36.657 ************************************ 00:09:36.657 02:33:01 -- event/event.sh@49 -- # uname -s 00:09:36.657 02:33:01 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:36.658 02:33:01 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:36.658 02:33:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:36.658 02:33:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.658 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:09:36.658 ************************************ 00:09:36.658 START TEST event_scheduler 00:09:36.658 ************************************ 00:09:36.658 02:33:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:36.917 * Looking for test storage... 00:09:36.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:36.917 02:33:01 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:36.917 02:33:01 -- scheduler/scheduler.sh@35 -- # scheduler_pid=117658 00:09:36.917 02:33:01 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:36.917 02:33:01 -- scheduler/scheduler.sh@37 -- # waitforlisten 117658 00:09:36.917 02:33:01 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:36.917 02:33:01 -- common/autotest_common.sh@819 -- # '[' -z 117658 ']' 00:09:36.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.917 02:33:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.917 02:33:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.917 02:33:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.917 02:33:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.917 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:09:36.917 [2024-07-11 02:33:01.886700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:36.917 [2024-07-11 02:33:01.886971] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117658 ] 00:09:37.177 [2024-07-11 02:33:02.059747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.177 [2024-07-11 02:33:02.146308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.177 [2024-07-11 02:33:02.146414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.177 [2024-07-11 02:33:02.146523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.177 [2024-07-11 02:33:02.146539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.113 02:33:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.113 02:33:02 -- common/autotest_common.sh@852 -- # return 0 00:09:38.113 02:33:02 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:38.113 02:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.113 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.113 POWER: Env isn't set yet! 00:09:38.113 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:38.113 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:38.113 POWER: Cannot set governor of lcore 0 to userspace 00:09:38.113 POWER: Attempting to initialise PSTAT power management... 00:09:38.113 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:38.113 POWER: Cannot set governor of lcore 0 to performance 00:09:38.113 POWER: Attempting to initialise CPPC power management... 00:09:38.113 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:38.113 POWER: Cannot set governor of lcore 0 to userspace 00:09:38.113 POWER: Attempting to initialise VM power management... 00:09:38.113 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:38.113 POWER: Unable to set Power Management Environment for lcore 0 00:09:38.113 [2024-07-11 02:33:02.874161] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:38.113 [2024-07-11 02:33:02.874230] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:38.113 [2024-07-11 02:33:02.874265] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:38.113 [2024-07-11 02:33:02.874341] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:38.114 [2024-07-11 02:33:02.874376] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:38.114 [2024-07-11 02:33:02.874393] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:38.114 02:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:02 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:38.114 02:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 [2024-07-11 02:33:02.968192] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
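The sequence just traced is the standard --wait-for-rpc bring-up: the scheduler app starts paused at the RPC stage, the dynamic scheduler is selected over the RPC socket, and only then is framework initialization completed. A minimal hand-run sketch of the same bring-up (binary path, flags, and RPC names exactly as traced in this run; the test's rpc_cmd wraps the same scripts/rpc.py calls):

  # Start the app paused at the RPC stage, then configure and resume it.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init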
00:09:38.114 02:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:02 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:38.114 02:33:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.114 02:33:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.114 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 ************************************ 00:09:38.114 START TEST scheduler_create_thread 00:09:38.114 ************************************ 00:09:38.114 02:33:02 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:38.114 02:33:02 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:38.114 02:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 2 00:09:38.114 02:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:02 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:38.114 02:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 3 00:09:38.114 02:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:02 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:38.114 02:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 4 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 5 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 6 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 7 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 8 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 9 00:09:38.114 
02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 10 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.114 02:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.114 02:33:03 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:38.114 02:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.114 02:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:39.052 02:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.052 02:33:04 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:39.052 02:33:04 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:39.052 02:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.052 02:33:04 -- common/autotest_common.sh@10 -- # set +x 00:09:40.425 02:33:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.425 00:09:40.425 real 0m2.148s 00:09:40.425 user 0m0.006s 00:09:40.425 sys 0m0.004s 00:09:40.425 ************************************ 00:09:40.425 END TEST scheduler_create_thread 00:09:40.425 ************************************ 00:09:40.425 02:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.425 02:33:05 -- common/autotest_common.sh@10 -- # set +x 00:09:40.425 02:33:05 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:40.425 02:33:05 -- scheduler/scheduler.sh@46 -- # killprocess 117658 00:09:40.425 02:33:05 -- common/autotest_common.sh@926 -- # '[' -z 117658 ']' 00:09:40.425 02:33:05 -- common/autotest_common.sh@930 -- # kill -0 117658 00:09:40.425 02:33:05 -- common/autotest_common.sh@931 -- # uname 00:09:40.425 02:33:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:40.425 02:33:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117658 00:09:40.425 02:33:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:40.425 02:33:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:40.425 killing process with pid 117658 00:09:40.425 02:33:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117658' 00:09:40.425 02:33:05 -- common/autotest_common.sh@945 -- # kill 117658 00:09:40.425 02:33:05 -- common/autotest_common.sh@950 -- # wait 117658 00:09:40.683 [2024-07-11 02:33:05.608343] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
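The scheduler_create_thread subtest above exercises the whole thread lifecycle over the plugin RPC: pinned active and idle threads on each core, a part-active thread, and finally a thread created and deleted by id. Condensed to one hand-run cycle (same plugin and flags as traced; the ids 11/12 seen above are simply what the create calls returned on this run, so a sketch captures the returned id instead):

  # scheduler_thread_create prints the new thread id; feed it to set_active/delete.
  tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $tid 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $tid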
00:09:40.943 ************************************ 00:09:40.943 END TEST event_scheduler 00:09:40.943 ************************************ 00:09:40.943 00:09:40.943 real 0m4.133s 00:09:40.943 user 0m7.529s 00:09:40.943 sys 0m0.389s 00:09:40.943 02:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.943 02:33:05 -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 02:33:05 -- event/event.sh@51 -- # modprobe -n nbd 00:09:40.943 02:33:05 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:40.943 02:33:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.943 02:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.943 02:33:05 -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 ************************************ 00:09:40.943 START TEST app_repeat 00:09:40.943 ************************************ 00:09:40.943 02:33:05 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:40.943 02:33:05 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.943 02:33:05 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:09:40.943 02:33:05 -- event/event.sh@13 -- # local nbd_list 00:09:40.943 02:33:05 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:09:40.943 02:33:05 -- event/event.sh@14 -- # local bdev_list 00:09:40.943 02:33:05 -- event/event.sh@15 -- # local repeat_times=4 00:09:40.943 02:33:05 -- event/event.sh@17 -- # modprobe nbd 00:09:40.943 02:33:05 -- event/event.sh@19 -- # repeat_pid=117769 00:09:40.943 02:33:05 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:40.943 02:33:05 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.943 Process app_repeat pid: 117769 00:09:40.943 02:33:05 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 117769' 00:09:40.943 02:33:05 -- event/event.sh@23 -- # for i in {0..2} 00:09:40.943 spdk_app_start Round 0 00:09:40.943 02:33:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:40.943 02:33:05 -- event/event.sh@25 -- # waitforlisten 117769 /var/tmp/spdk-nbd.sock 00:09:40.943 02:33:05 -- common/autotest_common.sh@819 -- # '[' -z 117769 ']' 00:09:40.943 02:33:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:40.943 02:33:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:40.943 02:33:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:40.943 02:33:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.943 02:33:05 -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 [2024-07-11 02:33:05.958779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:40.943 [2024-07-11 02:33:05.958964] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117769 ] 00:09:41.202 [2024-07-11 02:33:06.101222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.202 [2024-07-11 02:33:06.175056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.202 [2024-07-11 02:33:06.175056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.137 02:33:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.138 02:33:06 -- common/autotest_common.sh@852 -- # return 0 00:09:42.138 02:33:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.138 Malloc0 00:09:42.138 02:33:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.396 Malloc1 00:09:42.396 02:33:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@12 -- # local i 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.396 02:33:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:42.655 /dev/nbd0 00:09:42.655 02:33:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.655 02:33:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.655 02:33:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:42.655 02:33:07 -- common/autotest_common.sh@857 -- # local i 00:09:42.655 02:33:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:42.655 02:33:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:42.655 02:33:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:42.655 02:33:07 -- common/autotest_common.sh@861 -- # break 00:09:42.655 02:33:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:42.655 02:33:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:42.655 02:33:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.655 1+0 records in 00:09:42.655 1+0 records out 00:09:42.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215168 s, 19.0 MB/s 00:09:42.655 02:33:07 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.655 02:33:07 -- common/autotest_common.sh@874 -- # size=4096 00:09:42.655 02:33:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.655 02:33:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:42.655 02:33:07 -- common/autotest_common.sh@877 -- # return 0 00:09:42.655 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.655 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.655 02:33:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:42.914 /dev/nbd1 00:09:42.914 02:33:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.914 02:33:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.914 02:33:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:42.914 02:33:07 -- common/autotest_common.sh@857 -- # local i 00:09:42.914 02:33:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:42.914 02:33:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:42.914 02:33:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:42.914 02:33:07 -- common/autotest_common.sh@861 -- # break 00:09:42.914 02:33:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:42.915 02:33:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:42.915 02:33:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.915 1+0 records in 00:09:42.915 1+0 records out 00:09:42.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271776 s, 15.1 MB/s 00:09:42.915 02:33:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.915 02:33:07 -- common/autotest_common.sh@874 -- # size=4096 00:09:42.915 02:33:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.915 02:33:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:42.915 02:33:07 -- common/autotest_common.sh@877 -- # return 0 00:09:42.915 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.915 02:33:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.915 02:33:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.915 02:33:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.915 02:33:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:43.174 { 00:09:43.174 "nbd_device": "/dev/nbd0", 00:09:43.174 "bdev_name": "Malloc0" 00:09:43.174 }, 00:09:43.174 { 00:09:43.174 "nbd_device": "/dev/nbd1", 00:09:43.174 "bdev_name": "Malloc1" 00:09:43.174 } 00:09:43.174 ]' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:43.174 { 00:09:43.174 "nbd_device": "/dev/nbd0", 00:09:43.174 "bdev_name": "Malloc0" 00:09:43.174 }, 00:09:43.174 { 00:09:43.174 "nbd_device": "/dev/nbd1", 00:09:43.174 "bdev_name": "Malloc1" 00:09:43.174 } 00:09:43.174 ]' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:43.174 /dev/nbd1' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:43.174 /dev/nbd1' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.174 
02:33:08 -- bdev/nbd_common.sh@65 -- # count=2 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@95 -- # count=2 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:43.174 02:33:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:43.434 256+0 records in 00:09:43.434 256+0 records out 00:09:43.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753306 s, 139 MB/s 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:43.434 256+0 records in 00:09:43.434 256+0 records out 00:09:43.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253523 s, 41.4 MB/s 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:43.434 256+0 records in 00:09:43.434 256+0 records out 00:09:43.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0378243 s, 27.7 MB/s 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@51 -- # local i 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.434 02:33:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.693 
02:33:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@41 -- # break 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.693 02:33:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@41 -- # break 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.952 02:33:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@65 -- # true 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@65 -- # count=0 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@104 -- # count=0 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:44.211 02:33:09 -- bdev/nbd_common.sh@109 -- # return 0 00:09:44.211 02:33:09 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:44.779 02:33:09 -- event/event.sh@35 -- # sleep 3 00:09:44.779 [2024-07-11 02:33:09.787440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.779 [2024-07-11 02:33:09.827428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.779 [2024-07-11 02:33:09.827434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.037 [2024-07-11 02:33:09.880647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:45.037 [2024-07-11 02:33:09.881079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
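Each app_repeat round repeats the nbd cycle just traced for Round 0: export both malloc bdevs over nbd, write a random pattern, byte-compare it back, then stop the devices. The data-verify core of a round, condensed from the dd/cmp calls traced above:

  # Write 1 MiB of random data to each exported device and verify it back.
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest $nbd
  done
  rm nbdrandtest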
00:09:47.570 spdk_app_start Round 1 00:09:47.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:47.570 02:33:12 -- event/event.sh@23 -- # for i in {0..2} 00:09:47.570 02:33:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:47.570 02:33:12 -- event/event.sh@25 -- # waitforlisten 117769 /var/tmp/spdk-nbd.sock 00:09:47.570 02:33:12 -- common/autotest_common.sh@819 -- # '[' -z 117769 ']' 00:09:47.570 02:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.570 02:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.570 02:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:47.570 02:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.570 02:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:47.828 02:33:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:47.828 02:33:12 -- common/autotest_common.sh@852 -- # return 0 00:09:47.828 02:33:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:48.087 Malloc0 00:09:48.087 02:33:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:48.345 Malloc1 00:09:48.345 02:33:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@12 -- # local i 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.345 02:33:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:48.609 /dev/nbd0 00:09:48.609 02:33:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.609 02:33:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.609 02:33:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:48.609 02:33:13 -- common/autotest_common.sh@857 -- # local i 00:09:48.609 02:33:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:48.609 02:33:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:48.609 02:33:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:48.609 02:33:13 -- common/autotest_common.sh@861 -- # break 00:09:48.609 02:33:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:48.609 02:33:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:48.609 02:33:13 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.609 1+0 records in 00:09:48.609 1+0 records out 00:09:48.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432389 s, 9.5 MB/s 00:09:48.609 02:33:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.609 02:33:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:48.609 02:33:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.609 02:33:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:48.609 02:33:13 -- common/autotest_common.sh@877 -- # return 0 00:09:48.609 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.609 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.609 02:33:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:48.866 /dev/nbd1 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:48.866 02:33:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:48.866 02:33:13 -- common/autotest_common.sh@857 -- # local i 00:09:48.866 02:33:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:48.866 02:33:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:48.866 02:33:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:48.866 02:33:13 -- common/autotest_common.sh@861 -- # break 00:09:48.866 02:33:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:48.866 02:33:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:48.866 02:33:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.866 1+0 records in 00:09:48.866 1+0 records out 00:09:48.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552989 s, 7.4 MB/s 00:09:48.866 02:33:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.866 02:33:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:48.866 02:33:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.866 02:33:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:48.866 02:33:13 -- common/autotest_common.sh@877 -- # return 0 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.866 02:33:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:49.124 { 00:09:49.124 "nbd_device": "/dev/nbd0", 00:09:49.124 "bdev_name": "Malloc0" 00:09:49.124 }, 00:09:49.124 { 00:09:49.124 "nbd_device": "/dev/nbd1", 00:09:49.124 "bdev_name": "Malloc1" 00:09:49.124 } 00:09:49.124 ]' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:49.124 { 00:09:49.124 "nbd_device": "/dev/nbd0", 00:09:49.124 "bdev_name": "Malloc0" 00:09:49.124 }, 00:09:49.124 { 00:09:49.124 "nbd_device": "/dev/nbd1", 00:09:49.124 "bdev_name": "Malloc1" 00:09:49.124 } 00:09:49.124 ]' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:49.124 /dev/nbd1' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:49.124 /dev/nbd1' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@65 -- # count=2 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@95 -- # count=2 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:49.124 256+0 records in 00:09:49.124 256+0 records out 00:09:49.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655347 s, 160 MB/s 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.124 02:33:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:49.382 256+0 records in 00:09:49.382 256+0 records out 00:09:49.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261159 s, 40.2 MB/s 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:49.382 256+0 records in 00:09:49.382 256+0 records out 00:09:49.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376544 s, 27.8 MB/s 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:49.382 02:33:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@51 -- # local i 00:09:49.383 
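[The count checks bracketing each verify pass ask the target for its exported disks over the RPC socket and count the /dev/nbd entries. A sketch of the sequence as traced; note the `|| true`, visible in the trace, which keeps the step alive when grep -c finds zero matches and exits non-zero:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
if [ "$count" -ne 2 ]; then    # expect exactly /dev/nbd0 and /dev/nbd1
    echo "expected 2 nbd devices, got $count" >&2
    exit 1
fi
]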
02:33:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.383 02:33:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@41 -- # break 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.641 02:33:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@41 -- # break 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.900 02:33:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.158 02:33:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.416 02:33:15 -- bdev/nbd_common.sh@65 -- # true 00:09:50.416 02:33:15 -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.416 02:33:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.416 02:33:15 -- bdev/nbd_common.sh@104 -- # count=0 00:09:50.417 02:33:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:50.417 02:33:15 -- bdev/nbd_common.sh@109 -- # return 0 00:09:50.417 02:33:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:50.675 02:33:15 -- event/event.sh@35 -- # sleep 3 00:09:50.675 [2024-07-11 02:33:15.695806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.934 [2024-07-11 02:33:15.769323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.934 [2024-07-11 02:33:15.769328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.934 [2024-07-11 02:33:15.828788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' 
already registered. 00:09:50.934 [2024-07-11 02:33:15.829204] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:53.465 spdk_app_start Round 2 00:09:53.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:53.465 02:33:18 -- event/event.sh@23 -- # for i in {0..2} 00:09:53.465 02:33:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:53.465 02:33:18 -- event/event.sh@25 -- # waitforlisten 117769 /var/tmp/spdk-nbd.sock 00:09:53.465 02:33:18 -- common/autotest_common.sh@819 -- # '[' -z 117769 ']' 00:09:53.465 02:33:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:53.465 02:33:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:53.465 02:33:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:53.465 02:33:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:53.465 02:33:18 -- common/autotest_common.sh@10 -- # set +x 00:09:53.723 02:33:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.723 02:33:18 -- common/autotest_common.sh@852 -- # return 0 00:09:53.723 02:33:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:53.982 Malloc0 00:09:53.982 02:33:19 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.240 Malloc1 00:09:54.240 02:33:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@12 -- # local i 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.240 02:33:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:54.498 /dev/nbd0 00:09:54.498 02:33:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:54.498 02:33:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:54.498 02:33:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:54.498 02:33:19 -- common/autotest_common.sh@857 -- # local i 00:09:54.498 02:33:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.498 02:33:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.498 02:33:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:54.498 02:33:19 -- common/autotest_common.sh@861 -- # break 00:09:54.498 02:33:19 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.498 02:33:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.499 02:33:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:54.499 1+0 records in 00:09:54.499 1+0 records out 00:09:54.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483316 s, 8.5 MB/s 00:09:54.499 02:33:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.499 02:33:19 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.499 02:33:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.499 02:33:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.499 02:33:19 -- common/autotest_common.sh@877 -- # return 0 00:09:54.499 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.499 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.499 02:33:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:54.756 /dev/nbd1 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:54.756 02:33:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:54.756 02:33:19 -- common/autotest_common.sh@857 -- # local i 00:09:54.756 02:33:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.756 02:33:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.756 02:33:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:54.756 02:33:19 -- common/autotest_common.sh@861 -- # break 00:09:54.756 02:33:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.756 02:33:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.756 02:33:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:54.756 1+0 records in 00:09:54.756 1+0 records out 00:09:54.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536062 s, 7.6 MB/s 00:09:54.756 02:33:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.756 02:33:19 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.756 02:33:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.756 02:33:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.756 02:33:19 -- common/autotest_common.sh@877 -- # return 0 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.756 02:33:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.014 02:33:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:55.014 { 00:09:55.014 "nbd_device": "/dev/nbd0", 00:09:55.014 "bdev_name": "Malloc0" 00:09:55.014 }, 00:09:55.014 { 00:09:55.014 "nbd_device": "/dev/nbd1", 00:09:55.014 "bdev_name": "Malloc1" 00:09:55.014 } 00:09:55.014 ]' 00:09:55.014 02:33:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:55.014 { 00:09:55.014 "nbd_device": "/dev/nbd0", 00:09:55.014 "bdev_name": "Malloc0" 00:09:55.014 }, 
00:09:55.014 { 00:09:55.014 "nbd_device": "/dev/nbd1", 00:09:55.014 "bdev_name": "Malloc1" 00:09:55.014 } 00:09:55.014 ]' 00:09:55.014 02:33:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:55.272 /dev/nbd1' 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:55.272 /dev/nbd1' 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@65 -- # count=2 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@95 -- # count=2 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:55.272 02:33:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:55.273 256+0 records in 00:09:55.273 256+0 records out 00:09:55.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0087875 s, 119 MB/s 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:55.273 256+0 records in 00:09:55.273 256+0 records out 00:09:55.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235508 s, 44.5 MB/s 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:55.273 256+0 records in 00:09:55.273 256+0 records out 00:09:55.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286592 s, 36.6 MB/s 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.273 02:33:20 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@51 -- # local i 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.273 02:33:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@41 -- # break 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.534 02:33:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:55.791 02:33:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@41 -- # break 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.792 02:33:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.050 02:33:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:56.050 02:33:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:56.050 02:33:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@65 -- # true 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@65 -- # count=0 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@104 -- # count=0 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:56.050 02:33:21 -- bdev/nbd_common.sh@109 -- # return 0 00:09:56.050 02:33:21 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:56.309 02:33:21 -- event/event.sh@35 -- # sleep 3 00:09:56.567 [2024-07-11 02:33:21.469301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.567 [2024-07-11 02:33:21.510785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.567 [2024-07-11 02:33:21.510793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.567 [2024-07-11 02:33:21.565515] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:09:56.567 [2024-07-11 02:33:21.565891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:59.848 02:33:24 -- event/event.sh@38 -- # waitforlisten 117769 /var/tmp/spdk-nbd.sock 00:09:59.848 02:33:24 -- common/autotest_common.sh@819 -- # '[' -z 117769 ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:59.848 02:33:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:59.848 02:33:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:59.848 02:33:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:59.848 02:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:59.848 02:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:59.848 02:33:24 -- common/autotest_common.sh@852 -- # return 0 00:09:59.848 02:33:24 -- event/event.sh@39 -- # killprocess 117769 00:09:59.848 02:33:24 -- common/autotest_common.sh@926 -- # '[' -z 117769 ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@930 -- # kill -0 117769 00:09:59.848 02:33:24 -- common/autotest_common.sh@931 -- # uname 00:09:59.848 02:33:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117769 00:09:59.848 killing process with pid 117769 00:09:59.848 02:33:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:59.848 02:33:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117769' 00:09:59.848 02:33:24 -- common/autotest_common.sh@945 -- # kill 117769 00:09:59.848 02:33:24 -- common/autotest_common.sh@950 -- # wait 117769 00:09:59.848 spdk_app_start is called in Round 0. 00:09:59.848 Shutdown signal received, stop current app iteration 00:09:59.848 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:09:59.848 spdk_app_start is called in Round 1. 00:09:59.848 Shutdown signal received, stop current app iteration 00:09:59.848 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:09:59.848 spdk_app_start is called in Round 2. 00:09:59.848 Shutdown signal received, stop current app iteration 00:09:59.848 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:09:59.848 spdk_app_start is called in Round 3. 
00:09:59.848 Shutdown signal received, stop current app iteration 00:09:59.848 ************************************ 00:09:59.848 END TEST app_repeat 00:09:59.848 ************************************ 00:09:59.848 02:33:24 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:59.848 02:33:24 -- event/event.sh@42 -- # return 0 00:09:59.848 00:09:59.848 real 0m18.876s 00:09:59.848 user 0m42.106s 00:09:59.848 sys 0m2.865s 00:09:59.848 02:33:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.848 02:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:59.848 02:33:24 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:59.848 02:33:24 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:59.848 02:33:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.848 02:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:59.848 ************************************ 00:09:59.848 START TEST cpu_locks 00:09:59.848 ************************************ 00:09:59.848 02:33:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:59.848 * Looking for test storage... 00:09:59.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:59.848 02:33:24 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:59.848 02:33:24 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:59.848 02:33:24 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:59.848 02:33:24 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:59.848 02:33:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:59.848 02:33:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.848 02:33:24 -- common/autotest_common.sh@10 -- # set +x 00:10:00.106 ************************************ 00:10:00.106 START TEST default_locks 00:10:00.106 ************************************ 00:10:00.106 02:33:24 -- common/autotest_common.sh@1104 -- # default_locks 00:10:00.106 02:33:24 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=118320 00:10:00.106 02:33:24 -- event/cpu_locks.sh@47 -- # waitforlisten 118320 00:10:00.106 02:33:24 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:00.106 02:33:24 -- common/autotest_common.sh@819 -- # '[' -z 118320 ']' 00:10:00.106 02:33:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.106 02:33:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:00.106 02:33:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.106 02:33:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:00.106 02:33:24 -- common/autotest_common.sh@10 -- # set +x 00:10:00.106 [2024-07-11 02:33:25.000469] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
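[Every app iteration above, and every cpu_locks case below, is torn down by the killprocess helper. Reduced to the Linux path this run takes (reconstructed from the autotest_common.sh xtrace; the sudo special case is elided):

killprocess() {
    local pid=$1
    kill -0 "$pid"    # fail fast if the pid is already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    fi
    # (the traced helper branches when process_name is sudo; elided here)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap the child so the test can assert on its exit
}
]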
00:10:00.106 [2024-07-11 02:33:25.001455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118320 ] 00:10:00.106 [2024-07-11 02:33:25.143447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.363 [2024-07-11 02:33:25.221181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:00.363 [2024-07-11 02:33:25.221649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.995 02:33:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.995 02:33:25 -- common/autotest_common.sh@852 -- # return 0 00:10:00.995 02:33:25 -- event/cpu_locks.sh@49 -- # locks_exist 118320 00:10:00.995 02:33:25 -- event/cpu_locks.sh@22 -- # lslocks -p 118320 00:10:00.995 02:33:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:01.252 02:33:26 -- event/cpu_locks.sh@50 -- # killprocess 118320 00:10:01.252 02:33:26 -- common/autotest_common.sh@926 -- # '[' -z 118320 ']' 00:10:01.252 02:33:26 -- common/autotest_common.sh@930 -- # kill -0 118320 00:10:01.252 02:33:26 -- common/autotest_common.sh@931 -- # uname 00:10:01.252 02:33:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:01.252 02:33:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118320 00:10:01.252 02:33:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:01.252 02:33:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:01.252 killing process with pid 118320 00:10:01.252 02:33:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118320' 00:10:01.252 02:33:26 -- common/autotest_common.sh@945 -- # kill 118320 00:10:01.252 02:33:26 -- common/autotest_common.sh@950 -- # wait 118320 00:10:01.819 02:33:26 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 118320 00:10:01.819 02:33:26 -- common/autotest_common.sh@640 -- # local es=0 00:10:01.819 02:33:26 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118320 00:10:01.819 02:33:26 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:01.819 02:33:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.819 02:33:26 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:01.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.819 ERROR: process (pid: 118320) is no longer running 00:10:01.819 ************************************ 00:10:01.819 END TEST default_locks 00:10:01.819 ************************************ 00:10:01.819 02:33:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.819 02:33:26 -- common/autotest_common.sh@643 -- # waitforlisten 118320 00:10:01.819 02:33:26 -- common/autotest_common.sh@819 -- # '[' -z 118320 ']' 00:10:01.819 02:33:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.819 02:33:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:01.819 02:33:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
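[The locks_exist check traced above is the core assertion of this test file: a target started with -m 0x1 takes a POSIX lock on a per-core file matching /var/tmp/spdk_cpu_lock* (the same glob the teardown checks), and lslocks can attribute that lock to the pid. As traced:

locks_exist() {
    local pid=$1
    # lslocks lists file locks per process; the lock-file name is enough
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
]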
00:10:01.819 02:33:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:01.819 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:10:01.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118320) - No such process 00:10:01.819 02:33:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:01.819 02:33:26 -- common/autotest_common.sh@852 -- # return 1 00:10:01.819 02:33:26 -- common/autotest_common.sh@643 -- # es=1 00:10:01.819 02:33:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:01.819 02:33:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:01.819 02:33:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:01.819 02:33:26 -- event/cpu_locks.sh@54 -- # no_locks 00:10:01.819 02:33:26 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:10:01.819 02:33:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:01.819 02:33:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:01.819 00:10:01.819 real 0m1.711s 00:10:01.819 user 0m1.783s 00:10:01.819 sys 0m0.525s 00:10:01.819 02:33:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.819 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:10:01.819 02:33:26 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:01.819 02:33:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:01.819 02:33:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.819 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:10:01.819 ************************************ 00:10:01.819 START TEST default_locks_via_rpc 00:10:01.819 ************************************ 00:10:01.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.819 02:33:26 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:10:01.819 02:33:26 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=118381 00:10:01.819 02:33:26 -- event/cpu_locks.sh@63 -- # waitforlisten 118381 00:10:01.819 02:33:26 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.819 02:33:26 -- common/autotest_common.sh@819 -- # '[' -z 118381 ']' 00:10:01.819 02:33:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.819 02:33:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:01.819 02:33:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.819 02:33:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:01.819 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:10:01.819 [2024-07-11 02:33:26.771569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
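[The negative half of default_locks, traced just above, asserts two things: waitforlisten on the killed pid must fail, and every core lock file must be gone. Sketches of both helpers, reduced from the traced versions (the real NOT also validates its argument and inspects the exit-code range):

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

no_locks() {
    local lock_files=(/var/tmp/spdk_cpu_lock*)    # empty array via nullglob, as in this run
    (( ${#lock_files[@]} == 0 ))                  # true only once every lock file is gone
}
]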
00:10:01.819 [2024-07-11 02:33:26.772066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118381 ] 00:10:02.078 [2024-07-11 02:33:26.918939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.078 [2024-07-11 02:33:27.000679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.078 [2024-07-11 02:33:27.001158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.644 02:33:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:02.644 02:33:27 -- common/autotest_common.sh@852 -- # return 0 00:10:02.644 02:33:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:02.644 02:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.644 02:33:27 -- common/autotest_common.sh@10 -- # set +x 00:10:02.645 02:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.645 02:33:27 -- event/cpu_locks.sh@67 -- # no_locks 00:10:02.645 02:33:27 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:10:02.645 02:33:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:02.645 02:33:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:02.645 02:33:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:02.645 02:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.645 02:33:27 -- common/autotest_common.sh@10 -- # set +x 00:10:02.904 02:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.904 02:33:27 -- event/cpu_locks.sh@71 -- # locks_exist 118381 00:10:02.904 02:33:27 -- event/cpu_locks.sh@22 -- # lslocks -p 118381 00:10:02.904 02:33:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.904 02:33:27 -- event/cpu_locks.sh@73 -- # killprocess 118381 00:10:02.904 02:33:27 -- common/autotest_common.sh@926 -- # '[' -z 118381 ']' 00:10:02.904 02:33:27 -- common/autotest_common.sh@930 -- # kill -0 118381 00:10:02.904 02:33:27 -- common/autotest_common.sh@931 -- # uname 00:10:02.904 02:33:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:02.904 02:33:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118381 00:10:02.904 02:33:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:02.904 killing process with pid 118381 00:10:02.904 02:33:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:02.904 02:33:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118381' 00:10:02.904 02:33:27 -- common/autotest_common.sh@945 -- # kill 118381 00:10:02.904 02:33:27 -- common/autotest_common.sh@950 -- # wait 118381 00:10:03.471 ************************************ 00:10:03.471 END TEST default_locks_via_rpc 00:10:03.471 ************************************ 00:10:03.471 00:10:03.471 real 0m1.675s 00:10:03.471 user 0m1.729s 00:10:03.471 sys 0m0.541s 00:10:03.471 02:33:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.471 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:10:03.471 02:33:28 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:03.471 02:33:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:03.471 02:33:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.471 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:10:03.471 
************************************ 00:10:03.471 START TEST non_locking_app_on_locked_coremask 00:10:03.471 ************************************ 00:10:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.471 02:33:28 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:10:03.471 02:33:28 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=118429 00:10:03.471 02:33:28 -- event/cpu_locks.sh@81 -- # waitforlisten 118429 /var/tmp/spdk.sock 00:10:03.471 02:33:28 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:03.471 02:33:28 -- common/autotest_common.sh@819 -- # '[' -z 118429 ']' 00:10:03.471 02:33:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.471 02:33:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:03.471 02:33:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.471 02:33:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:03.471 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:10:03.471 [2024-07-11 02:33:28.488290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:03.471 [2024-07-11 02:33:28.488783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118429 ] 00:10:03.730 [2024-07-11 02:33:28.633782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.730 [2024-07-11 02:33:28.719305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:03.730 [2024-07-11 02:33:28.719704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:04.666 02:33:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:04.666 02:33:29 -- common/autotest_common.sh@852 -- # return 0 00:10:04.666 02:33:29 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=118450 00:10:04.666 02:33:29 -- event/cpu_locks.sh@85 -- # waitforlisten 118450 /var/tmp/spdk2.sock 00:10:04.666 02:33:29 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:04.666 02:33:29 -- common/autotest_common.sh@819 -- # '[' -z 118450 ']' 00:10:04.666 02:33:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.666 02:33:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:04.666 02:33:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:04.666 02:33:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:04.666 02:33:29 -- common/autotest_common.sh@10 -- # set +x 00:10:04.666 [2024-07-11 02:33:29.432062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:04.666 [2024-07-11 02:33:29.433046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118450 ] 00:10:04.666 [2024-07-11 02:33:29.565764] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
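[The setup just traced runs two targets on the same core mask without a collision: the first claims the core-0 lock, while the second is started with --disable-cpumask-locks and its own RPC socket, so it never contends for the lock. The launch sequence in sketch form (binary path, flags, and sockets from the trace; the waitforlisten steps are elided):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                # first instance: takes the core-0 lock
spdk_tgt_pid=$!
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!                  # same mask, but no lock is taken
]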
00:10:04.666 [2024-07-11 02:33:29.565820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.666 [2024-07-11 02:33:29.692998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.666 [2024-07-11 02:33:29.693214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.602 02:33:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:05.602 02:33:30 -- common/autotest_common.sh@852 -- # return 0 00:10:05.602 02:33:30 -- event/cpu_locks.sh@87 -- # locks_exist 118429 00:10:05.602 02:33:30 -- event/cpu_locks.sh@22 -- # lslocks -p 118429 00:10:05.602 02:33:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.860 02:33:30 -- event/cpu_locks.sh@89 -- # killprocess 118429 00:10:05.860 02:33:30 -- common/autotest_common.sh@926 -- # '[' -z 118429 ']' 00:10:05.860 02:33:30 -- common/autotest_common.sh@930 -- # kill -0 118429 00:10:05.860 02:33:30 -- common/autotest_common.sh@931 -- # uname 00:10:05.860 02:33:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:05.860 02:33:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118429 00:10:05.860 02:33:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:05.860 02:33:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:05.860 killing process with pid 118429 00:10:05.860 02:33:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118429' 00:10:05.860 02:33:30 -- common/autotest_common.sh@945 -- # kill 118429 00:10:05.860 02:33:30 -- common/autotest_common.sh@950 -- # wait 118429 00:10:06.795 02:33:31 -- event/cpu_locks.sh@90 -- # killprocess 118450 00:10:06.795 02:33:31 -- common/autotest_common.sh@926 -- # '[' -z 118450 ']' 00:10:06.795 02:33:31 -- common/autotest_common.sh@930 -- # kill -0 118450 00:10:06.795 02:33:31 -- common/autotest_common.sh@931 -- # uname 00:10:06.795 02:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.795 02:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118450 00:10:06.795 02:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:06.795 killing process with pid 118450 00:10:06.795 02:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:06.795 02:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118450' 00:10:06.795 02:33:31 -- common/autotest_common.sh@945 -- # kill 118450 00:10:06.795 02:33:31 -- common/autotest_common.sh@950 -- # wait 118450 00:10:07.054 ************************************ 00:10:07.054 END TEST non_locking_app_on_locked_coremask 00:10:07.054 ************************************ 00:10:07.054 00:10:07.054 real 0m3.610s 00:10:07.054 user 0m3.862s 00:10:07.054 sys 0m1.017s 00:10:07.054 02:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.054 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 02:33:32 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:07.054 02:33:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:07.054 02:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.054 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 ************************************ 00:10:07.054 START TEST locking_app_on_unlocked_coremask 00:10:07.054 ************************************ 00:10:07.054 02:33:32 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:10:07.054 
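[Where the test starting here toggles the locks at launch time, the default_locks_via_rpc case a little further up toggled them at runtime over the RPC socket. The traced sequence, in sketch form (rpc_cmd is the autotest wrapper around rpc.py against /var/tmp/spdk.sock; locks_exist and no_locks as sketched above):

rpc_cmd framework_disable_cpumask_locks    # target drops its /var/tmp/spdk_cpu_lock* files
no_locks                                   # assert: no lock files remain
rpc_cmd framework_enable_cpumask_locks     # target re-claims the locks
locks_exist "$spdk_tgt_pid"                # assert: lslocks sees them again
]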
02:33:32 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=118542 00:10:07.054 02:33:32 -- event/cpu_locks.sh@99 -- # waitforlisten 118542 /var/tmp/spdk.sock 00:10:07.054 02:33:32 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:07.054 02:33:32 -- common/autotest_common.sh@819 -- # '[' -z 118542 ']' 00:10:07.054 02:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.054 02:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:07.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.054 02:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.054 02:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:07.054 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:10:07.312 [2024-07-11 02:33:32.151207] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:07.312 [2024-07-11 02:33:32.151468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118542 ] 00:10:07.312 [2024-07-11 02:33:32.297778] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:07.312 [2024-07-11 02:33:32.297860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.312 [2024-07-11 02:33:32.369726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:07.312 [2024-07-11 02:33:32.370041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.248 02:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:08.248 02:33:33 -- common/autotest_common.sh@852 -- # return 0 00:10:08.248 02:33:33 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=118563 00:10:08.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:08.248 02:33:33 -- event/cpu_locks.sh@103 -- # waitforlisten 118563 /var/tmp/spdk2.sock 00:10:08.248 02:33:33 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:08.248 02:33:33 -- common/autotest_common.sh@819 -- # '[' -z 118563 ']' 00:10:08.248 02:33:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.248 02:33:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.248 02:33:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.248 02:33:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.248 02:33:33 -- common/autotest_common.sh@10 -- # set +x 00:10:08.248 [2024-07-11 02:33:33.135244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
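[This case inverts the previous one: the first target opts out of the core locks, leaving the plain second instance free to claim core 0 itself, and locks_exist is then asserted against the second pid. Launch order in sketch form (flags and sockets from the trace):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # first: no locks taken
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second: claims the core-0 lock
spdk_tgt_pid2=$!
]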
00:10:08.248 [2024-07-11 02:33:33.135451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118563 ] 00:10:08.248 [2024-07-11 02:33:33.269692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.507 [2024-07-11 02:33:33.445854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.507 [2024-07-11 02:33:33.446119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.072 02:33:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.072 02:33:34 -- common/autotest_common.sh@852 -- # return 0 00:10:09.072 02:33:34 -- event/cpu_locks.sh@105 -- # locks_exist 118563 00:10:09.072 02:33:34 -- event/cpu_locks.sh@22 -- # lslocks -p 118563 00:10:09.072 02:33:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.638 02:33:34 -- event/cpu_locks.sh@107 -- # killprocess 118542 00:10:09.639 02:33:34 -- common/autotest_common.sh@926 -- # '[' -z 118542 ']' 00:10:09.639 02:33:34 -- common/autotest_common.sh@930 -- # kill -0 118542 00:10:09.639 02:33:34 -- common/autotest_common.sh@931 -- # uname 00:10:09.639 02:33:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:09.639 02:33:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118542 00:10:09.639 02:33:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:09.639 02:33:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:09.639 killing process with pid 118542 00:10:09.639 02:33:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118542' 00:10:09.639 02:33:34 -- common/autotest_common.sh@945 -- # kill 118542 00:10:09.639 02:33:34 -- common/autotest_common.sh@950 -- # wait 118542 00:10:10.586 02:33:35 -- event/cpu_locks.sh@108 -- # killprocess 118563 00:10:10.586 02:33:35 -- common/autotest_common.sh@926 -- # '[' -z 118563 ']' 00:10:10.586 02:33:35 -- common/autotest_common.sh@930 -- # kill -0 118563 00:10:10.586 02:33:35 -- common/autotest_common.sh@931 -- # uname 00:10:10.586 02:33:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:10.586 02:33:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118563 00:10:10.586 02:33:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:10.586 02:33:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:10.586 02:33:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118563' 00:10:10.586 killing process with pid 118563 00:10:10.586 02:33:35 -- common/autotest_common.sh@945 -- # kill 118563 00:10:10.586 02:33:35 -- common/autotest_common.sh@950 -- # wait 118563 00:10:11.153 00:10:11.153 real 0m3.855s 00:10:11.153 user 0m4.241s 00:10:11.153 sys 0m1.005s 00:10:11.153 02:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.153 02:33:35 -- common/autotest_common.sh@10 -- # set +x 00:10:11.153 ************************************ 00:10:11.153 END TEST locking_app_on_unlocked_coremask 00:10:11.153 ************************************ 00:10:11.153 02:33:35 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:11.153 02:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:11.153 02:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.153 02:33:35 -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.153 ************************************ 00:10:11.153 START TEST locking_app_on_locked_coremask 00:10:11.153 ************************************ 00:10:11.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.153 02:33:35 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:11.153 02:33:35 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=118632 00:10:11.153 02:33:35 -- event/cpu_locks.sh@116 -- # waitforlisten 118632 /var/tmp/spdk.sock 00:10:11.153 02:33:35 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.153 02:33:35 -- common/autotest_common.sh@819 -- # '[' -z 118632 ']' 00:10:11.153 02:33:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.153 02:33:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.153 02:33:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.153 02:33:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.153 02:33:35 -- common/autotest_common.sh@10 -- # set +x 00:10:11.153 [2024-07-11 02:33:36.057506] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:11.153 [2024-07-11 02:33:36.057998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118632 ] 00:10:11.153 [2024-07-11 02:33:36.202349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.411 [2024-07-11 02:33:36.292607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.411 [2024-07-11 02:33:36.292862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.976 02:33:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:11.976 02:33:37 -- common/autotest_common.sh@852 -- # return 0 00:10:11.976 02:33:37 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=118653 00:10:11.976 02:33:37 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 118653 /var/tmp/spdk2.sock 00:10:11.976 02:33:37 -- common/autotest_common.sh@640 -- # local es=0 00:10:11.976 02:33:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118653 /var/tmp/spdk2.sock 00:10:11.976 02:33:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:11.976 02:33:37 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:11.976 02:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.976 02:33:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:11.976 02:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.976 02:33:37 -- common/autotest_common.sh@643 -- # waitforlisten 118653 /var/tmp/spdk2.sock 00:10:11.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
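Here the expectation flips: locking_app_on_locked_coremask starts its first target without --disable-cpumask-locks, so pid 118632 holds the core 0 lock, and the harness wraps the second waitforlisten in NOT, a helper that inverts the exit status. The same expect-failure pattern in plain shell, as a sketch:

  # The second target must exit non-zero: core 0 is already locked
  if build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo 'unexpected: second target came up on a locked core' >&2
    exit 1
  fi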
00:10:11.976 02:33:37 -- common/autotest_common.sh@819 -- # '[' -z 118653 ']' 00:10:11.976 02:33:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.976 02:33:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.976 02:33:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.976 02:33:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.976 02:33:37 -- common/autotest_common.sh@10 -- # set +x 00:10:12.234 [2024-07-11 02:33:37.104759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:12.234 [2024-07-11 02:33:37.105067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118653 ] 00:10:12.234 [2024-07-11 02:33:37.247068] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 118632 has claimed it. 00:10:12.234 [2024-07-11 02:33:37.247167] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:12.800 ERROR: process (pid: 118653) is no longer running 00:10:12.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118653) - No such process 00:10:12.800 02:33:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:12.800 02:33:37 -- common/autotest_common.sh@852 -- # return 1 00:10:12.800 02:33:37 -- common/autotest_common.sh@643 -- # es=1 00:10:12.800 02:33:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:12.800 02:33:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:12.800 02:33:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:12.800 02:33:37 -- event/cpu_locks.sh@122 -- # locks_exist 118632 00:10:12.800 02:33:37 -- event/cpu_locks.sh@22 -- # lslocks -p 118632 00:10:12.800 02:33:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.057 02:33:37 -- event/cpu_locks.sh@124 -- # killprocess 118632 00:10:13.057 02:33:37 -- common/autotest_common.sh@926 -- # '[' -z 118632 ']' 00:10:13.057 02:33:37 -- common/autotest_common.sh@930 -- # kill -0 118632 00:10:13.057 02:33:37 -- common/autotest_common.sh@931 -- # uname 00:10:13.057 02:33:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:13.057 02:33:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118632 00:10:13.057 02:33:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:13.057 02:33:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:13.057 killing process with pid 118632 00:10:13.057 02:33:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118632' 00:10:13.057 02:33:37 -- common/autotest_common.sh@945 -- # kill 118632 00:10:13.057 02:33:37 -- common/autotest_common.sh@950 -- # wait 118632 00:10:13.624 00:10:13.624 real 0m2.450s 00:10:13.624 user 0m2.780s 00:10:13.624 sys 0m0.623s 00:10:13.624 02:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.624 02:33:38 -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 ************************************ 00:10:13.624 END TEST locking_app_on_locked_coremask 00:10:13.624 ************************************ 00:10:13.624 02:33:38 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:13.624 02:33:38 -- common/autotest_common.sh@1077 -- 
# '[' 2 -le 1 ']' 00:10:13.624 02:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.624 02:33:38 -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 ************************************ 00:10:13.624 START TEST locking_overlapped_coremask 00:10:13.624 ************************************ 00:10:13.624 02:33:38 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:13.624 02:33:38 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=118703 00:10:13.624 02:33:38 -- event/cpu_locks.sh@133 -- # waitforlisten 118703 /var/tmp/spdk.sock 00:10:13.624 02:33:38 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:13.624 02:33:38 -- common/autotest_common.sh@819 -- # '[' -z 118703 ']' 00:10:13.624 02:33:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.624 02:33:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:13.624 02:33:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.624 02:33:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:13.624 02:33:38 -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 [2024-07-11 02:33:38.553050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:13.624 [2024-07-11 02:33:38.553300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118703 ] 00:10:13.624 [2024-07-11 02:33:38.707290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.882 [2024-07-11 02:33:38.791276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:13.882 [2024-07-11 02:33:38.791716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.882 [2024-07-11 02:33:38.791867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.882 [2024-07-11 02:33:38.791870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.447 02:33:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:14.447 02:33:39 -- common/autotest_common.sh@852 -- # return 0 00:10:14.447 02:33:39 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=118725 00:10:14.447 02:33:39 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 118725 /var/tmp/spdk2.sock 00:10:14.447 02:33:39 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:14.447 02:33:39 -- common/autotest_common.sh@640 -- # local es=0 00:10:14.447 02:33:39 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118725 /var/tmp/spdk2.sock 00:10:14.447 02:33:39 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:14.447 02:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:14.447 02:33:39 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:14.447 02:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:14.447 02:33:39 -- common/autotest_common.sh@643 -- # waitforlisten 118725 /var/tmp/spdk2.sock 00:10:14.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
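The locks_exist checks in the traces above confirm a target still holds its per-core file locks by listing the locks owned by its pid and grepping for the spdk_cpu_lock prefix. A harness-style sketch (lslocks comes from util-linux; $pid is a placeholder):

  # Does the process still hold any /var/tmp/spdk_cpu_lock_* file lock?
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }
  locks_exist "$pid" && echo "pid $pid holds its core lock(s)"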
00:10:14.447 02:33:39 -- common/autotest_common.sh@819 -- # '[' -z 118725 ']' 00:10:14.447 02:33:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:14.447 02:33:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.447 02:33:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:14.447 02:33:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.447 02:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:14.704 [2024-07-11 02:33:39.584876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:14.704 [2024-07-11 02:33:39.585133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118725 ] 00:10:14.704 [2024-07-11 02:33:39.759541] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118703 has claimed it. 00:10:14.704 [2024-07-11 02:33:39.759645] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:15.270 ERROR: process (pid: 118725) is no longer running 00:10:15.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118725) - No such process 00:10:15.270 02:33:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:15.270 02:33:40 -- common/autotest_common.sh@852 -- # return 1 00:10:15.270 02:33:40 -- common/autotest_common.sh@643 -- # es=1 00:10:15.270 02:33:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:15.270 02:33:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:15.270 02:33:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:15.270 02:33:40 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:15.270 02:33:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:15.270 02:33:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:15.270 02:33:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:15.270 02:33:40 -- event/cpu_locks.sh@141 -- # killprocess 118703 00:10:15.270 02:33:40 -- common/autotest_common.sh@926 -- # '[' -z 118703 ']' 00:10:15.270 02:33:40 -- common/autotest_common.sh@930 -- # kill -0 118703 00:10:15.270 02:33:40 -- common/autotest_common.sh@931 -- # uname 00:10:15.270 02:33:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:15.270 02:33:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118703 00:10:15.270 02:33:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:15.270 02:33:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:15.270 killing process with pid 118703 00:10:15.270 02:33:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118703' 00:10:15.270 02:33:40 -- common/autotest_common.sh@945 -- # kill 118703 00:10:15.270 02:33:40 -- common/autotest_common.sh@950 -- # wait 118703 00:10:15.836 00:10:15.836 real 0m2.268s 00:10:15.836 user 0m6.118s 00:10:15.836 sys 0m0.562s 00:10:15.836 02:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.836 
************************************ 00:10:15.836 END TEST locking_overlapped_coremask 00:10:15.836 ************************************ 00:10:15.836 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:15.836 02:33:40 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:15.836 02:33:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:15.836 02:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.836 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:15.836 ************************************ 00:10:15.836 START TEST locking_overlapped_coremask_via_rpc 00:10:15.836 ************************************ 00:10:15.836 02:33:40 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:15.836 02:33:40 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=118800 00:10:15.836 02:33:40 -- event/cpu_locks.sh@149 -- # waitforlisten 118800 /var/tmp/spdk.sock 00:10:15.836 02:33:40 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:15.836 02:33:40 -- common/autotest_common.sh@819 -- # '[' -z 118800 ']' 00:10:15.836 02:33:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.836 02:33:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:15.836 02:33:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.837 02:33:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:15.837 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:15.837 [2024-07-11 02:33:40.865329] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:15.837 [2024-07-11 02:33:40.865533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118800 ] 00:10:16.095 [2024-07-11 02:33:41.023660] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:16.095 [2024-07-11 02:33:41.023741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.095 [2024-07-11 02:33:41.109914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.095 [2024-07-11 02:33:41.110319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.095 [2024-07-11 02:33:41.110455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.095 [2024-07-11 02:33:41.110464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.031 02:33:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.031 02:33:41 -- common/autotest_common.sh@852 -- # return 0 00:10:17.031 02:33:41 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=118823 00:10:17.031 02:33:41 -- event/cpu_locks.sh@153 -- # waitforlisten 118823 /var/tmp/spdk2.sock 00:10:17.031 02:33:41 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:17.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
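The claim failure above is the intended mask collision: the first target holds -m 0x7 (cores 0-2) while the second requests -m 0x1c (cores 2-4), so the lock attempt dies on the shared core 2 and check_remaining_locks still finds spdk_cpu_lock_000 through _002. The overlap is visible with one line of shell arithmetic:

  printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2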
00:10:17.031 02:33:41 -- common/autotest_common.sh@819 -- # '[' -z 118823 ']' 00:10:17.031 02:33:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.031 02:33:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.031 02:33:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.031 02:33:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.031 02:33:41 -- common/autotest_common.sh@10 -- # set +x 00:10:17.031 [2024-07-11 02:33:41.884569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:17.031 [2024-07-11 02:33:41.884795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118823 ] 00:10:17.031 [2024-07-11 02:33:42.045354] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:17.031 [2024-07-11 02:33:42.045428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.289 [2024-07-11 02:33:42.249926] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:17.289 [2024-07-11 02:33:42.250321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.289 [2024-07-11 02:33:42.250447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.289 [2024-07-11 02:33:42.250451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:17.857 02:33:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.857 02:33:42 -- common/autotest_common.sh@852 -- # return 0 00:10:17.857 02:33:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:17.857 02:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.857 02:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:17.857 02:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.857 02:33:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.857 02:33:42 -- common/autotest_common.sh@640 -- # local es=0 00:10:17.857 02:33:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.857 02:33:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:17.857 02:33:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.857 02:33:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:17.857 02:33:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.857 02:33:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.857 02:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.857 02:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:17.857 [2024-07-11 02:33:42.877927] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118800 has claimed it. 
00:10:17.857 request: 00:10:17.857 { 00:10:17.857 "method": "framework_enable_cpumask_locks", 00:10:17.857 "req_id": 1 00:10:17.857 } 00:10:17.857 Got JSON-RPC error response 00:10:17.857 response: 00:10:17.857 { 00:10:17.857 "code": -32603, 00:10:17.857 "message": "Failed to claim CPU core: 2" 00:10:17.857 } 00:10:17.857 02:33:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:17.857 02:33:42 -- common/autotest_common.sh@643 -- # es=1 00:10:17.857 02:33:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:17.857 02:33:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:17.857 02:33:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:17.857 02:33:42 -- event/cpu_locks.sh@158 -- # waitforlisten 118800 /var/tmp/spdk.sock 00:10:17.857 02:33:42 -- common/autotest_common.sh@819 -- # '[' -z 118800 ']' 00:10:17.857 02:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.857 02:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.857 02:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.857 02:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.857 02:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:18.115 02:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.115 02:33:43 -- common/autotest_common.sh@852 -- # return 0 00:10:18.115 02:33:43 -- event/cpu_locks.sh@159 -- # waitforlisten 118823 /var/tmp/spdk2.sock 00:10:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:18.115 02:33:43 -- common/autotest_common.sh@819 -- # '[' -z 118823 ']' 00:10:18.115 02:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:18.115 02:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.115 02:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
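The -32603 response above is the RPC flavor of the same collision: both targets start with --disable-cpumask-locks, framework_enable_cpumask_locks succeeds against the first (cores 0-2), and the identical call against the second target's socket fails when it tries to claim the already-held core 2. Roughly the same calls outside the harness, sketched with SPDK's rpc.py client (repo-relative path assumed):

  # First target, default socket: claims its core locks on request
  scripts/rpc.py framework_enable_cpumask_locks
  # Second target: returns -32603 "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks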
00:10:18.115 02:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.116 02:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:18.375 02:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.375 02:33:43 -- common/autotest_common.sh@852 -- # return 0 00:10:18.375 02:33:43 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:18.375 02:33:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:18.375 02:33:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:18.375 02:33:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:18.375 00:10:18.375 real 0m2.503s 00:10:18.375 user 0m1.256s 00:10:18.375 sys 0m0.200s 00:10:18.375 02:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.375 02:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:18.375 ************************************ 00:10:18.375 END TEST locking_overlapped_coremask_via_rpc 00:10:18.375 ************************************ 00:10:18.375 02:33:43 -- event/cpu_locks.sh@174 -- # cleanup 00:10:18.375 02:33:43 -- event/cpu_locks.sh@15 -- # [[ -z 118800 ]] 00:10:18.375 02:33:43 -- event/cpu_locks.sh@15 -- # killprocess 118800 00:10:18.375 02:33:43 -- common/autotest_common.sh@926 -- # '[' -z 118800 ']' 00:10:18.375 02:33:43 -- common/autotest_common.sh@930 -- # kill -0 118800 00:10:18.375 02:33:43 -- common/autotest_common.sh@931 -- # uname 00:10:18.375 02:33:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.375 02:33:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118800 00:10:18.375 02:33:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:18.375 killing process with pid 118800 00:10:18.375 02:33:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:18.375 02:33:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118800' 00:10:18.375 02:33:43 -- common/autotest_common.sh@945 -- # kill 118800 00:10:18.375 02:33:43 -- common/autotest_common.sh@950 -- # wait 118800 00:10:18.945 02:33:43 -- event/cpu_locks.sh@16 -- # [[ -z 118823 ]] 00:10:18.945 02:33:43 -- event/cpu_locks.sh@16 -- # killprocess 118823 00:10:18.945 02:33:43 -- common/autotest_common.sh@926 -- # '[' -z 118823 ']' 00:10:18.945 02:33:43 -- common/autotest_common.sh@930 -- # kill -0 118823 00:10:18.945 02:33:43 -- common/autotest_common.sh@931 -- # uname 00:10:18.945 02:33:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.945 02:33:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118823 00:10:18.945 02:33:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:18.945 killing process with pid 118823 00:10:18.945 02:33:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:18.945 02:33:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118823' 00:10:18.945 02:33:43 -- common/autotest_common.sh@945 -- # kill 118823 00:10:18.945 02:33:43 -- common/autotest_common.sh@950 -- # wait 118823 00:10:19.511 02:33:44 -- event/cpu_locks.sh@18 -- # rm -f 00:10:19.511 02:33:44 -- event/cpu_locks.sh@1 -- # cleanup 00:10:19.511 02:33:44 -- event/cpu_locks.sh@15 -- # [[ -z 118800 ]] 00:10:19.511 Process with pid 118800 is not found 00:10:19.511 02:33:44 -- 
event/cpu_locks.sh@15 -- # killprocess 118800 00:10:19.511 02:33:44 -- common/autotest_common.sh@926 -- # '[' -z 118800 ']' 00:10:19.511 02:33:44 -- common/autotest_common.sh@930 -- # kill -0 118800 00:10:19.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (118800) - No such process 00:10:19.511 02:33:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 118800 is not found' 00:10:19.511 02:33:44 -- event/cpu_locks.sh@16 -- # [[ -z 118823 ]] 00:10:19.511 Process with pid 118823 is not found 00:10:19.511 02:33:44 -- event/cpu_locks.sh@16 -- # killprocess 118823 00:10:19.511 02:33:44 -- common/autotest_common.sh@926 -- # '[' -z 118823 ']' 00:10:19.511 02:33:44 -- common/autotest_common.sh@930 -- # kill -0 118823 00:10:19.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (118823) - No such process 00:10:19.511 02:33:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 118823 is not found' 00:10:19.511 02:33:44 -- event/cpu_locks.sh@18 -- # rm -f 00:10:19.511 00:10:19.511 real 0m19.504s 00:10:19.511 user 0m34.110s 00:10:19.511 sys 0m5.356s 00:10:19.511 02:33:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.511 02:33:44 -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 ************************************ 00:10:19.511 END TEST cpu_locks 00:10:19.511 ************************************ 00:10:19.511 00:10:19.511 real 0m46.985s 00:10:19.511 user 1m30.459s 00:10:19.511 sys 0m9.043s 00:10:19.511 02:33:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.511 02:33:44 -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 ************************************ 00:10:19.511 END TEST event 00:10:19.511 ************************************ 00:10:19.511 02:33:44 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:19.511 02:33:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:19.511 02:33:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.511 02:33:44 -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 ************************************ 00:10:19.511 START TEST thread 00:10:19.511 ************************************ 00:10:19.511 02:33:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:19.511 * Looking for test storage... 00:10:19.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:19.511 02:33:44 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:19.511 02:33:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:19.511 02:33:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.511 02:33:44 -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 ************************************ 00:10:19.511 START TEST thread_poller_perf 00:10:19.511 ************************************ 00:10:19.511 02:33:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:19.511 [2024-07-11 02:33:44.555652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
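poller_perf's flags map directly onto the banner it prints: -b is the number of pollers to register, -l the poller period in microseconds, and -t the run time in seconds, hence "Running 1000 pollers for 1 seconds with 1 microseconds period". Both runs in this suite, sketched with repo-relative paths:

  # Timed pollers: 1000 pollers, 1 us period, 1 s run
  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
  # Period 0 registers untimed pollers, polled on every reactor iteration
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1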
00:10:19.511 [2024-07-11 02:33:44.555884] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118948 ] 00:10:19.770 [2024-07-11 02:33:44.698862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.770 [2024-07-11 02:33:44.762828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.770 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:21.145 ====================================== 00:10:21.145 busy:2210647248 (cyc) 00:10:21.145 total_run_count: 313000 00:10:21.145 tsc_hz: 2200000000 (cyc) 00:10:21.145 ====================================== 00:10:21.145 poller_cost: 7062 (cyc), 3210 (nsec) 00:10:21.145 ************************************ 00:10:21.145 END TEST thread_poller_perf 00:10:21.145 ************************************ 00:10:21.145 00:10:21.145 real 0m1.332s 00:10:21.145 user 0m1.155s 00:10:21.145 sys 0m0.077s 00:10:21.145 02:33:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.145 02:33:45 -- common/autotest_common.sh@10 -- # set +x 00:10:21.145 02:33:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:21.145 02:33:45 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:21.145 02:33:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.146 02:33:45 -- common/autotest_common.sh@10 -- # set +x 00:10:21.146 ************************************ 00:10:21.146 START TEST thread_poller_perf 00:10:21.146 ************************************ 00:10:21.146 02:33:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:21.146 [2024-07-11 02:33:45.942440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:21.146 [2024-07-11 02:33:45.942678] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118993 ] 00:10:21.146 [2024-07-11 02:33:46.084500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.146 [2024-07-11 02:33:46.156876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.146 Running 1000 pollers for 1 seconds with 0 microseconds period. 
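The first summary block above is plain arithmetic: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure is that quotient scaled by tsc_hz. Re-deriving the reported 7062 (cyc) and 3210 (nsec):

  echo $(( 2210647248 / 313000 ))              # 7062 cycles per poll
  echo '7062 * 1000000000 / 2200000000' | bc   # 3210 nsec at 2.2 GHz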
00:10:22.523 ====================================== 00:10:22.523 busy:2206098002 (cyc) 00:10:22.523 total_run_count: 3933000 00:10:22.523 tsc_hz: 2200000000 (cyc) 00:10:22.523 ====================================== 00:10:22.523 poller_cost: 560 (cyc), 254 (nsec) 00:10:22.523 ************************************ 00:10:22.523 END TEST thread_poller_perf 00:10:22.523 ************************************ 00:10:22.523 00:10:22.523 real 0m1.355s 00:10:22.523 user 0m1.162s 00:10:22.523 sys 0m0.092s 00:10:22.523 02:33:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.523 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:10:22.523 02:33:47 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:22.523 02:33:47 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:22.523 02:33:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:22.523 02:33:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.523 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:10:22.523 ************************************ 00:10:22.523 START TEST thread_spdk_lock 00:10:22.523 ************************************ 00:10:22.523 02:33:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:22.523 [2024-07-11 02:33:47.346296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:22.523 [2024-07-11 02:33:47.346588] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119034 ] 00:10:22.523 [2024-07-11 02:33:47.498082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.523 [2024-07-11 02:33:47.561293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.523 [2024-07-11 02:33:47.561311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.091 [2024-07-11 02:33:48.152103] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.091 [2024-07-11 02:33:48.152201] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:23.091 [2024-07-11 02:33:48.152287] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55562a5b4980 00:10:23.091 [2024-07-11 02:33:48.153786] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.091 [2024-07-11 02:33:48.153906] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.091 [2024-07-11 02:33:48.154007] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.350 Starting test contend 00:10:23.350 Worker Delay Wait us Hold us Total us 00:10:23.350 0 3 157307 218435 375743 00:10:23.350 1 5 77094 326500 403594 00:10:23.350 PASS test contend 00:10:23.350 Starting test hold_by_poller 
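The -l 0 summary and the contention table above are internally consistent in the same way: 2206098002 busy cycles over 3933000 polls gives the reported 560-cycle cost, and each worker's Total us is its Wait us plus Hold us (worker 0's row is off by one microsecond from rounding). Checking the second run and worker 1's row:

  echo $(( 2206098002 / 3933000 ))   # 560 cycles per poll for the -l 0 run
  echo $(( 77094 + 326500 ))         # 403594 = worker 1's reported Total us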
00:10:23.350 PASS test hold_by_poller 00:10:23.350 Starting test hold_by_message 00:10:23.350 PASS test hold_by_message 00:10:23.350 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:23.350 100014 assertions passed 00:10:23.350 0 assertions failed 00:10:23.350 00:10:23.350 real 0m0.943s 00:10:23.350 user 0m1.361s 00:10:23.350 sys 0m0.076s 00:10:23.350 02:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.350 ************************************ 00:10:23.350 02:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.350 END TEST thread_spdk_lock 00:10:23.350 ************************************ 00:10:23.350 00:10:23.350 real 0m3.850s 00:10:23.350 user 0m3.791s 00:10:23.350 sys 0m0.345s 00:10:23.350 ************************************ 00:10:23.350 END TEST thread 00:10:23.350 02:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.350 02:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.350 ************************************ 00:10:23.350 02:33:48 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:23.350 02:33:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:23.350 02:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.350 02:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.350 ************************************ 00:10:23.350 START TEST accel 00:10:23.350 ************************************ 00:10:23.350 02:33:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:23.350 * Looking for test storage... 00:10:23.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:23.350 02:33:48 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:23.350 02:33:48 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:23.350 02:33:48 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:23.350 02:33:48 -- accel/accel.sh@59 -- # spdk_tgt_pid=119112 00:10:23.350 02:33:48 -- accel/accel.sh@60 -- # waitforlisten 119112 00:10:23.350 02:33:48 -- common/autotest_common.sh@819 -- # '[' -z 119112 ']' 00:10:23.350 02:33:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.350 02:33:48 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:23.350 02:33:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:23.350 02:33:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.350 02:33:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:23.350 02:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.350 02:33:48 -- accel/accel.sh@58 -- # build_accel_config 00:10:23.350 02:33:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.350 02:33:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.350 02:33:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.350 02:33:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.350 02:33:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.351 02:33:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.351 02:33:48 -- accel/accel.sh@42 -- # jq -r . 00:10:23.610 [2024-07-11 02:33:48.486081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
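The accel target above takes its configuration as JSON on a file descriptor: -c /dev/fd/63 is the result of bash process substitution, so the harness can assemble accel_json_cfg in memory without writing a config file. A minimal sketch of the same trick; the empty subsystems list is an assumption, not the config the harness actually builds:

  # Feed an in-memory JSON config to the target via process substitution
  build/bin/spdk_tgt -c <(echo '{"subsystems": []}') &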
00:10:23.610 [2024-07-11 02:33:48.486595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119112 ] 00:10:23.610 [2024-07-11 02:33:48.635049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.868 [2024-07-11 02:33:48.717460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:23.868 [2024-07-11 02:33:48.718052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.437 02:33:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:24.437 02:33:49 -- common/autotest_common.sh@852 -- # return 0 00:10:24.437 02:33:49 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:24.437 02:33:49 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:24.437 02:33:49 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:24.437 02:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:24.437 02:33:49 -- common/autotest_common.sh@10 -- # set +x 00:10:24.437 02:33:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:24.437 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.437 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.437 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.437 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.437 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.437 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.437 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.437 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.437 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # IFS== 00:10:24.438 02:33:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:24.438 02:33:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:24.438 02:33:49 -- accel/accel.sh@67 -- # killprocess 119112 00:10:24.438 02:33:49 -- common/autotest_common.sh@926 -- # '[' -z 119112 ']' 00:10:24.438 02:33:49 -- common/autotest_common.sh@930 -- # kill -0 119112 00:10:24.438 02:33:49 -- common/autotest_common.sh@931 -- # uname 00:10:24.438 02:33:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:24.438 02:33:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119112 00:10:24.438 02:33:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:24.438 02:33:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:24.438 02:33:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119112' 00:10:24.438 killing process with pid 119112 00:10:24.438 02:33:49 -- common/autotest_common.sh@945 -- # kill 119112 00:10:24.438 02:33:49 -- common/autotest_common.sh@950 -- # wait 119112 00:10:25.005 02:33:49 -- accel/accel.sh@68 -- # trap - ERR 00:10:25.005 02:33:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:25.005 02:33:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:25.005 02:33:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.005 02:33:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.005 02:33:49 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:25.005 02:33:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:25.005 02:33:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.005 02:33:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.005 02:33:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:49 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:25.005 02:33:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.005 02:33:49 -- accel/accel.sh@42 -- # jq -r . 00:10:25.005 02:33:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.005 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:25.005 02:33:50 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:25.005 02:33:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:25.005 02:33:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.005 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:25.005 ************************************ 00:10:25.005 START TEST accel_missing_filename 00:10:25.005 ************************************ 00:10:25.005 02:33:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:25.005 02:33:50 -- common/autotest_common.sh@640 -- # local es=0 00:10:25.005 02:33:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:25.005 02:33:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:25.005 02:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.005 02:33:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:25.005 02:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.005 02:33:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:25.005 02:33:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:25.005 02:33:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.005 02:33:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.005 02:33:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.005 02:33:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.005 02:33:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.005 02:33:50 -- accel/accel.sh@42 -- # jq -r . 00:10:25.005 [2024-07-11 02:33:50.082199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:25.005 [2024-07-11 02:33:50.082555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119177 ] 00:10:25.263 [2024-07-11 02:33:50.224209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.263 [2024-07-11 02:33:50.300392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.522 [2024-07-11 02:33:50.362932] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.522 [2024-07-11 02:33:50.457230] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:25.522 A filename is required. 
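accel_missing_filename above exercises the first input check: compress reads its data from a file, so accel_perf without -l aborts with "A filename is required." Sketched both ways, with repo-relative paths (bib is the uncompressed input file the suite uses):

  build/examples/accel_perf -t 1 -w compress                    # aborts: no -l
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib  # runs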
00:10:25.522 02:33:50 -- common/autotest_common.sh@643 -- # es=234 00:10:25.522 02:33:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:25.522 02:33:50 -- common/autotest_common.sh@652 -- # es=106 00:10:25.522 02:33:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:25.522 02:33:50 -- common/autotest_common.sh@660 -- # es=1 00:10:25.522 02:33:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:25.522 00:10:25.522 real 0m0.506s 00:10:25.522 user 0m0.327s 00:10:25.522 sys 0m0.129s 00:10:25.522 02:33:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.522 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:25.522 ************************************ 00:10:25.522 END TEST accel_missing_filename 00:10:25.522 ************************************ 00:10:25.522 02:33:50 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.522 02:33:50 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:25.522 02:33:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.522 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:25.522 ************************************ 00:10:25.522 START TEST accel_compress_verify 00:10:25.522 ************************************ 00:10:25.522 02:33:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.522 02:33:50 -- common/autotest_common.sh@640 -- # local es=0 00:10:25.522 02:33:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.522 02:33:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:25.522 02:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.522 02:33:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:25.522 02:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.522 02:33:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.522 02:33:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.522 02:33:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.522 02:33:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.522 02:33:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.522 02:33:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.522 02:33:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.522 02:33:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.522 02:33:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.522 02:33:50 -- accel/accel.sh@42 -- # jq -r . 00:10:25.780 [2024-07-11 02:33:50.636131] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:25.780 [2024-07-11 02:33:50.636536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119223 ] 00:10:25.780 [2024-07-11 02:33:50.785440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.780 [2024-07-11 02:33:50.858346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.039 [2024-07-11 02:33:50.920371] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:26.039 [2024-07-11 02:33:51.014066] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:26.039 00:10:26.039 Compression does not support the verify option, aborting. 00:10:26.039 ************************************ 00:10:26.039 END TEST accel_compress_verify 00:10:26.039 ************************************ 00:10:26.039 02:33:51 -- common/autotest_common.sh@643 -- # es=161 00:10:26.039 02:33:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:26.039 02:33:51 -- common/autotest_common.sh@652 -- # es=33 00:10:26.039 02:33:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:26.039 02:33:51 -- common/autotest_common.sh@660 -- # es=1 00:10:26.039 02:33:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:26.039 00:10:26.039 real 0m0.507s 00:10:26.039 user 0m0.336s 00:10:26.039 sys 0m0.124s 00:10:26.039 02:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.039 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.319 02:33:51 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:26.319 02:33:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:26.319 02:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.319 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.319 ************************************ 00:10:26.319 START TEST accel_wrong_workload 00:10:26.319 ************************************ 00:10:26.319 02:33:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:26.319 02:33:51 -- common/autotest_common.sh@640 -- # local es=0 00:10:26.319 02:33:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:26.319 02:33:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:26.319 02:33:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:26.319 02:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:26.319 02:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.319 02:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.319 02:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.319 02:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.319 02:33:51 -- accel/accel.sh@42 -- # jq -r . 
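accel_compress_verify, just above, confirms the complementary restriction: -y asks accel_perf to verify results, which the compress workload does not support, so the run aborts even though -l is now supplied. In sketch form:

  # Rejected: "Compression does not support the verify option"
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y || echo 'aborted as expected'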
00:10:26.319 Unsupported workload type: foobar 00:10:26.319 [2024-07-11 02:33:51.195722] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:26.319 accel_perf options: 00:10:26.319 [-h help message] 00:10:26.319 [-q queue depth per core] 00:10:26.319 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:26.319 [-T number of threads per core 00:10:26.319 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:26.319 [-t time in seconds] 00:10:26.319 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:26.319 [ dif_verify, , dif_generate, dif_generate_copy 00:10:26.319 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:26.319 [-l for compress/decompress workloads, name of uncompressed input file 00:10:26.319 [-S for crc32c workload, use this seed value (default 0) 00:10:26.319 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:26.319 [-f for fill workload, use this BYTE value (default 255) 00:10:26.319 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:26.319 [-y verify result if this switch is on] 00:10:26.319 [-a tasks to allocate per core (default: same value as -q)] 00:10:26.319 Can be used to spread operations across a wider range of memory. 00:10:26.319 02:33:51 -- common/autotest_common.sh@643 -- # es=1 00:10:26.319 02:33:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:26.319 02:33:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:26.319 02:33:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:26.319 00:10:26.319 real 0m0.052s 00:10:26.319 user 0m0.029s 00:10:26.319 sys 0m0.021s 00:10:26.319 02:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.319 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.319 ************************************ 00:10:26.319 END TEST accel_wrong_workload 00:10:26.319 ************************************ 00:10:26.319 02:33:51 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:26.319 02:33:51 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:26.319 02:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.319 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.319 ************************************ 00:10:26.319 START TEST accel_negative_buffers 00:10:26.319 ************************************ 00:10:26.319 02:33:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:26.319 02:33:51 -- common/autotest_common.sh@640 -- # local es=0 00:10:26.319 02:33:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:26.319 02:33:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:26.319 02:33:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:26.319 02:33:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:26.319 02:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:26.319 02:33:51 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:26.319 02:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.319 02:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.319 02:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.319 02:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.319 02:33:51 -- accel/accel.sh@42 -- # jq -r . 00:10:26.319 -x option must be non-negative. 00:10:26.319 [2024-07-11 02:33:51.296402] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:26.319 accel_perf options: 00:10:26.319 [-h help message] 00:10:26.320 [-q queue depth per core] 00:10:26.320 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:26.320 [-T number of threads per core 00:10:26.320 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:26.320 [-t time in seconds] 00:10:26.320 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:26.320 [ dif_verify, , dif_generate, dif_generate_copy 00:10:26.320 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:26.320 [-l for compress/decompress workloads, name of uncompressed input file 00:10:26.320 [-S for crc32c workload, use this seed value (default 0) 00:10:26.320 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:26.320 [-f for fill workload, use this BYTE value (default 255) 00:10:26.320 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:26.320 [-y verify result if this switch is on] 00:10:26.320 [-a tasks to allocate per core (default: same value as -q)] 00:10:26.320 Can be used to spread operations across a wider range of memory. 
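The usage dump above doubles as a quick reference for well-formed runs: for the xor workload, -x must supply at least two source buffers, so the -x -1 passed by this test is rejected before any work is queued. An invocation that would pass the same argument parsing, built only from options shown in the dump and the binary path visible in the trace (illustrative, not taken from this run):

# Two source buffers is the documented minimum for xor; -y verifies results.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2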
00:10:26.320 02:33:51 -- common/autotest_common.sh@643 -- # es=1 00:10:26.320 02:33:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:26.320 02:33:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:26.320 02:33:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:26.320 00:10:26.320 real 0m0.048s 00:10:26.320 user 0m0.031s 00:10:26.320 sys 0m0.015s 00:10:26.320 02:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.320 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.320 ************************************ 00:10:26.320 END TEST accel_negative_buffers 00:10:26.320 ************************************ 00:10:26.320 02:33:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:26.320 02:33:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:26.320 02:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.320 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:26.320 ************************************ 00:10:26.320 START TEST accel_crc32c 00:10:26.320 ************************************ 00:10:26.320 02:33:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:26.320 02:33:51 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.320 02:33:51 -- accel/accel.sh@17 -- # local accel_module 00:10:26.320 02:33:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:26.320 02:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:26.320 02:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.320 02:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.320 02:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.320 02:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.320 02:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.320 02:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.320 02:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.320 02:33:51 -- accel/accel.sh@42 -- # jq -r . 00:10:26.320 [2024-07-11 02:33:51.396062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:26.320 [2024-07-11 02:33:51.396413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119302 ] 00:10:26.583 [2024-07-11 02:33:51.546740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.583 [2024-07-11 02:33:51.648784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.960 02:33:52 -- accel/accel.sh@18 -- # out=' 00:10:27.960 SPDK Configuration: 00:10:27.960 Core mask: 0x1 00:10:27.960 00:10:27.960 Accel Perf Configuration: 00:10:27.960 Workload Type: crc32c 00:10:27.960 CRC-32C seed: 32 00:10:27.960 Transfer size: 4096 bytes 00:10:27.960 Vector count 1 00:10:27.960 Module: software 00:10:27.960 Queue depth: 32 00:10:27.960 Allocate depth: 32 00:10:27.960 # threads/core: 1 00:10:27.960 Run time: 1 seconds 00:10:27.960 Verify: Yes 00:10:27.960 00:10:27.960 Running for 1 seconds... 
00:10:27.960 00:10:27.960 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.960 ------------------------------------------------------------------------------------ 00:10:27.960 0,0 394752/s 1542 MiB/s 0 0 00:10:27.960 ==================================================================================== 00:10:27.960 Total 394752/s 1542 MiB/s 0 0' 00:10:27.960 02:33:52 -- accel/accel.sh@20 -- # IFS=: 00:10:27.960 02:33:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:27.960 02:33:52 -- accel/accel.sh@20 -- # read -r var val 00:10:27.960 02:33:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:27.960 02:33:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.960 02:33:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.960 02:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.960 02:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.960 02:33:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.960 02:33:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.960 02:33:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.960 02:33:52 -- accel/accel.sh@42 -- # jq -r . 00:10:27.960 [2024-07-11 02:33:52.955313] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:27.960 [2024-07-11 02:33:52.955710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119329 ] 00:10:28.219 [2024-07-11 02:33:53.101632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.219 [2024-07-11 02:33:53.187132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=0x1 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=crc32c 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=32 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=software 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@23 -- # accel_module=software 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=32 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=32 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=1 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val=Yes 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:28.219 02:33:53 -- accel/accel.sh@21 -- # val= 00:10:28.219 02:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # IFS=: 00:10:28.219 02:33:53 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 
-- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@21 -- # val= 00:10:29.614 02:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # IFS=: 00:10:29.614 02:33:54 -- accel/accel.sh@20 -- # read -r var val 00:10:29.614 02:33:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.614 02:33:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:29.614 02:33:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.614 00:10:29.614 real 0m3.086s 00:10:29.614 user 0m2.628s 00:10:29.614 sys 0m0.306s 00:10:29.614 02:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.614 02:33:54 -- common/autotest_common.sh@10 -- # set +x 00:10:29.614 ************************************ 00:10:29.614 END TEST accel_crc32c 00:10:29.614 ************************************ 00:10:29.614 02:33:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:29.614 02:33:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:29.614 02:33:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.614 02:33:54 -- common/autotest_common.sh@10 -- # set +x 00:10:29.614 ************************************ 00:10:29.614 START TEST accel_crc32c_C2 00:10:29.614 ************************************ 00:10:29.614 02:33:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:29.614 02:33:54 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.614 02:33:54 -- accel/accel.sh@17 -- # local accel_module 00:10:29.614 02:33:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:29.614 02:33:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:29.614 02:33:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.614 02:33:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.614 02:33:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.614 02:33:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.614 02:33:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.614 02:33:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.614 02:33:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.614 02:33:54 -- accel/accel.sh@42 -- # jq -r . 00:10:29.614 [2024-07-11 02:33:54.531009] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:29.614 [2024-07-11 02:33:54.531385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119371 ] 00:10:29.614 [2024-07-11 02:33:54.668637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.872 [2024-07-11 02:33:54.749944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.246 02:33:56 -- accel/accel.sh@18 -- # out=' 00:10:31.246 SPDK Configuration: 00:10:31.246 Core mask: 0x1 00:10:31.246 00:10:31.246 Accel Perf Configuration: 00:10:31.246 Workload Type: crc32c 00:10:31.246 CRC-32C seed: 0 00:10:31.246 Transfer size: 4096 bytes 00:10:31.246 Vector count 2 00:10:31.246 Module: software 00:10:31.246 Queue depth: 32 00:10:31.246 Allocate depth: 32 00:10:31.246 # threads/core: 1 00:10:31.246 Run time: 1 seconds 00:10:31.246 Verify: Yes 00:10:31.246 00:10:31.246 Running for 1 seconds... 
00:10:31.246 00:10:31.246 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.246 ------------------------------------------------------------------------------------ 00:10:31.246 0,0 321120/s 1254 MiB/s 0 0 00:10:31.246 ==================================================================================== 00:10:31.246 Total 321120/s 1254 MiB/s 0 0' 00:10:31.246 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.246 02:33:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:31.246 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.246 02:33:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:31.246 02:33:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.246 02:33:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.246 02:33:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.246 02:33:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.246 02:33:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.246 02:33:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.246 02:33:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.246 02:33:56 -- accel/accel.sh@42 -- # jq -r . 00:10:31.246 [2024-07-11 02:33:56.051009] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:31.246 [2024-07-11 02:33:56.051448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119398 ] 00:10:31.246 [2024-07-11 02:33:56.199792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.246 [2024-07-11 02:33:56.275613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=0x1 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=crc32c 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=0 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 --
accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=software 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=32 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=32 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=1 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val=Yes 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:31.504 02:33:56 -- accel/accel.sh@21 -- # val= 00:10:31.504 02:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:31.504 02:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- 
accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@21 -- # val= 00:10:32.880 02:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # IFS=: 00:10:32.880 02:33:57 -- accel/accel.sh@20 -- # read -r var val 00:10:32.880 02:33:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.880 02:33:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:32.880 02:33:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.880 00:10:32.880 real 0m3.043s 00:10:32.880 user 0m2.591s 00:10:32.880 sys 0m0.309s 00:10:32.880 02:33:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.880 02:33:57 -- common/autotest_common.sh@10 -- # set +x 00:10:32.880 ************************************ 00:10:32.880 END TEST accel_crc32c_C2 00:10:32.880 ************************************ 00:10:32.880 02:33:57 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:32.880 02:33:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:32.880 02:33:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.880 02:33:57 -- common/autotest_common.sh@10 -- # set +x 00:10:32.880 ************************************ 00:10:32.880 START TEST accel_copy 00:10:32.880 ************************************ 00:10:32.880 02:33:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:32.880 02:33:57 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.880 02:33:57 -- accel/accel.sh@17 -- # local accel_module 00:10:32.880 02:33:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:32.880 02:33:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:32.880 02:33:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.880 02:33:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.880 02:33:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.880 02:33:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.880 02:33:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.880 02:33:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.880 02:33:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.880 02:33:57 -- accel/accel.sh@42 -- # jq -r . 00:10:32.880 [2024-07-11 02:33:57.624417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:32.880 [2024-07-11 02:33:57.624820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119440 ] 00:10:32.880 [2024-07-11 02:33:57.765953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.880 [2024-07-11 02:33:57.849764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.255 02:33:59 -- accel/accel.sh@18 -- # out=' 00:10:34.255 SPDK Configuration: 00:10:34.255 Core mask: 0x1 00:10:34.255 00:10:34.255 Accel Perf Configuration: 00:10:34.255 Workload Type: copy 00:10:34.255 Transfer size: 4096 bytes 00:10:34.255 Vector count 1 00:10:34.255 Module: software 00:10:34.255 Queue depth: 32 00:10:34.255 Allocate depth: 32 00:10:34.255 # threads/core: 1 00:10:34.255 Run time: 1 seconds 00:10:34.256 Verify: Yes 00:10:34.256 00:10:34.256 Running for 1 seconds... 
00:10:34.256 00:10:34.256 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:34.256 ------------------------------------------------------------------------------------ 00:10:34.256 0,0 243296/s 950 MiB/s 0 0 00:10:34.256 ==================================================================================== 00:10:34.256 Total 243296/s 950 MiB/s 0 0' 00:10:34.256 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.256 02:33:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:34.256 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.256 02:33:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:34.256 02:33:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.256 02:33:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.256 02:33:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.256 02:33:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.256 02:33:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.256 02:33:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.256 02:33:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.256 02:33:59 -- accel/accel.sh@42 -- # jq -r . 00:10:34.256 [2024-07-11 02:33:59.149898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:34.256 [2024-07-11 02:33:59.150389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119469 ] 00:10:34.256 [2024-07-11 02:33:59.289236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.514 [2024-07-11 02:33:59.372786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=0x1 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=copy 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- 
accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=software 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=32 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=32 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=1 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val=Yes 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:34.514 02:33:59 -- accel/accel.sh@21 -- # val= 00:10:34.514 02:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # IFS=: 00:10:34.514 02:33:59 -- accel/accel.sh@20 -- # read -r var val 00:10:35.890 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@21 -- # val= 00:10:35.891 02:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.891 02:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:35.891 02:34:00 -- 
accel/accel.sh@20 -- # read -r var val 00:10:35.891 02:34:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.891 02:34:00 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:35.891 02:34:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.891 00:10:35.891 real 0m3.053s 00:10:35.891 user 0m2.635s 00:10:35.891 sys 0m0.283s 00:10:35.891 02:34:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.891 02:34:00 -- common/autotest_common.sh@10 -- # set +x 00:10:35.891 ************************************ 00:10:35.891 END TEST accel_copy 00:10:35.891 ************************************ 00:10:35.891 02:34:00 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:35.891 02:34:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:35.891 02:34:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.891 02:34:00 -- common/autotest_common.sh@10 -- # set +x 00:10:35.891 ************************************ 00:10:35.891 START TEST accel_fill 00:10:35.891 ************************************ 00:10:35.891 02:34:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:35.891 02:34:00 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.891 02:34:00 -- accel/accel.sh@17 -- # local accel_module 00:10:35.891 02:34:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:35.891 02:34:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:35.891 02:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.891 02:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.891 02:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.891 02:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.891 02:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.891 02:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.891 02:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.891 02:34:00 -- accel/accel.sh@42 -- # jq -r . 00:10:35.891 [2024-07-11 02:34:00.733199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:35.891 [2024-07-11 02:34:00.733471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119509 ] 00:10:35.891 [2024-07-11 02:34:00.881839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.148 [2024-07-11 02:34:00.997111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.523 02:34:02 -- accel/accel.sh@18 -- # out=' 00:10:37.523 SPDK Configuration: 00:10:37.523 Core mask: 0x1 00:10:37.523 00:10:37.523 Accel Perf Configuration: 00:10:37.523 Workload Type: fill 00:10:37.523 Fill pattern: 0x80 00:10:37.523 Transfer size: 4096 bytes 00:10:37.523 Vector count 1 00:10:37.523 Module: software 00:10:37.523 Queue depth: 64 00:10:37.523 Allocate depth: 64 00:10:37.523 # threads/core: 1 00:10:37.523 Run time: 1 seconds 00:10:37.523 Verify: Yes 00:10:37.523 00:10:37.523 Running for 1 seconds... 
00:10:37.523 00:10:37.523 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.523 ------------------------------------------------------------------------------------ 00:10:37.523 0,0 373760/s 1460 MiB/s 0 0 00:10:37.523 ==================================================================================== 00:10:37.523 Total 373760/s 1460 MiB/s 0 0' 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:37.523 02:34:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:37.523 02:34:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.523 02:34:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.523 02:34:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.523 02:34:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.523 02:34:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.523 02:34:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.523 02:34:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.523 02:34:02 -- accel/accel.sh@42 -- # jq -r . 00:10:37.523 [2024-07-11 02:34:02.293602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:37.523 [2024-07-11 02:34:02.293871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119557 ] 00:10:37.523 [2024-07-11 02:34:02.437719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.523 [2024-07-11 02:34:02.521734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=0x1 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=fill 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=0x80 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 
00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=software 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@23 -- # accel_module=software 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=64 00:10:37.523 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.523 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.523 02:34:02 -- accel/accel.sh@21 -- # val=64 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.781 02:34:02 -- accel/accel.sh@21 -- # val=1 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.781 02:34:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.781 02:34:02 -- accel/accel.sh@21 -- # val=Yes 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.781 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:37.781 02:34:02 -- accel/accel.sh@21 -- # val= 00:10:37.781 02:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # IFS=: 00:10:37.781 02:34:02 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 
00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@21 -- # val= 00:10:39.170 02:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:39.170 02:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:39.170 02:34:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:39.170 02:34:03 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:39.170 02:34:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.170 00:10:39.170 real 0m3.129s 00:10:39.170 user 0m2.667s 00:10:39.170 sys 0m0.301s 00:10:39.170 02:34:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.170 02:34:03 -- common/autotest_common.sh@10 -- # set +x 00:10:39.170 ************************************ 00:10:39.170 END TEST accel_fill 00:10:39.170 ************************************ 00:10:39.170 02:34:03 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:39.170 02:34:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:39.170 02:34:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.170 02:34:03 -- common/autotest_common.sh@10 -- # set +x 00:10:39.170 ************************************ 00:10:39.170 START TEST accel_copy_crc32c 00:10:39.170 ************************************ 00:10:39.170 02:34:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:39.171 02:34:03 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.171 02:34:03 -- accel/accel.sh@17 -- # local accel_module 00:10:39.171 02:34:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:39.171 02:34:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:39.171 02:34:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.171 02:34:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.171 02:34:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.171 02:34:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.171 02:34:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.171 02:34:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.171 02:34:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.171 02:34:03 -- accel/accel.sh@42 -- # jq -r . 00:10:39.171 [2024-07-11 02:34:03.917349] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:39.171 [2024-07-11 02:34:03.917563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119597 ] 00:10:39.171 [2024-07-11 02:34:04.059065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.171 [2024-07-11 02:34:04.140217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.553 02:34:05 -- accel/accel.sh@18 -- # out=' 00:10:40.553 SPDK Configuration: 00:10:40.553 Core mask: 0x1 00:10:40.553 00:10:40.553 Accel Perf Configuration: 00:10:40.553 Workload Type: copy_crc32c 00:10:40.553 CRC-32C seed: 0 00:10:40.553 Vector size: 4096 bytes 00:10:40.553 Transfer size: 4096 bytes 00:10:40.553 Vector count 1 00:10:40.553 Module: software 00:10:40.553 Queue depth: 32 00:10:40.553 Allocate depth: 32 00:10:40.553 # threads/core: 1 00:10:40.553 Run time: 1 seconds 00:10:40.553 Verify: Yes 00:10:40.553 00:10:40.553 Running for 1 seconds... 
00:10:40.553 00:10:40.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.553 ------------------------------------------------------------------------------------ 00:10:40.553 0,0 207392/s 810 MiB/s 0 0 00:10:40.553 ==================================================================================== 00:10:40.553 Total 207392/s 810 MiB/s 0 0' 00:10:40.553 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.553 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.553 02:34:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:40.553 02:34:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:40.553 02:34:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.553 02:34:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.553 02:34:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.553 02:34:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.553 02:34:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.553 02:34:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.553 02:34:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.553 02:34:05 -- accel/accel.sh@42 -- # jq -r . 00:10:40.553 [2024-07-11 02:34:05.439625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:40.553 [2024-07-11 02:34:05.439873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119626 ] 00:10:40.553 [2024-07-11 02:34:05.585989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.812 [2024-07-11 02:34:05.668745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=0x1 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=0 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 
02:34:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=software 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=32 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=32 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=1 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val=Yes 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.812 02:34:05 -- accel/accel.sh@21 -- # val= 00:10:40.812 02:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # IFS=: 00:10:40.812 02:34:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 
00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@21 -- # val= 00:10:42.189 02:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:42.189 02:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:42.189 02:34:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.189 02:34:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:42.189 02:34:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.189 00:10:42.189 real 0m3.022s 00:10:42.189 user 0m2.603s 00:10:42.189 sys 0m0.286s 00:10:42.189 02:34:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.189 ************************************ 00:10:42.189 END TEST accel_copy_crc32c 00:10:42.189 ************************************ 00:10:42.189 02:34:06 -- common/autotest_common.sh@10 -- # set +x 00:10:42.189 02:34:06 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:42.189 02:34:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:42.189 02:34:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.189 02:34:06 -- common/autotest_common.sh@10 -- # set +x 00:10:42.189 ************************************ 00:10:42.189 START TEST accel_copy_crc32c_C2 00:10:42.189 ************************************ 00:10:42.189 02:34:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:42.189 02:34:06 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.189 02:34:06 -- accel/accel.sh@17 -- # local accel_module 00:10:42.189 02:34:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:42.189 02:34:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:42.189 02:34:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.189 02:34:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.189 02:34:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.189 02:34:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.189 02:34:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.189 02:34:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.189 02:34:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.189 02:34:06 -- accel/accel.sh@42 -- # jq -r . 00:10:42.189 [2024-07-11 02:34:06.992972] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:42.190 [2024-07-11 02:34:06.993207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119661 ] 00:10:42.190 [2024-07-11 02:34:07.137973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.190 [2024-07-11 02:34:07.205279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.566 02:34:08 -- accel/accel.sh@18 -- # out=' 00:10:43.566 SPDK Configuration: 00:10:43.566 Core mask: 0x1 00:10:43.566 00:10:43.566 Accel Perf Configuration: 00:10:43.566 Workload Type: copy_crc32c 00:10:43.566 CRC-32C seed: 0 00:10:43.566 Vector size: 4096 bytes 00:10:43.566 Transfer size: 8192 bytes 00:10:43.566 Vector count 2 00:10:43.566 Module: software 00:10:43.566 Queue depth: 32 00:10:43.566 Allocate depth: 32 00:10:43.566 # threads/core: 1 00:10:43.566 Run time: 1 seconds 00:10:43.566 Verify: Yes 00:10:43.566 00:10:43.566 Running for 1 seconds... 00:10:43.566 00:10:43.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:43.566 ------------------------------------------------------------------------------------ 00:10:43.566 0,0 184096/s 1438 MiB/s 0 0 00:10:43.566 ==================================================================================== 00:10:43.566 Total 184096/s 1438 MiB/s 0 0' 00:10:43.566 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.566 02:34:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:43.566 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.566 02:34:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:43.566 02:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.566 02:34:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.566 02:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.566 02:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.566 02:34:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.566 02:34:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.566 02:34:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.566 02:34:08 -- accel/accel.sh@42 -- # jq -r . 00:10:43.566 [2024-07-11 02:34:08.486527] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:43.566 [2024-07-11 02:34:08.486780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119695 ] 00:10:43.566 [2024-07-11 02:34:08.630642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.825 [2024-07-11 02:34:08.703950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=0x1 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=0 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=software 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=32 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=32 
00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=1 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val=Yes 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:43.825 02:34:08 -- accel/accel.sh@21 -- # val= 00:10:43.825 02:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:43.825 02:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@21 -- # val= 00:10:45.201 02:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # IFS=: 00:10:45.201 02:34:09 -- accel/accel.sh@20 -- # read -r var val 00:10:45.201 02:34:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.201 02:34:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:45.201 02:34:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.201 00:10:45.201 real 0m2.982s 00:10:45.201 user 0m2.585s 00:10:45.201 sys 0m0.268s 00:10:45.201 02:34:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.201 ************************************ 00:10:45.201 END TEST accel_copy_crc32c_C2 00:10:45.201 ************************************ 00:10:45.201 02:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:45.201 02:34:09 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:45.201 02:34:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:10:45.201 02:34:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.201 02:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:45.201 ************************************ 00:10:45.201 START TEST accel_dualcast 00:10:45.201 ************************************ 00:10:45.201 02:34:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:45.201 02:34:09 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.201 02:34:09 -- accel/accel.sh@17 -- # local accel_module 00:10:45.201 02:34:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:45.201 02:34:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:45.201 02:34:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.201 02:34:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.201 02:34:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.201 02:34:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.201 02:34:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.201 02:34:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.201 02:34:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.201 02:34:10 -- accel/accel.sh@42 -- # jq -r . 00:10:45.201 [2024-07-11 02:34:10.025765] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:45.201 [2024-07-11 02:34:10.026002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119730 ] 00:10:45.201 [2024-07-11 02:34:10.169114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.201 [2024-07-11 02:34:10.235759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.576 02:34:11 -- accel/accel.sh@18 -- # out=' 00:10:46.576 SPDK Configuration: 00:10:46.576 Core mask: 0x1 00:10:46.576 00:10:46.576 Accel Perf Configuration: 00:10:46.576 Workload Type: dualcast 00:10:46.576 Transfer size: 4096 bytes 00:10:46.576 Vector count 1 00:10:46.576 Module: software 00:10:46.576 Queue depth: 32 00:10:46.576 Allocate depth: 32 00:10:46.576 # threads/core: 1 00:10:46.576 Run time: 1 seconds 00:10:46.576 Verify: Yes 00:10:46.576 00:10:46.576 Running for 1 seconds... 00:10:46.576 00:10:46.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:46.576 ------------------------------------------------------------------------------------ 00:10:46.576 0,0 333536/s 1302 MiB/s 0 0 00:10:46.576 ==================================================================================== 00:10:46.576 Total 333536/s 1302 MiB/s 0 0' 00:10:46.576 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.576 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.576 02:34:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:46.576 02:34:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:46.576 02:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.576 02:34:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.576 02:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.576 02:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.576 02:34:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.576 02:34:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.576 02:34:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.576 02:34:11 -- accel/accel.sh@42 -- # jq -r . 
00:10:46.576 [2024-07-11 02:34:11.504720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:46.576 [2024-07-11 02:34:11.504956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119781 ] 00:10:46.576 [2024-07-11 02:34:11.649874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.835 [2024-07-11 02:34:11.726286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=0x1 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=dualcast 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=software 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=32 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=32 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=1 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 
02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val=Yes 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:46.835 02:34:11 -- accel/accel.sh@21 -- # val= 00:10:46.835 02:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:46.835 02:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@21 -- # val= 00:10:48.213 02:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:48.213 02:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:48.213 02:34:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:48.213 02:34:12 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:48.213 02:34:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.213 00:10:48.213 real 0m2.989s 00:10:48.213 user 0m2.591s 00:10:48.213 sys 0m0.260s 00:10:48.213 02:34:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.213 ************************************ 00:10:48.213 END TEST accel_dualcast 00:10:48.213 02:34:12 -- common/autotest_common.sh@10 -- # set +x 00:10:48.213 ************************************ 00:10:48.213 02:34:13 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:48.213 02:34:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:48.213 02:34:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.213 02:34:13 -- common/autotest_common.sh@10 -- # set +x 00:10:48.213 ************************************ 00:10:48.213 START TEST accel_compare 00:10:48.213 ************************************ 00:10:48.213 02:34:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:48.213 
02:34:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.213 02:34:13 -- accel/accel.sh@17 -- # local accel_module 00:10:48.213 02:34:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:48.213 02:34:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:48.213 02:34:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.213 02:34:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.213 02:34:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.213 02:34:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.213 02:34:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.213 02:34:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.213 02:34:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.213 02:34:13 -- accel/accel.sh@42 -- # jq -r . 00:10:48.213 [2024-07-11 02:34:13.069750] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:48.213 [2024-07-11 02:34:13.069983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119815 ] 00:10:48.213 [2024-07-11 02:34:13.214260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.213 [2024-07-11 02:34:13.286884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.590 02:34:14 -- accel/accel.sh@18 -- # out=' 00:10:49.590 SPDK Configuration: 00:10:49.590 Core mask: 0x1 00:10:49.590 00:10:49.590 Accel Perf Configuration: 00:10:49.590 Workload Type: compare 00:10:49.590 Transfer size: 4096 bytes 00:10:49.590 Vector count 1 00:10:49.590 Module: software 00:10:49.590 Queue depth: 32 00:10:49.590 Allocate depth: 32 00:10:49.590 # threads/core: 1 00:10:49.590 Run time: 1 seconds 00:10:49.590 Verify: Yes 00:10:49.590 00:10:49.590 Running for 1 seconds... 00:10:49.590 00:10:49.590 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.590 ------------------------------------------------------------------------------------ 00:10:49.590 0,0 449664/s 1756 MiB/s 0 0 00:10:49.590 ==================================================================================== 00:10:49.590 Total 449664/s 1756 MiB/s 0 0' 00:10:49.590 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.590 02:34:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:49.590 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.590 02:34:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:49.590 02:34:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.590 02:34:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.590 02:34:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.590 02:34:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.590 02:34:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.590 02:34:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.590 02:34:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.590 02:34:14 -- accel/accel.sh@42 -- # jq -r . 00:10:49.590 [2024-07-11 02:34:14.543811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:49.590 [2024-07-11 02:34:14.544011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119844 ] 00:10:49.590 [2024-07-11 02:34:14.679112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.849 [2024-07-11 02:34:14.772929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=0x1 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=compare 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=software 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=32 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=32 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=1 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val=Yes 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:49.849 02:34:14 -- accel/accel.sh@21 -- # val= 00:10:49.849 02:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:49.849 02:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 02:34:16 -- accel/accel.sh@21 -- # val= 00:10:51.322 02:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # IFS=: 00:10:51.322 02:34:16 -- accel/accel.sh@20 -- # read -r var val 00:10:51.322 ************************************ 00:10:51.322 END TEST accel_compare 00:10:51.322 ************************************ 00:10:51.322 02:34:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:51.322 02:34:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:51.322 02:34:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.322 00:10:51.322 real 0m2.978s 00:10:51.322 user 0m2.574s 00:10:51.322 sys 0m0.265s 00:10:51.322 02:34:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.322 02:34:16 -- common/autotest_common.sh@10 -- # set +x 00:10:51.322 02:34:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:51.322 02:34:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:51.322 02:34:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.322 02:34:16 -- common/autotest_common.sh@10 -- # set +x 00:10:51.322 ************************************ 00:10:51.322 START TEST accel_xor 00:10:51.322 ************************************ 00:10:51.322 02:34:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:51.322 02:34:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.322 02:34:16 -- accel/accel.sh@17 -- # local accel_module 00:10:51.322 
02:34:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:51.322 02:34:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:51.322 02:34:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.322 02:34:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.322 02:34:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.322 02:34:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.322 02:34:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.322 02:34:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.322 02:34:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.322 02:34:16 -- accel/accel.sh@42 -- # jq -r . 00:10:51.322 [2024-07-11 02:34:16.099371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:51.322 [2024-07-11 02:34:16.099633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119884 ] 00:10:51.322 [2024-07-11 02:34:16.246260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.322 [2024-07-11 02:34:16.316382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.697 02:34:17 -- accel/accel.sh@18 -- # out=' 00:10:52.697 SPDK Configuration: 00:10:52.697 Core mask: 0x1 00:10:52.697 00:10:52.697 Accel Perf Configuration: 00:10:52.697 Workload Type: xor 00:10:52.697 Source buffers: 2 00:10:52.697 Transfer size: 4096 bytes 00:10:52.697 Vector count 1 00:10:52.697 Module: software 00:10:52.697 Queue depth: 32 00:10:52.697 Allocate depth: 32 00:10:52.697 # threads/core: 1 00:10:52.697 Run time: 1 seconds 00:10:52.697 Verify: Yes 00:10:52.697 00:10:52.697 Running for 1 seconds... 00:10:52.697 00:10:52.698 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.698 ------------------------------------------------------------------------------------ 00:10:52.698 0,0 247744/s 967 MiB/s 0 0 00:10:52.698 ==================================================================================== 00:10:52.698 Total 247744/s 967 MiB/s 0 0' 00:10:52.698 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.698 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.698 02:34:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:52.698 02:34:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:52.698 02:34:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.698 02:34:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.698 02:34:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.698 02:34:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.698 02:34:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.698 02:34:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.698 02:34:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.698 02:34:17 -- accel/accel.sh@42 -- # jq -r . 00:10:52.698 [2024-07-11 02:34:17.599166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:52.698 [2024-07-11 02:34:17.599422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119913 ] 00:10:52.698 [2024-07-11 02:34:17.740257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.957 [2024-07-11 02:34:17.808336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=0x1 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=xor 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=2 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=software 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=32 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=32 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=1 00:10:52.957 02:34:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val=Yes 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:52.957 02:34:17 -- accel/accel.sh@21 -- # val= 00:10:52.957 02:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:52.957 02:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@21 -- # val= 00:10:54.333 02:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # IFS=: 00:10:54.333 02:34:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.333 02:34:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.333 02:34:19 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:54.333 02:34:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.333 ************************************ 00:10:54.333 END TEST accel_xor 00:10:54.333 ************************************ 00:10:54.333 00:10:54.333 real 0m2.987s 00:10:54.333 user 0m2.600s 00:10:54.333 sys 0m0.258s 00:10:54.333 02:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.333 02:34:19 -- common/autotest_common.sh@10 -- # set +x 00:10:54.333 02:34:19 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:54.333 02:34:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:54.333 02:34:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.333 02:34:19 -- common/autotest_common.sh@10 -- # set +x 00:10:54.333 ************************************ 00:10:54.333 START TEST accel_xor 00:10:54.333 ************************************ 00:10:54.333 
02:34:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:54.333 02:34:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.333 02:34:19 -- accel/accel.sh@17 -- # local accel_module 00:10:54.333 02:34:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:54.333 02:34:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:54.333 02:34:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.333 02:34:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.333 02:34:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.333 02:34:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.333 02:34:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.333 02:34:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.333 02:34:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.333 02:34:19 -- accel/accel.sh@42 -- # jq -r . 00:10:54.333 [2024-07-11 02:34:19.137325] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:54.333 [2024-07-11 02:34:19.137552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119953 ] 00:10:54.333 [2024-07-11 02:34:19.281944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.333 [2024-07-11 02:34:19.353408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.709 02:34:20 -- accel/accel.sh@18 -- # out=' 00:10:55.709 SPDK Configuration: 00:10:55.709 Core mask: 0x1 00:10:55.709 00:10:55.709 Accel Perf Configuration: 00:10:55.709 Workload Type: xor 00:10:55.709 Source buffers: 3 00:10:55.709 Transfer size: 4096 bytes 00:10:55.709 Vector count 1 00:10:55.709 Module: software 00:10:55.709 Queue depth: 32 00:10:55.709 Allocate depth: 32 00:10:55.709 # threads/core: 1 00:10:55.709 Run time: 1 seconds 00:10:55.709 Verify: Yes 00:10:55.709 00:10:55.709 Running for 1 seconds... 00:10:55.709 00:10:55.709 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:55.709 ------------------------------------------------------------------------------------ 00:10:55.709 0,0 235296/s 919 MiB/s 0 0 00:10:55.709 ==================================================================================== 00:10:55.709 Total 235296/s 919 MiB/s 0 0' 00:10:55.709 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.709 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.709 02:34:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:55.709 02:34:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:55.709 02:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.709 02:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.709 02:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.709 02:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.709 02:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.709 02:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.709 02:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.709 02:34:20 -- accel/accel.sh@42 -- # jq -r . 00:10:55.709 [2024-07-11 02:34:20.622810] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:55.709 [2024-07-11 02:34:20.623059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119980 ] 00:10:55.709 [2024-07-11 02:34:20.768408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.968 [2024-07-11 02:34:20.843854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=0x1 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=xor 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=3 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=software 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=32 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=32 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=1 00:10:55.968 02:34:20 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val=Yes 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:55.968 02:34:20 -- accel/accel.sh@21 -- # val= 00:10:55.968 02:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:55.968 02:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@21 -- # val= 00:10:57.340 02:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # IFS=: 00:10:57.340 02:34:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.340 02:34:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:57.340 02:34:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:57.340 02:34:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.340 ************************************ 00:10:57.340 END TEST accel_xor 00:10:57.340 ************************************ 00:10:57.340 00:10:57.340 real 0m2.996s 00:10:57.340 user 0m2.601s 00:10:57.340 sys 0m0.255s 00:10:57.340 02:34:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.340 02:34:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.340 02:34:22 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:57.340 02:34:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:57.340 02:34:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:57.340 02:34:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.340 ************************************ 00:10:57.340 START TEST accel_dif_verify 00:10:57.340 ************************************ 
00:10:57.340 02:34:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:57.340 02:34:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.340 02:34:22 -- accel/accel.sh@17 -- # local accel_module 00:10:57.341 02:34:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:57.341 02:34:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:57.341 02:34:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.341 02:34:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.341 02:34:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.341 02:34:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.341 02:34:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.341 02:34:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.341 02:34:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.341 02:34:22 -- accel/accel.sh@42 -- # jq -r . 00:10:57.341 [2024-07-11 02:34:22.182993] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:57.341 [2024-07-11 02:34:22.183244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120035 ] 00:10:57.341 [2024-07-11 02:34:22.329091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.341 [2024-07-11 02:34:22.415092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.717 02:34:23 -- accel/accel.sh@18 -- # out=' 00:10:58.717 SPDK Configuration: 00:10:58.717 Core mask: 0x1 00:10:58.717 00:10:58.717 Accel Perf Configuration: 00:10:58.717 Workload Type: dif_verify 00:10:58.717 Vector size: 4096 bytes 00:10:58.717 Transfer size: 4096 bytes 00:10:58.717 Block size: 512 bytes 00:10:58.717 Metadata size: 8 bytes 00:10:58.717 Vector count 1 00:10:58.717 Module: software 00:10:58.717 Queue depth: 32 00:10:58.717 Allocate depth: 32 00:10:58.717 # threads/core: 1 00:10:58.717 Run time: 1 seconds 00:10:58.717 Verify: No 00:10:58.717 00:10:58.717 Running for 1 seconds... 00:10:58.717 00:10:58.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.717 ------------------------------------------------------------------------------------ 00:10:58.717 0,0 113920/s 451 MiB/s 0 0 00:10:58.717 ==================================================================================== 00:10:58.717 Total 113920/s 445 MiB/s 0 0' 00:10:58.717 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.717 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.717 02:34:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:58.717 02:34:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:58.717 02:34:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.717 02:34:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.717 02:34:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.717 02:34:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.717 02:34:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.717 02:34:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.717 02:34:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.717 02:34:23 -- accel/accel.sh@42 -- # jq -r . 00:10:58.717 [2024-07-11 02:34:23.696818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:58.717 [2024-07-11 02:34:23.697128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120069 ] 00:10:58.976 [2024-07-11 02:34:23.842865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.976 [2024-07-11 02:34:23.922081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=0x1 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=dif_verify 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=software 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@23 -- # accel_module=software 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- 
accel/accel.sh@21 -- # val=32 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=32 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=1 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val=No 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:58.976 02:34:23 -- accel/accel.sh@21 -- # val= 00:10:58.976 02:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:58.976 02:34:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@21 -- # val= 00:11:00.363 02:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # IFS=: 00:11:00.363 02:34:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.363 02:34:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.363 02:34:25 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:00.363 ************************************ 00:11:00.363 END TEST accel_dif_verify 00:11:00.363 ************************************ 00:11:00.363 02:34:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.363 00:11:00.363 real 0m3.037s 00:11:00.363 user 0m1.271s 00:11:00.363 sys 0m0.185s 00:11:00.363 02:34:25 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:00.363 02:34:25 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 02:34:25 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:00.363 02:34:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:00.363 02:34:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.363 02:34:25 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 ************************************ 00:11:00.363 START TEST accel_dif_generate 00:11:00.363 ************************************ 00:11:00.363 02:34:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:00.363 02:34:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.363 02:34:25 -- accel/accel.sh@17 -- # local accel_module 00:11:00.363 02:34:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:00.363 02:34:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:00.363 02:34:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.363 02:34:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.363 02:34:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.363 02:34:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.363 02:34:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.363 02:34:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.363 02:34:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.363 02:34:25 -- accel/accel.sh@42 -- # jq -r . 00:11:00.363 [2024-07-11 02:34:25.269593] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:00.363 [2024-07-11 02:34:25.269858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120104 ] 00:11:00.363 [2024-07-11 02:34:25.418656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.621 [2024-07-11 02:34:25.519037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.995 02:34:26 -- accel/accel.sh@18 -- # out=' 00:11:01.995 SPDK Configuration: 00:11:01.995 Core mask: 0x1 00:11:01.995 00:11:01.995 Accel Perf Configuration: 00:11:01.995 Workload Type: dif_generate 00:11:01.995 Vector size: 4096 bytes 00:11:01.995 Transfer size: 4096 bytes 00:11:01.995 Block size: 512 bytes 00:11:01.995 Metadata size: 8 bytes 00:11:01.995 Vector count 1 00:11:01.995 Module: software 00:11:01.995 Queue depth: 32 00:11:01.995 Allocate depth: 32 00:11:01.995 # threads/core: 1 00:11:01.995 Run time: 1 seconds 00:11:01.995 Verify: No 00:11:01.995 00:11:01.995 Running for 1 seconds... 
00:11:01.995 00:11:01.995 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:01.995 ------------------------------------------------------------------------------------ 00:11:01.995 0,0 130240/s 508 MiB/s 0 0 00:11:01.995 ==================================================================================== 00:11:01.995 Total 130240/s 508 MiB/s 0 0' 00:11:01.995 02:34:26 -- accel/accel.sh@20 -- # IFS=: 00:11:01.995 02:34:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:01.995 02:34:26 -- accel/accel.sh@20 -- # read -r var val 00:11:01.995 02:34:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:01.995 02:34:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.995 02:34:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.995 02:34:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.995 02:34:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.995 02:34:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.995 02:34:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.995 02:34:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.995 02:34:26 -- accel/accel.sh@42 -- # jq -r . 00:11:01.995 [2024-07-11 02:34:26.815350] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:01.995 [2024-07-11 02:34:26.816154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120138 ] 00:11:01.995 [2024-07-11 02:34:26.962406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.995 [2024-07-11 02:34:27.050489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val=0x1 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val=dif_generate 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 
00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val=software 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val=32 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.252 02:34:27 -- accel/accel.sh@21 -- # val=32 00:11:02.252 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.252 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.253 02:34:27 -- accel/accel.sh@21 -- # val=1 00:11:02.253 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.253 02:34:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.253 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.253 02:34:27 -- accel/accel.sh@21 -- # val=No 00:11:02.253 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.253 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.253 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:02.253 02:34:27 -- accel/accel.sh@21 -- # val= 00:11:02.253 02:34:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # IFS=: 00:11:02.253 02:34:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- 
accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 02:34:28 -- accel/accel.sh@21 -- # val= 00:11:03.654 02:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # IFS=: 00:11:03.654 02:34:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.654 ************************************ 00:11:03.654 END TEST accel_dif_generate 00:11:03.654 ************************************ 00:11:03.654 02:34:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:03.654 02:34:28 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:03.654 02:34:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:03.654 00:11:03.654 real 0m3.074s 00:11:03.654 user 0m2.651s 00:11:03.654 sys 0m0.276s 00:11:03.654 02:34:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.654 02:34:28 -- common/autotest_common.sh@10 -- # set +x 00:11:03.654 02:34:28 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:03.654 02:34:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:03.654 02:34:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.654 02:34:28 -- common/autotest_common.sh@10 -- # set +x 00:11:03.654 ************************************ 00:11:03.654 START TEST accel_dif_generate_copy 00:11:03.654 ************************************ 00:11:03.654 02:34:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:03.654 02:34:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:03.654 02:34:28 -- accel/accel.sh@17 -- # local accel_module 00:11:03.654 02:34:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:03.654 02:34:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:03.654 02:34:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.654 02:34:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.654 02:34:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.654 02:34:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.654 02:34:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.654 02:34:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.654 02:34:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.654 02:34:28 -- accel/accel.sh@42 -- # jq -r . 00:11:03.654 [2024-07-11 02:34:28.398441] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:03.654 [2024-07-11 02:34:28.398679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120173 ] 00:11:03.654 [2024-07-11 02:34:28.545214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.654 [2024-07-11 02:34:28.612939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.030 02:34:29 -- accel/accel.sh@18 -- # out=' 00:11:05.030 SPDK Configuration: 00:11:05.030 Core mask: 0x1 00:11:05.030 00:11:05.030 Accel Perf Configuration: 00:11:05.030 Workload Type: dif_generate_copy 00:11:05.030 Vector size: 4096 bytes 00:11:05.030 Transfer size: 4096 bytes 00:11:05.030 Vector count 1 00:11:05.030 Module: software 00:11:05.030 Queue depth: 32 00:11:05.030 Allocate depth: 32 00:11:05.030 # threads/core: 1 00:11:05.030 Run time: 1 seconds 00:11:05.030 Verify: No 00:11:05.030 00:11:05.030 Running for 1 seconds... 00:11:05.030 00:11:05.030 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:05.030 ------------------------------------------------------------------------------------ 00:11:05.030 0,0 100160/s 391 MiB/s 0 0 00:11:05.030 ==================================================================================== 00:11:05.030 Total 100160/s 391 MiB/s 0 0' 00:11:05.030 02:34:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:05.030 02:34:29 -- accel/accel.sh@20 -- # IFS=: 00:11:05.030 02:34:29 -- accel/accel.sh@20 -- # read -r var val 00:11:05.030 02:34:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:05.030 02:34:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.030 02:34:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.030 02:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.030 02:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.030 02:34:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.030 02:34:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.030 02:34:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.030 02:34:29 -- accel/accel.sh@42 -- # jq -r . 00:11:05.030 [2024-07-11 02:34:29.896693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:05.030 [2024-07-11 02:34:29.896911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120206 ] 00:11:05.030 [2024-07-11 02:34:30.041997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.289 [2024-07-11 02:34:30.130064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=0x1 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=software 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@23 -- # accel_module=software 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=32 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=32 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 
-- # val=1 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val=No 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:05.289 02:34:30 -- accel/accel.sh@21 -- # val= 00:11:05.289 02:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # IFS=: 00:11:05.289 02:34:30 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@21 -- # val= 00:11:06.665 02:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # IFS=: 00:11:06.665 02:34:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.665 02:34:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:06.665 02:34:31 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:06.665 02:34:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:06.665 00:11:06.665 real 0m2.996s 00:11:06.665 user 0m2.595s 00:11:06.665 sys 0m0.264s 00:11:06.665 02:34:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.665 ************************************ 00:11:06.665 END TEST accel_dif_generate_copy 00:11:06.665 ************************************ 00:11:06.665 02:34:31 -- common/autotest_common.sh@10 -- # set +x 00:11:06.665 02:34:31 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:06.665 02:34:31 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:06.665 02:34:31 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:06.665 02:34:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.665 02:34:31 -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.665 ************************************ 00:11:06.665 START TEST accel_comp 00:11:06.665 ************************************ 00:11:06.665 02:34:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:06.665 02:34:31 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.665 02:34:31 -- accel/accel.sh@17 -- # local accel_module 00:11:06.665 02:34:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:06.665 02:34:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:06.665 02:34:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.665 02:34:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.665 02:34:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.665 02:34:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.665 02:34:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.665 02:34:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.665 02:34:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.665 02:34:31 -- accel/accel.sh@42 -- # jq -r . 00:11:06.665 [2024-07-11 02:34:31.447219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:06.665 [2024-07-11 02:34:31.447445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120242 ] 00:11:06.665 [2024-07-11 02:34:31.593515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.665 [2024-07-11 02:34:31.670616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.041 02:34:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:08.041 00:11:08.041 SPDK Configuration: 00:11:08.041 Core mask: 0x1 00:11:08.041 00:11:08.041 Accel Perf Configuration: 00:11:08.041 Workload Type: compress 00:11:08.041 Transfer size: 4096 bytes 00:11:08.041 Vector count 1 00:11:08.041 Module: software 00:11:08.041 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.041 Queue depth: 32 00:11:08.041 Allocate depth: 32 00:11:08.041 # threads/core: 1 00:11:08.041 Run time: 1 seconds 00:11:08.041 Verify: No 00:11:08.041 00:11:08.041 Running for 1 seconds... 
00:11:08.041 00:11:08.041 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.041 ------------------------------------------------------------------------------------ 00:11:08.041 0,0 54496/s 212 MiB/s 0 0 00:11:08.041 ==================================================================================== 00:11:08.041 Total 54496/s 212 MiB/s 0 0' 00:11:08.041 02:34:32 -- accel/accel.sh@20 -- # IFS=: 00:11:08.041 02:34:32 -- accel/accel.sh@20 -- # read -r var val 00:11:08.041 02:34:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.041 02:34:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.041 02:34:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.041 02:34:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.041 02:34:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.041 02:34:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.041 02:34:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.041 02:34:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.041 02:34:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.041 02:34:32 -- accel/accel.sh@42 -- # jq -r . 00:11:08.041 [2024-07-11 02:34:32.956484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:08.041 [2024-07-11 02:34:32.956722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120291 ] 00:11:08.041 [2024-07-11 02:34:33.101410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.300 [2024-07-11 02:34:33.192781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=0x1 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=compress 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 
00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=software 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=32 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=32 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=1 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val=No 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:08.300 02:34:33 -- accel/accel.sh@21 -- # val= 00:11:08.300 02:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # IFS=: 00:11:08.300 02:34:33 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 
00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@21 -- # val= 00:11:09.676 02:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # IFS=: 00:11:09.676 02:34:34 -- accel/accel.sh@20 -- # read -r var val 00:11:09.676 02:34:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:09.676 02:34:34 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:09.676 02:34:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.676 00:11:09.676 real 0m3.041s 00:11:09.676 user 0m2.640s 00:11:09.676 sys 0m0.266s 00:11:09.676 02:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.676 ************************************ 00:11:09.676 02:34:34 -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 END TEST accel_comp 00:11:09.676 ************************************ 00:11:09.676 02:34:34 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.676 02:34:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:09.676 02:34:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.676 02:34:34 -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 ************************************ 00:11:09.676 START TEST accel_decomp 00:11:09.676 ************************************ 00:11:09.676 02:34:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.676 02:34:34 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.676 02:34:34 -- accel/accel.sh@17 -- # local accel_module 00:11:09.676 02:34:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.676 02:34:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.676 02:34:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.676 02:34:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.676 02:34:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.676 02:34:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.676 02:34:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.676 02:34:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.676 02:34:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.676 02:34:34 -- accel/accel.sh@42 -- # jq -r . 00:11:09.676 [2024-07-11 02:34:34.537380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:09.677 [2024-07-11 02:34:34.537609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120331 ] 00:11:09.677 [2024-07-11 02:34:34.685462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.677 [2024-07-11 02:34:34.756769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.053 02:34:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:11.053 00:11:11.053 SPDK Configuration: 00:11:11.053 Core mask: 0x1 00:11:11.053 00:11:11.053 Accel Perf Configuration: 00:11:11.053 Workload Type: decompress 00:11:11.053 Transfer size: 4096 bytes 00:11:11.053 Vector count 1 00:11:11.053 Module: software 00:11:11.053 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.053 Queue depth: 32 00:11:11.053 Allocate depth: 32 00:11:11.053 # threads/core: 1 00:11:11.053 Run time: 1 seconds 00:11:11.053 Verify: Yes 00:11:11.053 00:11:11.053 Running for 1 seconds... 00:11:11.053 00:11:11.053 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.053 ------------------------------------------------------------------------------------ 00:11:11.053 0,0 68800/s 268 MiB/s 0 0 00:11:11.053 ==================================================================================== 00:11:11.053 Total 68800/s 268 MiB/s 0 0' 00:11:11.053 02:34:35 -- accel/accel.sh@20 -- # IFS=: 00:11:11.053 02:34:35 -- accel/accel.sh@20 -- # read -r var val 00:11:11.053 02:34:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:11.053 02:34:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:11.053 02:34:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.053 02:34:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.053 02:34:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.053 02:34:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.053 02:34:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.053 02:34:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.053 02:34:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.053 02:34:36 -- accel/accel.sh@42 -- # jq -r . 00:11:11.053 [2024-07-11 02:34:36.027522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:11.053 [2024-07-11 02:34:36.027781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120360 ] 00:11:11.312 [2024-07-11 02:34:36.173555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.312 [2024-07-11 02:34:36.257705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=0x1 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=decompress 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=software 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=32 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- 
accel/accel.sh@21 -- # val=32 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=1 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val=Yes 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:11.312 02:34:36 -- accel/accel.sh@21 -- # val= 00:11:11.312 02:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # IFS=: 00:11:11.312 02:34:36 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 02:34:37 -- accel/accel.sh@21 -- # val= 00:11:12.688 02:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # IFS=: 00:11:12.688 02:34:37 -- accel/accel.sh@20 -- # read -r var val 00:11:12.688 ************************************ 00:11:12.688 END TEST accel_decomp 00:11:12.688 ************************************ 00:11:12.688 02:34:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:12.688 02:34:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:12.689 02:34:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:12.689 00:11:12.689 real 0m3.010s 00:11:12.689 user 0m2.607s 00:11:12.689 sys 0m0.278s 00:11:12.689 02:34:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.689 02:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.689 02:34:37 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:11:12.689 02:34:37 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:12.689 02:34:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.689 02:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.689 ************************************ 00:11:12.689 START TEST accel_decmop_full 00:11:12.689 ************************************ 00:11:12.689 02:34:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:12.689 02:34:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:12.689 02:34:37 -- accel/accel.sh@17 -- # local accel_module 00:11:12.689 02:34:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:12.689 02:34:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:12.689 02:34:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:12.689 02:34:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:12.689 02:34:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.689 02:34:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.689 02:34:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:12.689 02:34:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:12.689 02:34:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:12.689 02:34:37 -- accel/accel.sh@42 -- # jq -r . 00:11:12.689 [2024-07-11 02:34:37.596721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:12.689 [2024-07-11 02:34:37.596995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120401 ] 00:11:12.689 [2024-07-11 02:34:37.743991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.947 [2024-07-11 02:34:37.835618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.322 02:34:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:14.322 00:11:14.322 SPDK Configuration: 00:11:14.322 Core mask: 0x1 00:11:14.322 00:11:14.322 Accel Perf Configuration: 00:11:14.322 Workload Type: decompress 00:11:14.322 Transfer size: 111250 bytes 00:11:14.322 Vector count 1 00:11:14.322 Module: software 00:11:14.322 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:14.322 Queue depth: 32 00:11:14.322 Allocate depth: 32 00:11:14.322 # threads/core: 1 00:11:14.322 Run time: 1 seconds 00:11:14.322 Verify: Yes 00:11:14.322 00:11:14.322 Running for 1 seconds... 
00:11:14.322 00:11:14.322 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:14.322 ------------------------------------------------------------------------------------ 00:11:14.322 0,0 5056/s 536 MiB/s 0 0 00:11:14.322 ==================================================================================== 00:11:14.322 Total 5056/s 536 MiB/s 0 0' 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:14.322 02:34:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:14.322 02:34:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.322 02:34:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.322 02:34:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.322 02:34:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.322 02:34:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.322 02:34:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.322 02:34:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.322 02:34:39 -- accel/accel.sh@42 -- # jq -r . 00:11:14.322 [2024-07-11 02:34:39.110839] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:14.322 [2024-07-11 02:34:39.111073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120429 ] 00:11:14.322 [2024-07-11 02:34:39.258898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.322 [2024-07-11 02:34:39.338656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=0x1 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=decompress 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:14.322 02:34:39 -- 
accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=software 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=32 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=32 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.322 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.322 02:34:39 -- accel/accel.sh@21 -- # val=1 00:11:14.322 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.323 02:34:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:14.323 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.323 02:34:39 -- accel/accel.sh@21 -- # val=Yes 00:11:14.323 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.323 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.323 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:14.323 02:34:39 -- accel/accel.sh@21 -- # val= 00:11:14.323 02:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # IFS=: 00:11:14.323 02:34:39 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- 
accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 02:34:40 -- accel/accel.sh@21 -- # val= 00:11:15.698 02:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # IFS=: 00:11:15.698 02:34:40 -- accel/accel.sh@20 -- # read -r var val 00:11:15.698 ************************************ 00:11:15.698 END TEST accel_decmop_full 00:11:15.698 ************************************ 00:11:15.698 02:34:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.698 02:34:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:15.698 02:34:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.698 00:11:15.698 real 0m3.043s 00:11:15.698 user 0m2.619s 00:11:15.698 sys 0m0.291s 00:11:15.698 02:34:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.698 02:34:40 -- common/autotest_common.sh@10 -- # set +x 00:11:15.698 02:34:40 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.698 02:34:40 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:15.698 02:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.698 02:34:40 -- common/autotest_common.sh@10 -- # set +x 00:11:15.698 ************************************ 00:11:15.698 START TEST accel_decomp_mcore 00:11:15.698 ************************************ 00:11:15.698 02:34:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.698 02:34:40 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.698 02:34:40 -- accel/accel.sh@17 -- # local accel_module 00:11:15.698 02:34:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.698 02:34:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.698 02:34:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.698 02:34:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.698 02:34:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.698 02:34:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.698 02:34:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.698 02:34:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.698 02:34:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.698 02:34:40 -- accel/accel.sh@42 -- # jq -r . 00:11:15.698 [2024-07-11 02:34:40.690075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:15.698 [2024-07-11 02:34:40.690319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120464 ] 00:11:15.956 [2024-07-11 02:34:40.854816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.956 [2024-07-11 02:34:40.947058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.956 [2024-07-11 02:34:40.947202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.956 [2024-07-11 02:34:40.947346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.956 [2024-07-11 02:34:40.947347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.333 02:34:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:17.333 00:11:17.333 SPDK Configuration: 00:11:17.333 Core mask: 0xf 00:11:17.333 00:11:17.333 Accel Perf Configuration: 00:11:17.333 Workload Type: decompress 00:11:17.333 Transfer size: 4096 bytes 00:11:17.333 Vector count 1 00:11:17.333 Module: software 00:11:17.333 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:17.333 Queue depth: 32 00:11:17.333 Allocate depth: 32 00:11:17.333 # threads/core: 1 00:11:17.333 Run time: 1 seconds 00:11:17.333 Verify: Yes 00:11:17.333 00:11:17.333 Running for 1 seconds... 00:11:17.333 00:11:17.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.333 ------------------------------------------------------------------------------------ 00:11:17.333 0,0 55552/s 102 MiB/s 0 0 00:11:17.333 3,0 54976/s 101 MiB/s 0 0 00:11:17.333 2,0 55648/s 102 MiB/s 0 0 00:11:17.333 1,0 53952/s 99 MiB/s 0 0 00:11:17.333 ==================================================================================== 00:11:17.333 Total 220128/s 859 MiB/s 0 0' 00:11:17.333 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.333 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.333 02:34:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:17.333 02:34:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:17.333 02:34:42 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.334 02:34:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.334 02:34:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.334 02:34:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.334 02:34:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.334 02:34:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.334 02:34:42 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.334 02:34:42 -- accel/accel.sh@42 -- # jq -r . 00:11:17.334 [2024-07-11 02:34:42.314516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
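The run above is that wrapper boiling down to a single accel_perf invocation. The flags below are verbatim from the run_test line; only the comments are interpretation, anchored to the configuration dump (-m 0xf becomes Core mask: 0xf and the EAL coremask -c 0xf, hence four Reactor started notices and one table row per core; -t 1, -w, -l and -y line up with Run time, Workload Type, File Name and Verify: Yes):

    # -c /dev/fd/62 feeds in the JSON accel config assembled by build_accel_config
    # (an inference from the trace); the remaining flags are copied from the log as-is
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 \
        -t 1 \
        -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y \
        -m 0xf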
00:11:17.334 [2024-07-11 02:34:42.314701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120520 ] 00:11:17.592 [2024-07-11 02:34:42.474091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.592 [2024-07-11 02:34:42.614006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.592 [2024-07-11 02:34:42.614125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.592 [2024-07-11 02:34:42.614233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.592 [2024-07-11 02:34:42.614251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=0xf 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=decompress 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=software 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@23 -- # accel_module=software 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 
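The IFS=: / read -r var val / case "$var" records repeating through this capture are accel.sh (lines 20-24 by the @ markers) walking accel_perf's configuration dump one field at a time; each val=... record is one field echoed back (val=software, val='111250 bytes', val=Yes, ...), and the loop latches the module and opcode that the [[ -n software ]] / [[ -n decompress ]] checks assert at the end of each test. A sketch of the shape of that loop, reconstructed from the trace, with the case patterns assumed (the trace shows only the resulting assignments):

    # demo input standing in for "$out", the captured accel_perf output (accel/accel.sh@18)
    out=$'Module: software\nWorkload Type: decompress'
    while IFS=: read -r var val; do
        case "$var" in
            *Module*)          accel_module=${val# } ;;   # -> accel_module=software  (@23)
            *'Workload Type'*) accel_opc=${val# } ;;      # -> accel_opc=decompress   (@24)
        esac
    done <<< "$out"
    [[ -n $accel_module && -n $accel_opc ]]               # mirrors the @28 assertions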
00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=32 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=32 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=1 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val=Yes 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:17.851 02:34:42 -- accel/accel.sh@21 -- # val= 00:11:17.851 02:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # IFS=: 00:11:17.851 02:34:42 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- 
accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@21 -- # val= 00:11:19.225 02:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:19.225 02:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:19.225 02:34:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.225 02:34:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:19.225 02:34:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.225 00:11:19.225 real 0m3.315s 00:11:19.225 user 0m10.109s 00:11:19.225 sys 0m0.348s 00:11:19.225 02:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.225 02:34:43 -- common/autotest_common.sh@10 -- # set +x 00:11:19.225 ************************************ 00:11:19.225 END TEST accel_decomp_mcore 00:11:19.225 ************************************ 00:11:19.225 02:34:44 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:19.225 02:34:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:19.225 02:34:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.225 02:34:44 -- common/autotest_common.sh@10 -- # set +x 00:11:19.225 ************************************ 00:11:19.225 START TEST accel_decomp_full_mcore 00:11:19.225 ************************************ 00:11:19.225 02:34:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:19.225 02:34:44 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.225 02:34:44 -- accel/accel.sh@17 -- # local accel_module 00:11:19.225 02:34:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:19.225 02:34:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:19.225 02:34:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.225 02:34:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.225 02:34:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.225 02:34:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.225 02:34:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.225 02:34:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.225 02:34:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.225 02:34:44 -- accel/accel.sh@42 -- # jq -r . 00:11:19.225 [2024-07-11 02:34:44.054646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:19.225 [2024-07-11 02:34:44.054885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120565 ] 00:11:19.225 [2024-07-11 02:34:44.220981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.483 [2024-07-11 02:34:44.362950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.483 [2024-07-11 02:34:44.363080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.483 [2024-07-11 02:34:44.363225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.483 [2024-07-11 02:34:44.363226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.856 02:34:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:20.856 00:11:20.856 SPDK Configuration: 00:11:20.856 Core mask: 0xf 00:11:20.856 00:11:20.856 Accel Perf Configuration: 00:11:20.856 Workload Type: decompress 00:11:20.856 Transfer size: 111250 bytes 00:11:20.856 Vector count 1 00:11:20.856 Module: software 00:11:20.856 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:20.856 Queue depth: 32 00:11:20.856 Allocate depth: 32 00:11:20.856 # threads/core: 1 00:11:20.856 Run time: 1 seconds 00:11:20.856 Verify: Yes 00:11:20.856 00:11:20.856 Running for 1 seconds... 00:11:20.856 00:11:20.856 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:20.856 ------------------------------------------------------------------------------------ 00:11:20.856 0,0 4768/s 196 MiB/s 0 0 00:11:20.856 3,0 4704/s 194 MiB/s 0 0 00:11:20.856 2,0 4704/s 194 MiB/s 0 0 00:11:20.856 1,0 4704/s 194 MiB/s 0 0 00:11:20.856 ==================================================================================== 00:11:20.856 Total 18880/s 2003 MiB/s 0 0' 00:11:20.856 02:34:45 -- accel/accel.sh@20 -- # IFS=: 00:11:20.856 02:34:45 -- accel/accel.sh@20 -- # read -r var val 00:11:20.856 02:34:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:20.856 02:34:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:20.856 02:34:45 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.856 02:34:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.856 02:34:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.856 02:34:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.856 02:34:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.856 02:34:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.856 02:34:45 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.856 02:34:45 -- accel/accel.sh@42 -- # jq -r . 00:11:20.856 [2024-07-11 02:34:45.768852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
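Relative to the accel_decomp_mcore run above, the only new flag is -o 0, and the dump flips from Transfer size: 4096 bytes to Transfer size: 111250 bytes, the same size seen in the '111250 bytes' val records of the earlier accel_decmop_full trace. Reading -o 0 as selecting that full transfer size is an inference from this capture, not from accel_perf's usage text. Side by side:

    bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
    accel_test -t 1 -w decompress -l "$bib" -y -m 0xf        # 4096-byte transfers   (accel_decomp_mcore)
    accel_test -t 1 -w decompress -l "$bib" -y -o 0 -m 0xf   # 111250-byte transfers (accel_decomp_full_mcore)

The tradeoff shows in the tables: per-core transfers drop from roughly 55k/s to 4.7k/s while per-core bandwidth nearly doubles, 102 to 194-196 MiB/s.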
00:11:20.856 [2024-07-11 02:34:45.769071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120595 ] 00:11:20.856 [2024-07-11 02:34:45.926987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.115 [2024-07-11 02:34:46.060069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.115 [2024-07-11 02:34:46.060184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.115 [2024-07-11 02:34:46.060312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.115 [2024-07-11 02:34:46.060319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=0xf 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=decompress 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=software 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 
00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=32 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=32 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=1 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val=Yes 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:21.115 02:34:46 -- accel/accel.sh@21 -- # val= 00:11:21.115 02:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:21.115 02:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- 
accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@21 -- # val= 00:11:22.490 02:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:22.490 02:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:22.490 02:34:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.490 02:34:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:22.490 02:34:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.490 00:11:22.490 real 0m3.404s 00:11:22.490 user 0m10.249s 00:11:22.490 sys 0m0.431s 00:11:22.490 02:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.490 02:34:47 -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 ************************************ 00:11:22.490 END TEST accel_decomp_full_mcore 00:11:22.490 ************************************ 00:11:22.490 02:34:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:22.490 02:34:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:22.490 02:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.490 02:34:47 -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 ************************************ 00:11:22.490 START TEST accel_decomp_mthread 00:11:22.490 ************************************ 00:11:22.490 02:34:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:22.490 02:34:47 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.490 02:34:47 -- accel/accel.sh@17 -- # local accel_module 00:11:22.490 02:34:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:22.490 02:34:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:22.490 02:34:47 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.490 02:34:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.490 02:34:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.490 02:34:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.490 02:34:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.490 02:34:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.490 02:34:47 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.490 02:34:47 -- accel/accel.sh@42 -- # jq -r . 00:11:22.490 [2024-07-11 02:34:47.509915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:22.490 [2024-07-11 02:34:47.510341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120645 ] 00:11:22.750 [2024-07-11 02:34:47.649771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.750 [2024-07-11 02:34:47.770325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.123 02:34:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:24.123 00:11:24.123 SPDK Configuration: 00:11:24.123 Core mask: 0x1 00:11:24.123 00:11:24.123 Accel Perf Configuration: 00:11:24.123 Workload Type: decompress 00:11:24.123 Transfer size: 4096 bytes 00:11:24.123 Vector count 1 00:11:24.123 Module: software 00:11:24.123 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.123 Queue depth: 32 00:11:24.123 Allocate depth: 32 00:11:24.123 # threads/core: 2 00:11:24.123 Run time: 1 seconds 00:11:24.123 Verify: Yes 00:11:24.123 00:11:24.123 Running for 1 seconds... 00:11:24.123 00:11:24.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.123 ------------------------------------------------------------------------------------ 00:11:24.123 0,1 33312/s 61 MiB/s 0 0 00:11:24.123 0,0 33152/s 61 MiB/s 0 0 00:11:24.123 ==================================================================================== 00:11:24.123 Total 66464/s 259 MiB/s 0 0' 00:11:24.123 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.123 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.123 02:34:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:24.123 02:34:49 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.123 02:34:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:24.123 02:34:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.123 02:34:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.123 02:34:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.123 02:34:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.123 02:34:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.123 02:34:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.123 02:34:49 -- accel/accel.sh@42 -- # jq -r . 00:11:24.123 [2024-07-11 02:34:49.140385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
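-T 2 asks for two worker threads per core (# threads/core: 2 in the dump), so this single-core run (Core mask: 0x1, and -c 0x1 on the EAL line) reports two rows, 0,0 and 0,1, meaning core 0 threads 0 and 1, splitting the roughly 66k transfers/s almost evenly. Shape of the call, flags verbatim from the run_test line, with no -m so the single-core default applies:

    accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2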
00:11:24.123 [2024-07-11 02:34:49.140820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120674 ] 00:11:24.382 [2024-07-11 02:34:49.285101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.382 [2024-07-11 02:34:49.400013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=0x1 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=decompress 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=software 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@23 -- # accel_module=software 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=32 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- 
accel/accel.sh@21 -- # val=32 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=2 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val=Yes 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:24.641 02:34:49 -- accel/accel.sh@21 -- # val= 00:11:24.641 02:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:24.641 02:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 02:34:50 -- accel/accel.sh@21 -- # val= 00:11:26.047 02:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # IFS=: 00:11:26.047 02:34:50 -- accel/accel.sh@20 -- # read -r var val 00:11:26.047 ************************************ 00:11:26.047 END TEST accel_decomp_mthread 00:11:26.047 ************************************ 00:11:26.047 02:34:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:26.047 02:34:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:26.047 02:34:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.047 00:11:26.047 real 0m3.269s 00:11:26.047 user 0m2.784s 00:11:26.047 sys 0m0.345s 00:11:26.047 02:34:50 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:26.047 02:34:50 -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 02:34:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.047 02:34:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:26.047 02:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.047 02:34:50 -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 ************************************ 00:11:26.047 START TEST accel_deomp_full_mthread 00:11:26.047 ************************************ 00:11:26.047 02:34:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.047 02:34:50 -- accel/accel.sh@16 -- # local accel_opc 00:11:26.047 02:34:50 -- accel/accel.sh@17 -- # local accel_module 00:11:26.047 02:34:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.047 02:34:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.047 02:34:50 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.047 02:34:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.047 02:34:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.047 02:34:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.047 02:34:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.047 02:34:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.047 02:34:50 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.047 02:34:50 -- accel/accel.sh@42 -- # jq -r . 00:11:26.047 [2024-07-11 02:34:50.831506] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:26.047 [2024-07-11 02:34:50.832489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120714 ] 00:11:26.047 [2024-07-11 02:34:50.976084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.047 [2024-07-11 02:34:51.098778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.426 02:34:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:27.426 00:11:27.426 SPDK Configuration: 00:11:27.426 Core mask: 0x1 00:11:27.426 00:11:27.426 Accel Perf Configuration: 00:11:27.426 Workload Type: decompress 00:11:27.426 Transfer size: 111250 bytes 00:11:27.426 Vector count 1 00:11:27.426 Module: software 00:11:27.426 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.426 Queue depth: 32 00:11:27.426 Allocate depth: 32 00:11:27.426 # threads/core: 2 00:11:27.426 Run time: 1 seconds 00:11:27.426 Verify: Yes 00:11:27.426 00:11:27.426 Running for 1 seconds... 
00:11:27.426 00:11:27.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:27.426 ------------------------------------------------------------------------------------ 00:11:27.426 0,1 2528/s 104 MiB/s 0 0 00:11:27.426 0,0 2464/s 101 MiB/s 0 0 00:11:27.426 ==================================================================================== 00:11:27.426 Total 4992/s 529 MiB/s 0 0' 00:11:27.426 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.426 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.426 02:34:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.426 02:34:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.426 02:34:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.426 02:34:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.426 02:34:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.426 02:34:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.426 02:34:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.426 02:34:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.426 02:34:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.426 02:34:52 -- accel/accel.sh@42 -- # jq -r . 00:11:27.685 [2024-07-11 02:34:52.524081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:27.685 [2024-07-11 02:34:52.525103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120763 ] 00:11:27.685 [2024-07-11 02:34:52.673308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.944 [2024-07-11 02:34:52.779214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=0x1 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=decompress 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=software 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@23 -- # accel_module=software 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=32 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=32 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=2 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val=Yes 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:27.944 02:34:52 -- accel/accel.sh@21 -- # val= 00:11:27.944 02:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:27.944 02:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # 
read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 02:34:54 -- accel/accel.sh@21 -- # val= 00:11:29.320 02:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # IFS=: 00:11:29.320 02:34:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.320 ************************************ 00:11:29.320 END TEST accel_deomp_full_mthread 00:11:29.320 ************************************ 00:11:29.320 02:34:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.320 02:34:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:29.320 02:34:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.320 00:11:29.320 real 0m3.331s 00:11:29.320 user 0m2.851s 00:11:29.320 sys 0m0.341s 00:11:29.320 02:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.320 02:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.320 02:34:54 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:29.320 02:34:54 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:29.320 02:34:54 -- accel/accel.sh@129 -- # build_accel_config 00:11:29.320 02:34:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:29.320 02:34:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.320 02:34:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.320 02:34:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.320 02:34:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.320 02:34:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.320 02:34:54 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.320 02:34:54 -- accel/accel.sh@42 -- # jq -r . 00:11:29.320 02:34:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.320 02:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.320 ************************************ 00:11:29.320 START TEST accel_dif_functional_tests 00:11:29.320 ************************************ 00:11:29.320 02:34:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:29.320 [2024-07-11 02:34:54.257686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
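The next test differs from the perf runs above: accel_dif_functional_tests executes a CUnit binary, test/accel/dif/dif, rather than accel_perf. It exercises T10 DIF generate/verify, where each protected block carries a Guard tag (a CRC over the data), an Application tag and a Reference tag; the *ERROR* lines that follow (Failed to compare Guard / App Tag / Ref Tag) are the negative verify cases firing on purpose, and the summary still counts 20 of 20 tests passed. The invocation, verbatim from the trace, reuses the /dev/fd/62 config-pipe pattern of the accel_perf runs:

    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62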
00:11:29.320 [2024-07-11 02:34:54.258180] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120810 ] 00:11:29.578 [2024-07-11 02:34:54.416213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:29.578 [2024-07-11 02:34:54.512615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.578 [2024-07-11 02:34:54.512765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.578 [2024-07-11 02:34:54.512774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.578 00:11:29.578 00:11:29.578 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.578 http://cunit.sourceforge.net/ 00:11:29.578 00:11:29.578 00:11:29.578 Suite: accel_dif 00:11:29.578 Test: verify: DIF generated, GUARD check ...passed 00:11:29.578 Test: verify: DIF generated, APPTAG check ...passed 00:11:29.578 Test: verify: DIF generated, REFTAG check ...passed 00:11:29.578 Test: verify: DIF not generated, GUARD check ...[2024-07-11 02:34:54.633999] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:29.578 [2024-07-11 02:34:54.634349] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:29.578 passed 00:11:29.578 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 02:34:54.634800] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:29.578 [2024-07-11 02:34:54.635001] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:29.578 passed 00:11:29.578 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 02:34:54.635334] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:29.578 [2024-07-11 02:34:54.635550] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:29.578 passed 00:11:29.578 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:29.578 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 02:34:54.636283] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:29.578 passed 00:11:29.578 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:29.578 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:29.578 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:29.578 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 02:34:54.637106] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:29.578 passed 00:11:29.578 Test: generate copy: DIF generated, GUARD check ...passed 00:11:29.578 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:29.578 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:29.578 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:29.578 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:29.578 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:29.578 Test: generate copy: iovecs-len validate ...[2024-07-11 02:34:54.638947] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:29.578 passed 00:11:29.578 Test: generate copy: buffer alignment validate ...passed 00:11:29.578 00:11:29.578 Run Summary: Type Total Ran Passed Failed Inactive 00:11:29.578 suites 1 1 n/a 0 0 00:11:29.578 tests 20 20 20 0 0 00:11:29.578 asserts 204 204 204 0 n/a 00:11:29.578 00:11:29.578 Elapsed time = 0.019 seconds 00:11:30.144 ************************************ 00:11:30.144 END TEST accel_dif_functional_tests 00:11:30.144 ************************************ 00:11:30.144 00:11:30.144 real 0m0.766s 00:11:30.144 user 0m1.071s 00:11:30.144 sys 0m0.246s 00:11:30.144 02:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.144 02:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:30.144 ************************************ 00:11:30.144 END TEST accel 00:11:30.144 ************************************ 00:11:30.144 00:11:30.144 real 1m6.649s 00:11:30.144 user 1m12.030s 00:11:30.144 sys 0m7.472s 00:11:30.144 02:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.144 02:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:30.144 02:34:55 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:30.144 02:34:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:30.144 02:34:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.144 02:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:30.144 ************************************ 00:11:30.144 START TEST accel_rpc 00:11:30.144 ************************************ 00:11:30.144 02:34:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:30.144 * Looking for test storage... 00:11:30.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:30.144 02:34:55 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:30.144 02:34:55 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=120886 00:11:30.144 02:34:55 -- accel/accel_rpc.sh@15 -- # waitforlisten 120886 00:11:30.144 02:34:55 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:30.144 02:34:55 -- common/autotest_common.sh@819 -- # '[' -z 120886 ']' 00:11:30.144 02:34:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.144 02:34:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:30.144 02:34:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.144 02:34:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:30.144 02:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:30.144 [2024-07-11 02:34:55.168095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
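accel_rpc drives a bare spdk_tgt started with --wait-for-rpc, which holds the target before subsystem initialization so opcode-to-module assignments can still be changed; note below that assigning opcode copy to a module literally named incorrect is accepted at this stage (the NOTICE just records it), and the later software assignment wins. Stripped of the rpc_cmd wrapper, the sequence is roughly this (rpc.py path as elsewhere in this tree; rpc_cmd is assumed to be a thin wrapper around it):

    scripts/rpc.py accel_assign_opc -o copy -m incorrect     # pre-init: accepted even for a bogus module
    scripts/rpc.py accel_assign_opc -o copy -m software      # last assignment wins
    scripts/rpc.py framework_start_init                      # complete startup
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software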
00:11:30.144 [2024-07-11 02:34:55.168493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120886 ] 00:11:30.403 [2024-07-11 02:34:55.312326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.403 [2024-07-11 02:34:55.388823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:30.403 [2024-07-11 02:34:55.389357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.337 02:34:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:31.337 02:34:56 -- common/autotest_common.sh@852 -- # return 0 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:31.337 02:34:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:31.337 02:34:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.337 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.337 ************************************ 00:11:31.337 START TEST accel_assign_opcode 00:11:31.337 ************************************ 00:11:31.337 02:34:56 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:31.337 02:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.337 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.337 [2024-07-11 02:34:56.130362] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:31.337 02:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:31.337 02:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.337 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.337 [2024-07-11 02:34:56.138339] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:31.337 02:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.337 02:34:56 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:31.337 02:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.337 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.596 02:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.596 02:34:56 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:31.596 02:34:56 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:31.596 02:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.596 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.596 02:34:56 -- accel/accel_rpc.sh@42 -- # grep software 00:11:31.596 02:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.596 software 00:11:31.596 ************************************ 00:11:31.596 END TEST accel_assign_opcode 00:11:31.596 ************************************ 00:11:31.596 00:11:31.596 real 0m0.381s 00:11:31.596 user 0m0.060s 00:11:31.596 sys 0m0.006s 00:11:31.596 02:34:56 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.596 02:34:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.596 02:34:56 -- accel/accel_rpc.sh@55 -- # killprocess 120886 00:11:31.596 02:34:56 -- common/autotest_common.sh@926 -- # '[' -z 120886 ']' 00:11:31.596 02:34:56 -- common/autotest_common.sh@930 -- # kill -0 120886 00:11:31.596 02:34:56 -- common/autotest_common.sh@931 -- # uname 00:11:31.596 02:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:31.596 02:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120886 00:11:31.596 killing process with pid 120886 00:11:31.596 02:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:31.596 02:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:31.596 02:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120886' 00:11:31.596 02:34:56 -- common/autotest_common.sh@945 -- # kill 120886 00:11:31.596 02:34:56 -- common/autotest_common.sh@950 -- # wait 120886 00:11:32.164 ************************************ 00:11:32.164 END TEST accel_rpc 00:11:32.164 ************************************ 00:11:32.164 00:11:32.164 real 0m2.142s 00:11:32.164 user 0m2.084s 00:11:32.164 sys 0m0.573s 00:11:32.164 02:34:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.164 02:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:32.164 02:34:57 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:32.164 02:34:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.164 02:34:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.164 02:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:32.164 ************************************ 00:11:32.164 START TEST app_cmdline 00:11:32.164 ************************************ 00:11:32.164 02:34:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:32.422 * Looking for test storage... 00:11:32.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:32.422 02:34:57 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:32.422 02:34:57 -- app/cmdline.sh@17 -- # spdk_tgt_pid=120993 00:11:32.422 02:34:57 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:32.422 02:34:57 -- app/cmdline.sh@18 -- # waitforlisten 120993 00:11:32.422 02:34:57 -- common/autotest_common.sh@819 -- # '[' -z 120993 ']' 00:11:32.422 02:34:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.422 02:34:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.423 02:34:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.423 02:34:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.423 02:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:32.423 [2024-07-11 02:34:57.376916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
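For app_cmdline the target comes up with --rpcs-allowed spdk_get_version,rpc_get_methods, so everything outside that allowlist must be rejected. The exchange traced below boils down to three calls:

  scripts/rpc.py spdk_get_version        # allowed: returns the version object
  scripts/rpc.py rpc_get_methods         # allowed: exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats  # rejected: JSON-RPC error -32601 "Method not found"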
00:11:32.423 [2024-07-11 02:34:57.377306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120993 ] 00:11:32.681 [2024-07-11 02:34:57.523671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.681 [2024-07-11 02:34:57.597107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:32.681 [2024-07-11 02:34:57.597614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.248 02:34:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.248 02:34:58 -- common/autotest_common.sh@852 -- # return 0 00:11:33.248 02:34:58 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:33.507 { 00:11:33.507 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:11:33.507 "fields": { 00:11:33.507 "major": 24, 00:11:33.507 "minor": 1, 00:11:33.507 "patch": 1, 00:11:33.507 "suffix": "-pre", 00:11:33.507 "commit": "4b94202c6" 00:11:33.507 } 00:11:33.507 } 00:11:33.507 02:34:58 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:33.507 02:34:58 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:33.507 02:34:58 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:33.507 02:34:58 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:33.507 02:34:58 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:33.507 02:34:58 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:33.507 02:34:58 -- app/cmdline.sh@26 -- # sort 00:11:33.507 02:34:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.507 02:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:33.507 02:34:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.766 02:34:58 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:33.766 02:34:58 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:33.766 02:34:58 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:33.766 02:34:58 -- common/autotest_common.sh@640 -- # local es=0 00:11:33.766 02:34:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:33.766 02:34:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.766 02:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.766 02:34:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.766 02:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.766 02:34:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.766 02:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.766 02:34:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.766 02:34:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:33.766 02:34:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:34.024 request: 00:11:34.024 { 00:11:34.024 "method": "env_dpdk_get_mem_stats", 00:11:34.024 "req_id": 1 00:11:34.024 } 00:11:34.024 Got 
JSON-RPC error response 00:11:34.024 response: 00:11:34.024 { 00:11:34.024 "code": -32601, 00:11:34.024 "message": "Method not found" 00:11:34.024 } 00:11:34.024 02:34:58 -- common/autotest_common.sh@643 -- # es=1 00:11:34.024 02:34:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:34.024 02:34:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:34.024 02:34:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:34.024 02:34:58 -- app/cmdline.sh@1 -- # killprocess 120993 00:11:34.024 02:34:58 -- common/autotest_common.sh@926 -- # '[' -z 120993 ']' 00:11:34.024 02:34:58 -- common/autotest_common.sh@930 -- # kill -0 120993 00:11:34.024 02:34:58 -- common/autotest_common.sh@931 -- # uname 00:11:34.024 02:34:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:34.024 02:34:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120993 00:11:34.024 02:34:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:34.024 02:34:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:34.024 02:34:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120993' 00:11:34.024 killing process with pid 120993 00:11:34.024 02:34:58 -- common/autotest_common.sh@945 -- # kill 120993 00:11:34.024 02:34:58 -- common/autotest_common.sh@950 -- # wait 120993 00:11:34.590 ************************************ 00:11:34.590 END TEST app_cmdline 00:11:34.590 ************************************ 00:11:34.590 00:11:34.590 real 0m2.252s 00:11:34.590 user 0m2.639s 00:11:34.590 sys 0m0.597s 00:11:34.590 02:34:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.590 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:34.590 02:34:59 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:34.590 02:34:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:34.590 02:34:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.590 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:34.590 ************************************ 00:11:34.590 START TEST version 00:11:34.590 ************************************ 00:11:34.590 02:34:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:34.590 * Looking for test storage... 
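The version suite below never touches a running target: it lifts each SPDK_VERSION_* define out of include/spdk/version.h with a grep/cut/tr pipeline and cross-checks the assembled string against the Python package. Condensed from the get_header_version calls in the trace (the plain cut -f2 relies on the defines being tab-separated in the header):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
    | cut -f2 | tr -d '"'                             # -> 24; repeated for MINOR, PATCH, SUFFIX
  python3 -c 'import spdk; print(spdk.__version__)'   # -> 24.1.1rc0, must match the header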
00:11:34.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:34.590 02:34:59 -- app/version.sh@17 -- # get_header_version major 00:11:34.590 02:34:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:34.590 02:34:59 -- app/version.sh@14 -- # cut -f2 00:11:34.590 02:34:59 -- app/version.sh@14 -- # tr -d '"' 00:11:34.590 02:34:59 -- app/version.sh@17 -- # major=24 00:11:34.590 02:34:59 -- app/version.sh@18 -- # get_header_version minor 00:11:34.590 02:34:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:34.590 02:34:59 -- app/version.sh@14 -- # tr -d '"' 00:11:34.590 02:34:59 -- app/version.sh@14 -- # cut -f2 00:11:34.590 02:34:59 -- app/version.sh@18 -- # minor=1 00:11:34.590 02:34:59 -- app/version.sh@19 -- # get_header_version patch 00:11:34.590 02:34:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:34.590 02:34:59 -- app/version.sh@14 -- # cut -f2 00:11:34.590 02:34:59 -- app/version.sh@14 -- # tr -d '"' 00:11:34.590 02:34:59 -- app/version.sh@19 -- # patch=1 00:11:34.590 02:34:59 -- app/version.sh@20 -- # get_header_version suffix 00:11:34.590 02:34:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:34.590 02:34:59 -- app/version.sh@14 -- # cut -f2 00:11:34.590 02:34:59 -- app/version.sh@14 -- # tr -d '"' 00:11:34.590 02:34:59 -- app/version.sh@20 -- # suffix=-pre 00:11:34.590 02:34:59 -- app/version.sh@22 -- # version=24.1 00:11:34.590 02:34:59 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:34.590 02:34:59 -- app/version.sh@25 -- # version=24.1.1 00:11:34.590 02:34:59 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:34.590 02:34:59 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:34.590 02:34:59 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:34.590 02:34:59 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:34.590 02:34:59 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:34.590 00:11:34.590 real 0m0.138s 00:11:34.590 user 0m0.099s 00:11:34.590 sys 0m0.073s 00:11:34.590 02:34:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.590 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:34.590 ************************************ 00:11:34.590 END TEST version 00:11:34.590 ************************************ 00:11:34.848 02:34:59 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:34.848 02:34:59 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:34.848 02:34:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:34.848 02:34:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.848 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:34.848 ************************************ 00:11:34.848 START TEST blockdev_general 00:11:34.848 ************************************ 00:11:34.848 02:34:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:34.848 * Looking for test storage... 
00:11:34.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:34.848 02:34:59 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:34.848 02:34:59 -- bdev/nbd_common.sh@6 -- # set -e 00:11:34.848 02:34:59 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:34.848 02:34:59 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:34.848 02:34:59 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:34.848 02:34:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:34.848 02:34:59 -- bdev/blockdev.sh@18 -- # : 00:11:34.848 02:34:59 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:34.848 02:34:59 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:34.848 02:34:59 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:34.849 02:34:59 -- bdev/blockdev.sh@672 -- # uname -s 00:11:34.849 02:34:59 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:34.849 02:34:59 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:34.849 02:34:59 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:34.849 02:34:59 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:34.849 02:34:59 -- bdev/blockdev.sh@682 -- # dek= 00:11:34.849 02:34:59 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:34.849 02:34:59 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:34.849 02:34:59 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:34.849 02:34:59 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:34.849 02:34:59 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:34.849 02:34:59 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:34.849 02:34:59 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=121147 00:11:34.849 02:34:59 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:34.849 02:34:59 -- bdev/blockdev.sh@47 -- # waitforlisten 121147 00:11:34.849 02:34:59 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:34.849 02:34:59 -- common/autotest_common.sh@819 -- # '[' -z 121147 ']' 00:11:34.849 02:34:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.849 02:34:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.849 02:34:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.849 02:34:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.849 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:11:34.849 [2024-07-11 02:34:59.869180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
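blockdev_general starts by building a small zoo of bdevs for the later subtests; the trace only shows the resulting NOTICE lines, so the following is a hedged reconstruction of the kind of RPCs setup_bdev_conf issues (standard SPDK RPC names; counts and sizes inferred from the I/O targets listing printed further down):

  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512        # Malloc0..Malloc9, 512-byte blocks
  scripts/rpc.py bdev_split_create Malloc1 2                 # -> Malloc1p0, Malloc1p1
  scripts/rpc.py bdev_split_create Malloc2 8                 # -> Malloc2p0 .. Malloc2p7
  scripts/rpc.py bdev_passthru_create -p TestPT -b Malloc3   # the pt_bdev registered below
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc4 Malloc5'
  # concat0 and raid1 are assembled the same way from Malloc6/7 and Malloc8/9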
00:11:34.849 [2024-07-11 02:34:59.869402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121147 ] 00:11:35.107 [2024-07-11 02:35:00.014840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.107 [2024-07-11 02:35:00.085497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:35.107 [2024-07-11 02:35:00.085821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.083 02:35:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.083 02:35:00 -- common/autotest_common.sh@852 -- # return 0 00:11:36.083 02:35:00 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:36.083 02:35:00 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:36.083 02:35:00 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:36.083 02:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.083 02:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:36.083 [2024-07-11 02:35:01.157756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.083 [2024-07-11 02:35:01.157874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.083 00:11:36.083 [2024-07-11 02:35:01.165680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.083 [2024-07-11 02:35:01.165796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.083 00:11:36.342 Malloc0 00:11:36.342 Malloc1 00:11:36.342 Malloc2 00:11:36.342 Malloc3 00:11:36.342 Malloc4 00:11:36.342 Malloc5 00:11:36.342 Malloc6 00:11:36.342 Malloc7 00:11:36.342 Malloc8 00:11:36.342 Malloc9 00:11:36.342 [2024-07-11 02:35:01.386135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.342 [2024-07-11 02:35:01.386263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.342 [2024-07-11 02:35:01.386319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:11:36.342 [2024-07-11 02:35:01.386349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.342 [2024-07-11 02:35:01.389382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.342 [2024-07-11 02:35:01.389450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:36.342 TestPT 00:11:36.342 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.342 02:35:01 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:36.601 5000+0 records in 00:11:36.601 5000+0 records out 00:11:36.601 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0208177 s, 492 MB/s 00:11:36.601 02:35:01 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 AIO0 00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 
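Everything from here down to the killprocess call is one step: the script collects every unclaimed bdev via bdev_get_bdevs, and because the shell runs under xtrace the entire JSON payload is echoed into the log. In script form the step is roughly:

  # Claimed bdevs (the raid members and the passthru base Malloc3) are filtered out.
  bdevs=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
  bdevs_name=$(echo "$bdevs" | jq -r .name)   # Malloc0, Malloc1p0, ... AIO0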
00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@738 -- # cat 00:11:36.601 02:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:36.601 02:35:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:36.601 02:35:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:36.601 02:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.601 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 02:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.601 02:35:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:36.601 02:35:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:36.602 02:35:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "879be936-4864-4d58-9c7a-a738ad515f5f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "879be936-4864-4d58-9c7a-a738ad515f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9b7917b2-2031-56bf-91e5-9703e3864171"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9b7917b2-2031-56bf-91e5-9703e3864171",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8bf0827-d8e6-51aa-b22f-18c2685167c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8bf0827-d8e6-51aa-b22f-18c2685167c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "775e7cde-574a-5ee0-b9a5-186c5f61a4c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "775e7cde-574a-5ee0-b9a5-186c5f61a4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6db1f496-a705-551d-be60-8bfa4b8416a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6db1f496-a705-551d-be60-8bfa4b8416a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0c92c7ea-e53c-5531-bccc-291f1e30e4fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c92c7ea-e53c-5531-bccc-291f1e30e4fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "8f169d5e-6272-51d6-b924-3d4212a087ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f169d5e-6272-51d6-b924-3d4212a087ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "21a2c4b0-c98d-5634-8364-8a42220d2fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21a2c4b0-c98d-5634-8364-8a42220d2fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1c15cf63-7e54-5a08-bef6-79f10575d95a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1c15cf63-7e54-5a08-bef6-79f10575d95a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "20f48ee0-627a-5e35-aed8-dfebacc45973"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "20f48ee0-627a-5e35-aed8-dfebacc45973",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "84023315-afe3-4582-b3ad-fabf42876157"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ce2e991b-ccf5-4473-87c5-ef1581a0a927",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "362dfd6c-f42c-49ce-a1cc-e7bbaa61e1b6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "95495d39-2895-486e-8755-3a2c59ab4204"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1302990e-4ac3-4e45-a051-c897f8a0f69b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "cfd0173d-ce8f-49d4-a4b4-0d86776afad6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7fec73d8-20a5-4e2b-a093-829a019bb3e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c5f3e6d3-8ae0-4cb0-b8eb-dfcde647b251",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2fb64507-90ae-4fcb-a0b8-accdfccf4e43"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2fb64507-90ae-4fcb-a0b8-accdfccf4e43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:36.862 02:35:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:36.862 02:35:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:36.862 02:35:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:36.862 02:35:01 -- bdev/blockdev.sh@752 -- # killprocess 121147 00:11:36.862 02:35:01 -- common/autotest_common.sh@926 -- # '[' -z 121147 ']' 00:11:36.862 02:35:01 -- common/autotest_common.sh@930 -- # kill -0 121147 00:11:36.862 02:35:01 -- common/autotest_common.sh@931 -- # uname 00:11:36.862 02:35:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:36.862 02:35:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121147 00:11:36.862 02:35:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:36.862 killing process with pid 121147 00:11:36.862 02:35:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:36.862 02:35:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121147' 00:11:36.862 02:35:01 -- common/autotest_common.sh@945 -- # kill 121147 00:11:36.862 02:35:01 -- common/autotest_common.sh@950 -- # wait 121147 00:11:37.430 02:35:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:37.430 02:35:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:37.430 02:35:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:37.430 02:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.430 02:35:02 -- common/autotest_common.sh@10 -- # set +x 00:11:37.430 ************************************ 00:11:37.430 START TEST bdev_hello_world 00:11:37.430 ************************************ 00:11:37.430 02:35:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:37.688 [2024-07-11 02:35:02.582965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:37.688 [2024-07-11 02:35:02.583231] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121228 ] 00:11:37.688 [2024-07-11 02:35:02.730391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.947 [2024-07-11 02:35:02.830666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.947 [2024-07-11 02:35:03.017889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:37.947 [2024-07-11 02:35:03.017997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:37.947 [2024-07-11 02:35:03.025734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:37.947 [2024-07-11 02:35:03.025812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:37.947 [2024-07-11 02:35:03.033769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:37.947 [2024-07-11 02:35:03.033824] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:37.947 [2024-07-11 02:35:03.033856] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:38.205 [2024-07-11 02:35:03.142361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:38.205 [2024-07-11 02:35:03.142480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.205 [2024-07-11 02:35:03.142549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:11:38.205 [2024-07-11 02:35:03.142583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.205 [2024-07-11 02:35:03.145169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.205 [2024-07-11 02:35:03.145221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:38.464 [2024-07-11 02:35:03.339646] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:38.464 [2024-07-11 02:35:03.339816] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:38.464 [2024-07-11 02:35:03.340014] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:38.464 [2024-07-11 02:35:03.340141] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:38.464 [2024-07-11 02:35:03.340306] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:38.464 [2024-07-11 02:35:03.340376] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:38.464 [2024-07-11 02:35:03.340500] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
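Unlike the RPC-driven suites, hello_bdev is a self-contained C example: it boots its own SPDK app instance from the JSON config, opens Malloc0, writes the "Hello World!" string through an I/O channel and reads it back, as the NOTICE lines above show. The invocation can be reproduced standalone (a sketch, paths relative to the repo root):

  build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0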
00:11:38.464 00:11:38.464 [2024-07-11 02:35:03.340591] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:39.029 00:11:39.029 real 0m1.316s 00:11:39.029 user 0m0.795s 00:11:39.029 sys 0m0.369s 00:11:39.029 ************************************ 00:11:39.029 END TEST bdev_hello_world 00:11:39.029 ************************************ 00:11:39.029 02:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.029 02:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:39.029 02:35:03 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:39.029 02:35:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:39.029 02:35:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.029 02:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:39.029 ************************************ 00:11:39.029 START TEST bdev_bounds 00:11:39.029 ************************************ 00:11:39.029 02:35:03 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:39.029 02:35:03 -- bdev/blockdev.sh@288 -- # bdevio_pid=121266 00:11:39.029 02:35:03 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:39.029 02:35:03 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:39.029 Process bdevio pid: 121266 00:11:39.029 02:35:03 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 121266' 00:11:39.029 02:35:03 -- bdev/blockdev.sh@291 -- # waitforlisten 121266 00:11:39.029 02:35:03 -- common/autotest_common.sh@819 -- # '[' -z 121266 ']' 00:11:39.029 02:35:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.029 02:35:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:39.029 02:35:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.029 02:35:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:39.029 02:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:39.029 [2024-07-11 02:35:03.955186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:39.029 [2024-07-11 02:35:03.955445] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121266 ] 00:11:39.029 [2024-07-11 02:35:04.111568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.287 [2024-07-11 02:35:04.197431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.287 [2024-07-11 02:35:04.197568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.287 [2024-07-11 02:35:04.197577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.287 [2024-07-11 02:35:04.375006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.287 [2024-07-11 02:35:04.375174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.546 [2024-07-11 02:35:04.382893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.546 [2024-07-11 02:35:04.382976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.546 [2024-07-11 02:35:04.390973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.546 [2024-07-11 02:35:04.391074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:39.546 [2024-07-11 02:35:04.391111] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:39.546 [2024-07-11 02:35:04.498620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.546 [2024-07-11 02:35:04.498747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.546 [2024-07-11 02:35:04.498832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:11:39.546 [2024-07-11 02:35:04.498860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.546 [2024-07-11 02:35:04.502018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.546 [2024-07-11 02:35:04.502067] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:40.113 02:35:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:40.113 02:35:04 -- common/autotest_common.sh@852 -- # return 0 00:11:40.113 02:35:04 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:40.113 I/O targets: 00:11:40.113 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:40.113 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:40.113 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:40.113 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:40.113 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:40.113 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.113 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.113 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:40.113 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
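bdevio runs in two halves: the app starts with -w and registers one CUnit suite per I/O target listed above, then tests.py fires the whole matrix over RPC. Every suite repeats the same generic write/read/reset checks, which is why the blocks below read nearly identically for each bdev. Condensed from the trace:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # waits for the RPC trigger
  test/bdev/bdevio/tests.py perform_tests                        # runs every registered suite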
00:11:40.113 00:11:40.113 00:11:40.113 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.113 http://cunit.sourceforge.net/ 00:11:40.113 00:11:40.113 00:11:40.113 Suite: bdevio tests on: AIO0 00:11:40.113 Test: blockdev write read block ...passed 00:11:40.113 Test: blockdev write zeroes read block ...passed 00:11:40.113 Test: blockdev write zeroes read no split ...passed 00:11:40.113 Test: blockdev write zeroes read split ...passed 00:11:40.113 Test: blockdev write zeroes read split partial ...passed 00:11:40.113 Test: blockdev reset ...passed 00:11:40.113 Test: blockdev write read 8 blocks ...passed 00:11:40.113 Test: blockdev write read size > 128k ...passed 00:11:40.113 Test: blockdev write read invalid size ...passed 00:11:40.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.113 Test: blockdev write read max offset ...passed 00:11:40.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.113 Test: blockdev writev readv 8 blocks ...passed 00:11:40.113 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.113 Test: blockdev writev readv block ...passed 00:11:40.113 Test: blockdev writev readv size > 128k ...passed 00:11:40.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.113 Test: blockdev comparev and writev ...passed 00:11:40.113 Test: blockdev nvme passthru rw ...passed 00:11:40.113 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.113 Test: blockdev nvme admin passthru ...passed 00:11:40.113 Test: blockdev copy ...passed 00:11:40.113 Suite: bdevio tests on: raid1 00:11:40.113 Test: blockdev write read block ...passed 00:11:40.113 Test: blockdev write zeroes read block ...passed 00:11:40.113 Test: blockdev write zeroes read no split ...passed 00:11:40.113 Test: blockdev write zeroes read split ...passed 00:11:40.113 Test: blockdev write zeroes read split partial ...passed 00:11:40.113 Test: blockdev reset ...passed 00:11:40.113 Test: blockdev write read 8 blocks ...passed 00:11:40.113 Test: blockdev write read size > 128k ...passed 00:11:40.113 Test: blockdev write read invalid size ...passed 00:11:40.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.113 Test: blockdev write read max offset ...passed 00:11:40.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.113 Test: blockdev writev readv 8 blocks ...passed 00:11:40.113 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.113 Test: blockdev writev readv block ...passed 00:11:40.113 Test: blockdev writev readv size > 128k ...passed 00:11:40.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.113 Test: blockdev comparev and writev ...passed 00:11:40.113 Test: blockdev nvme passthru rw ...passed 00:11:40.113 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.113 Test: blockdev nvme admin passthru ...passed 00:11:40.113 Test: blockdev copy ...passed 00:11:40.113 Suite: bdevio tests on: concat0 00:11:40.113 Test: blockdev write read block ...passed 00:11:40.113 Test: blockdev write zeroes read block ...passed 00:11:40.113 Test: blockdev write zeroes read no split ...passed 00:11:40.114 Test: blockdev write zeroes read split ...passed 00:11:40.114 Test: blockdev write zeroes read split partial ...passed 00:11:40.114 Test: blockdev reset 
...passed 00:11:40.114 Test: blockdev write read 8 blocks ...passed 00:11:40.114 Test: blockdev write read size > 128k ...passed 00:11:40.114 Test: blockdev write read invalid size ...passed 00:11:40.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.114 Test: blockdev write read max offset ...passed 00:11:40.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.114 Test: blockdev writev readv 8 blocks ...passed 00:11:40.114 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.114 Test: blockdev writev readv block ...passed 00:11:40.114 Test: blockdev writev readv size > 128k ...passed 00:11:40.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.114 Test: blockdev comparev and writev ...passed 00:11:40.114 Test: blockdev nvme passthru rw ...passed 00:11:40.114 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.114 Test: blockdev nvme admin passthru ...passed 00:11:40.114 Test: blockdev copy ...passed 00:11:40.114 Suite: bdevio tests on: raid0 00:11:40.114 Test: blockdev write read block ...passed 00:11:40.114 Test: blockdev write zeroes read block ...passed 00:11:40.114 Test: blockdev write zeroes read no split ...passed 00:11:40.114 Test: blockdev write zeroes read split ...passed 00:11:40.114 Test: blockdev write zeroes read split partial ...passed 00:11:40.114 Test: blockdev reset ...passed 00:11:40.114 Test: blockdev write read 8 blocks ...passed 00:11:40.114 Test: blockdev write read size > 128k ...passed 00:11:40.114 Test: blockdev write read invalid size ...passed 00:11:40.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.114 Test: blockdev write read max offset ...passed 00:11:40.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.114 Test: blockdev writev readv 8 blocks ...passed 00:11:40.114 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.114 Test: blockdev writev readv block ...passed 00:11:40.114 Test: blockdev writev readv size > 128k ...passed 00:11:40.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.114 Test: blockdev comparev and writev ...passed 00:11:40.114 Test: blockdev nvme passthru rw ...passed 00:11:40.114 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.114 Test: blockdev nvme admin passthru ...passed 00:11:40.114 Test: blockdev copy ...passed 00:11:40.114 Suite: bdevio tests on: TestPT 00:11:40.114 Test: blockdev write read block ...passed 00:11:40.114 Test: blockdev write zeroes read block ...passed 00:11:40.114 Test: blockdev write zeroes read no split ...passed 00:11:40.114 Test: blockdev write zeroes read split ...passed 00:11:40.114 Test: blockdev write zeroes read split partial ...passed 00:11:40.114 Test: blockdev reset ...passed 00:11:40.114 Test: blockdev write read 8 blocks ...passed 00:11:40.114 Test: blockdev write read size > 128k ...passed 00:11:40.114 Test: blockdev write read invalid size ...passed 00:11:40.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.114 Test: blockdev write read max offset ...passed 00:11:40.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.114 Test: blockdev writev readv 8 blocks 
...passed 00:11:40.114 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.114 Test: blockdev writev readv block ...passed 00:11:40.114 Test: blockdev writev readv size > 128k ...passed 00:11:40.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.114 Test: blockdev comparev and writev ...passed 00:11:40.114 Test: blockdev nvme passthru rw ...passed 00:11:40.114 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.114 Test: blockdev nvme admin passthru ...passed 00:11:40.114 Test: blockdev copy ...passed 00:11:40.114 Suite: bdevio tests on: Malloc2p7 00:11:40.114 Test: blockdev write read block ...passed 00:11:40.114 Test: blockdev write zeroes read block ...passed 00:11:40.114 Test: blockdev write zeroes read no split ...passed 00:11:40.114 Test: blockdev write zeroes read split ...passed 00:11:40.114 Test: blockdev write zeroes read split partial ...passed 00:11:40.114 Test: blockdev reset ...passed 00:11:40.114 Test: blockdev write read 8 blocks ...passed 00:11:40.114 Test: blockdev write read size > 128k ...passed 00:11:40.114 Test: blockdev write read invalid size ...passed 00:11:40.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.114 Test: blockdev write read max offset ...passed 00:11:40.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.114 Test: blockdev writev readv 8 blocks ...passed 00:11:40.114 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.114 Test: blockdev writev readv block ...passed 00:11:40.114 Test: blockdev writev readv size > 128k ...passed 00:11:40.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.114 Test: blockdev comparev and writev ...passed 00:11:40.114 Test: blockdev nvme passthru rw ...passed 00:11:40.114 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.114 Test: blockdev nvme admin passthru ...passed 00:11:40.114 Test: blockdev copy ...passed 00:11:40.114 Suite: bdevio tests on: Malloc2p6 00:11:40.114 Test: blockdev write read block ...passed 00:11:40.114 Test: blockdev write zeroes read block ...passed 00:11:40.114 Test: blockdev write zeroes read no split ...passed 00:11:40.114 Test: blockdev write zeroes read split ...passed 00:11:40.114 Test: blockdev write zeroes read split partial ...passed 00:11:40.114 Test: blockdev reset ...passed 00:11:40.114 Test: blockdev write read 8 blocks ...passed 00:11:40.114 Test: blockdev write read size > 128k ...passed 00:11:40.114 Test: blockdev write read invalid size ...passed 00:11:40.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.114 Test: blockdev write read max offset ...passed 00:11:40.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.114 Test: blockdev writev readv 8 blocks ...passed 00:11:40.114 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.114 Test: blockdev writev readv block ...passed 00:11:40.114 Test: blockdev writev readv size > 128k ...passed 00:11:40.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.114 Test: blockdev comparev and writev ...passed 00:11:40.114 Test: blockdev nvme passthru rw ...passed 00:11:40.114 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.114 Test: blockdev nvme admin passthru ...passed 00:11:40.114 Test: blockdev copy ...passed 
00:11:40.114 Suite: bdevio tests on: Malloc2p5
00:11:40.114 Test: blockdev write read block ...passed
00:11:40.114 Test: blockdev write zeroes read block ...passed
00:11:40.114 Test: blockdev write zeroes read no split ...passed
00:11:40.114 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc2p4
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc2p3
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc2p2
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc2p1
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc2p0
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.374 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.374 Test: blockdev nvme admin passthru ...passed
00:11:40.374 Test: blockdev copy ...passed
00:11:40.374 Suite: bdevio tests on: Malloc1p1
00:11:40.374 Test: blockdev write read block ...passed
00:11:40.374 Test: blockdev write zeroes read block ...passed
00:11:40.374 Test: blockdev write zeroes read no split ...passed
00:11:40.374 Test: blockdev write zeroes read split ...passed
00:11:40.374 Test: blockdev write zeroes read split partial ...passed
00:11:40.374 Test: blockdev reset ...passed
00:11:40.374 Test: blockdev write read 8 blocks ...passed
00:11:40.374 Test: blockdev write read size > 128k ...passed
00:11:40.374 Test: blockdev write read invalid size ...passed
00:11:40.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.374 Test: blockdev write read max offset ...passed
00:11:40.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.374 Test: blockdev writev readv 8 blocks ...passed
00:11:40.374 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.374 Test: blockdev writev readv block ...passed
00:11:40.374 Test: blockdev writev readv size > 128k ...passed
00:11:40.374 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.374 Test: blockdev comparev and writev ...passed
00:11:40.374 Test: blockdev nvme passthru rw ...passed
00:11:40.375 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.375 Test: blockdev nvme admin passthru ...passed
00:11:40.375 Test: blockdev copy ...passed
00:11:40.375 Suite: bdevio tests on: Malloc1p0
00:11:40.375 Test: blockdev write read block ...passed
00:11:40.375 Test: blockdev write zeroes read block ...passed
00:11:40.375 Test: blockdev write zeroes read no split ...passed
00:11:40.375 Test: blockdev write zeroes read split ...passed
00:11:40.375 Test: blockdev write zeroes read split partial ...passed
00:11:40.375 Test: blockdev reset ...passed
00:11:40.375 Test: blockdev write read 8 blocks ...passed
00:11:40.375 Test: blockdev write read size > 128k ...passed
00:11:40.375 Test: blockdev write read invalid size ...passed
00:11:40.375 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.375 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.375 Test: blockdev write read max offset ...passed
00:11:40.375 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.375 Test: blockdev writev readv 8 blocks ...passed
00:11:40.375 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.375 Test: blockdev writev readv block ...passed
00:11:40.375 Test: blockdev writev readv size > 128k ...passed
00:11:40.375 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.375 Test: blockdev comparev and writev ...passed
00:11:40.375 Test: blockdev nvme passthru rw ...passed
00:11:40.375 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.375 Test: blockdev nvme admin passthru ...passed
00:11:40.375 Test: blockdev copy ...passed
00:11:40.375 Suite: bdevio tests on: Malloc0
00:11:40.375 Test: blockdev write read block ...passed
00:11:40.375 Test: blockdev write zeroes read block ...passed
00:11:40.375 Test: blockdev write zeroes read no split ...passed
00:11:40.375 Test: blockdev write zeroes read split ...passed
00:11:40.375 Test: blockdev write zeroes read split partial ...passed
00:11:40.375 Test: blockdev reset ...passed
00:11:40.375 Test: blockdev write read 8 blocks ...passed
00:11:40.375 Test: blockdev write read size > 128k ...passed
00:11:40.375 Test: blockdev write read invalid size ...passed
00:11:40.375 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:40.375 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:40.375 Test: blockdev write read max offset ...passed
00:11:40.375 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:40.375 Test: blockdev writev readv 8 blocks ...passed
00:11:40.375 Test: blockdev writev readv 30 x 1block ...passed
00:11:40.375 Test: blockdev writev readv block ...passed
00:11:40.375 Test: blockdev writev readv size > 128k ...passed
00:11:40.375 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:40.375 Test: blockdev comparev and writev ...passed
00:11:40.375 Test: blockdev nvme passthru rw ...passed
00:11:40.375 Test: blockdev nvme passthru vendor specific ...passed
00:11:40.375 Test: blockdev nvme admin passthru ...passed
00:11:40.375 Test: blockdev copy ...passed
00:11:40.375
00:11:40.375 Run Summary: Type Total Ran Passed Failed Inactive
00:11:40.375 suites 16 16 n/a 0 0
00:11:40.375 tests 368 368 368 0 0
00:11:40.375 asserts 2224 2224 2224 0 n/a
00:11:40.375
00:11:40.375 Elapsed time = 0.726 seconds
00:11:40.375 0
00:11:40.375 02:35:05 -- bdev/blockdev.sh@293 -- # killprocess 121266
00:11:40.375 02:35:05 -- common/autotest_common.sh@926 -- # '[' -z 121266 ']'
00:11:40.375 02:35:05 -- common/autotest_common.sh@930 -- # kill -0 121266
00:11:40.375 02:35:05 -- common/autotest_common.sh@931 -- # uname
00:11:40.375 02:35:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:11:40.375 02:35:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121266
00:11:40.375 02:35:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:11:40.375 killing process with pid 121266
02:35:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:11:40.375 02:35:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121266'
00:11:40.375 02:35:05 -- common/autotest_common.sh@945 -- # kill 121266
00:11:40.375 02:35:05 -- common/autotest_common.sh@950 -- # wait 121266
00:11:40.942 ************************************
00:11:40.942 END TEST bdev_bounds
00:11:40.942 ************************************
00:11:40.942 02:35:05 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:11:40.942
00:11:40.942 real 0m1.933s
00:11:40.942 user 0m4.575s
00:11:40.942 sys 0m0.511s
00:11:40.943 02:35:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:40.943 02:35:05 -- common/autotest_common.sh@10 -- # set +x
00:11:40.943 02:35:05 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:40.943 02:35:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:11:40.943 02:35:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:40.943 02:35:05 -- common/autotest_common.sh@10 -- # set +x
00:11:40.943 ************************************
00:11:40.943 START TEST bdev_nbd
00:11:40.943 ************************************
00:11:40.943 02:35:05 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:40.943 02:35:05 -- bdev/blockdev.sh@298 -- # uname -s
00:11:40.943 02:35:05 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:11:40.943 02:35:05 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:40.943 02:35:05 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:40.943 02:35:05 -- bdev/blockdev.sh@302 -- # bdev_all=($2)
00:11:40.943 02:35:05 -- bdev/blockdev.sh@302 -- # local bdev_all
00:11:40.943 02:35:05 -- bdev/blockdev.sh@303 -- # local bdev_num=16
00:11:40.943 02:35:05 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:11:40.943 02:35:05 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9]))
00:11:40.943 02:35:05 -- bdev/blockdev.sh@309 -- # local nbd_all
00:11:40.943 02:35:05 -- bdev/blockdev.sh@310 -- # bdev_num=16
00:11:40.943 02:35:05 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num})
00:11:40.943 02:35:05 -- bdev/blockdev.sh@312 -- # local nbd_list
00:11:40.943 02:35:05 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num})
00:11:40.943 02:35:05 -- bdev/blockdev.sh@313 -- # local bdev_list
00:11:40.943 02:35:05 -- bdev/blockdev.sh@316 -- # nbd_pid=121329
00:11:40.943 02:35:05 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:40.943 02:35:05 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:40.943 02:35:05 -- bdev/blockdev.sh@318 -- # waitforlisten 121329 /var/tmp/spdk-nbd.sock
02:35:05 -- common/autotest_common.sh@819 -- # '[' -z 121329 ']'
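The killprocess trace above walks a small helper in common/autotest_common.sh: validate the pid, probe it with kill -0, resolve the command name, then signal and reap it. A minimal bash sketch of that flow, reconstructed from the traced line numbers (a paraphrase of the steps shown, not the verbatim helper source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @926: a pid is required
        kill -0 "$pid" || return 1                          # @930: fail fast if already gone
        if [ "$(uname)" = Linux ]; then                     # @931
            process_name=$(ps --no-headers -o comm= "$pid") # @932: reactor_0 for SPDK apps
        fi
        # @936: a process running under sudo would need different handling; not hit in this run
        echo "killing process with pid $pid"                # @944
        kill "$pid"                                         # @945
        wait "$pid"                                         # @950: reap it so the exit code is collected
    }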
00:11:40.943 02:35:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:40.943 02:35:05 -- common/autotest_common.sh@824 -- # local max_retries=100
00:11:40.943 02:35:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
02:35:05 -- common/autotest_common.sh@828 -- # xtrace_disable
00:11:40.943 02:35:05 -- common/autotest_common.sh@10 -- # set +x
00:11:40.943 [2024-07-11 02:35:05.948370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:11:40.943 [2024-07-11 02:35:05.948808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:41.202 [2024-07-11 02:35:06.097072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:41.202 [2024-07-11 02:35:06.190413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:41.461 [2024-07-11 02:35:06.366580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:41.461 [2024-07-11 02:35:06.366939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:41.461 [2024-07-11 02:35:06.374501] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:41.461 [2024-07-11 02:35:06.374686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:41.461 [2024-07-11 02:35:06.382557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:41.461 [2024-07-11 02:35:06.382732] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:41.461 [2024-07-11 02:35:06.382852] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:41.461 [2024-07-11 02:35:06.488151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:41.461 [2024-07-11 02:35:06.488492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.461 [2024-07-11 02:35:06.488589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:11:41.461 [2024-07-11 02:35:06.488749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.461 [2024-07-11 02:35:06.491350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.461 [2024-07-11 02:35:06.491517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:42.028 02:35:06 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:11:42.028 02:35:06 -- common/autotest_common.sh@852 -- # return 0
00:11:42.028 02:35:06 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@114 -- # bdev_list=($2)
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@23 -- # bdev_list=($2)
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@24 -- # local i
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:42.028 02:35:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:42.028 02:35:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:42.028 02:35:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:42.028 02:35:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:42.028 02:35:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:11:42.028 02:35:07 -- common/autotest_common.sh@857 -- # local i
00:11:42.028 02:35:07 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:42.028 02:35:07 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:42.028 02:35:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:11:42.286 02:35:07 -- common/autotest_common.sh@861 -- # break
00:11:42.286 02:35:07 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:42.286 02:35:07 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:42.286 02:35:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:42.287 1+0 records in
00:11:42.287 1+0 records out
00:11:42.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125101 s, 3.3 MB/s
00:11:42.287 02:35:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.287 02:35:07 -- common/autotest_common.sh@874 -- # size=4096
00:11:42.287 02:35:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.287 02:35:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:42.287 02:35:07 -- common/autotest_common.sh@877 -- # return 0
00:11:42.287 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:42.287 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:42.287 02:35:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:42.545 02:35:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:11:42.545 02:35:07 -- common/autotest_common.sh@857 -- # local i
00:11:42.545 02:35:07 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:42.545 02:35:07 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:42.545 02:35:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:11:42.545 02:35:07 -- common/autotest_common.sh@861 -- # break
00:11:42.545 02:35:07 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:42.545 02:35:07 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:42.545 02:35:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
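Every nbd_start_disk above is followed by the waitfornbd readiness check traced here: poll /proc/partitions until the device name shows up, then prove the device actually serves I/O with a single 4 KiB O_DIRECT read. A rough bash sketch of that check under the same 20-iteration bound (the sleep between polls is assumed, and the scratch file path is shortened; in this run the first probe always succeeds):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                          # @859
            grep -q -w "$nbd_name" /proc/partitions && break     # @860: device visible yet?
            sleep 0.1                                            # assumed back-off between polls
        done
        # @873: one direct 4096-byte read proves the block device answers I/O
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)                          # @874
        rm -f /tmp/nbdtest                                       # @875
        [ "$size" != 0 ]                                         # @876: non-empty copy == ready
    }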
00:11:42.545 1+0 records in
00:11:42.545 1+0 records out
00:11:42.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554716 s, 7.4 MB/s
00:11:42.545 02:35:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.545 02:35:07 -- common/autotest_common.sh@874 -- # size=4096
00:11:42.545 02:35:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.545 02:35:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:42.545 02:35:07 -- common/autotest_common.sh@877 -- # return 0
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:42.545 02:35:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:42.803 02:35:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:42.804 02:35:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:42.804 02:35:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:42.804 02:35:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2
00:11:42.804 02:35:07 -- common/autotest_common.sh@857 -- # local i
00:11:42.804 02:35:07 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:42.804 02:35:07 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:42.804 02:35:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions
00:11:42.804 02:35:07 -- common/autotest_common.sh@861 -- # break
00:11:42.804 02:35:07 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:42.804 02:35:07 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:42.804 02:35:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:42.804 1+0 records in
00:11:42.804 1+0 records out
00:11:42.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531194 s, 7.7 MB/s
00:11:42.804 02:35:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.804 02:35:07 -- common/autotest_common.sh@874 -- # size=4096
00:11:42.804 02:35:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.804 02:35:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:42.804 02:35:07 -- common/autotest_common.sh@877 -- # return 0
00:11:42.804 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:42.804 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:42.804 02:35:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:43.062 02:35:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:43.062 02:35:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:43.062 02:35:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:43.062 02:35:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3
00:11:43.062 02:35:07 -- common/autotest_common.sh@857 -- # local i
00:11:43.062 02:35:07 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:43.062 02:35:07 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:43.062 02:35:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions
00:11:43.062 02:35:07 -- common/autotest_common.sh@861 -- # break
00:11:43.062 02:35:07 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:43.062 02:35:07 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:43.062 02:35:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:43.062 1+0 records in
00:11:43.063 1+0 records out
00:11:43.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591076 s, 6.9 MB/s
00:11:43.063 02:35:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.063 02:35:07 -- common/autotest_common.sh@874 -- # size=4096
00:11:43.063 02:35:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.063 02:35:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:43.063 02:35:07 -- common/autotest_common.sh@877 -- # return 0
00:11:43.063 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:43.063 02:35:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:43.063 02:35:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:43.321 02:35:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:43.321 02:35:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:43.321 02:35:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:43.321 02:35:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4
00:11:43.321 02:35:08 -- common/autotest_common.sh@857 -- # local i
00:11:43.321 02:35:08 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:43.321 02:35:08 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:43.321 02:35:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions
00:11:43.321 02:35:08 -- common/autotest_common.sh@861 -- # break
00:11:43.321 02:35:08 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:43.321 02:35:08 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:43.321 02:35:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:43.321 1+0 records in
00:11:43.321 1+0 records out
00:11:43.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597377 s, 6.9 MB/s
00:11:43.321 02:35:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.321 02:35:08 -- common/autotest_common.sh@874 -- # size=4096
00:11:43.322 02:35:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.322 02:35:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:43.322 02:35:08 -- common/autotest_common.sh@877 -- # return 0
00:11:43.322 02:35:08 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:43.322 02:35:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:43.322 02:35:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:43.580 02:35:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5
00:11:43.580 02:35:08 -- common/autotest_common.sh@857 -- # local i
00:11:43.580 02:35:08 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:43.580 02:35:08 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:43.580 02:35:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions
00:11:43.580 02:35:08 -- common/autotest_common.sh@861 -- # break
00:11:43.580 02:35:08 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:43.580 02:35:08 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:43.580 02:35:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:43.580 1+0 records in
00:11:43.580 1+0 records out
00:11:43.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450563 s, 9.1 MB/s
00:11:43.580 02:35:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.580 02:35:08 -- common/autotest_common.sh@874 -- # size=4096
00:11:43.580 02:35:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:43.580 02:35:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:43.580 02:35:08 -- common/autotest_common.sh@877 -- # return 0
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:43.580 02:35:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:43.839 02:35:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:43.839 02:35:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:43.839 02:35:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:43.839 02:35:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6
00:11:43.839 02:35:08 -- common/autotest_common.sh@857 -- # local i
00:11:43.839 02:35:08 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:43.839 02:35:08 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:43.839 02:35:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions
00:11:43.839 02:35:08 -- common/autotest_common.sh@861 -- # break
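Each iteration of the start loop maps one bdev to the next free /dev/nbdX through the JSON-RPC socket. Run by hand, the same call looks like this; the trace passes only the bdev name and lets SPDK pick the device, while the variant with an explicit device argument is shown commented out as an assumption about rpc.py rather than something this run exercises:

    # Export a bdev over NBD via the bdev_svc app listening on the test socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Malloc2p3
    # prints the attached device node, e.g. /dev/nbd6
    # rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd6   # explicit device (assumed variant)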
00:11:44.098 02:35:09 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:44.098 02:35:09 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:44.098 02:35:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:44.098 1+0 records in
00:11:44.098 1+0 records out
00:11:44.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779271 s, 5.3 MB/s
00:11:44.098 02:35:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.098 02:35:09 -- common/autotest_common.sh@874 -- # size=4096
00:11:44.098 02:35:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.098 02:35:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:44.098 02:35:09 -- common/autotest_common.sh@877 -- # return 0
00:11:44.098 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:44.098 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:44.098 02:35:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:44.357 02:35:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8
00:11:44.357 02:35:09 -- common/autotest_common.sh@857 -- # local i
00:11:44.357 02:35:09 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:44.357 02:35:09 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:44.357 02:35:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions
00:11:44.357 02:35:09 -- common/autotest_common.sh@861 -- # break
00:11:44.357 02:35:09 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:44.357 02:35:09 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:44.357 02:35:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:44.357 1+0 records in
00:11:44.357 1+0 records out
00:11:44.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732276 s, 5.6 MB/s
00:11:44.357 02:35:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.357 02:35:09 -- common/autotest_common.sh@874 -- # size=4096
00:11:44.357 02:35:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.357 02:35:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:44.357 02:35:09 -- common/autotest_common.sh@877 -- # return 0
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:44.357 02:35:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:44.615 02:35:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:44.615 02:35:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:44.616 02:35:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:44.616 02:35:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9
00:11:44.616 02:35:09 -- common/autotest_common.sh@857 -- # local i
00:11:44.616 02:35:09 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:44.616 02:35:09 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:44.616 02:35:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions
00:11:44.616 02:35:09 -- common/autotest_common.sh@861 -- # break
00:11:44.616 02:35:09 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:44.616 02:35:09 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:44.616 02:35:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:44.616 1+0 records in
00:11:44.616 1+0 records out
00:11:44.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010324 s, 4.0 MB/s
00:11:44.616 02:35:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.616 02:35:09 -- common/autotest_common.sh@874 -- # size=4096
00:11:44.616 02:35:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.616 02:35:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:44.616 02:35:09 -- common/autotest_common.sh@877 -- # return 0
00:11:44.616 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:44.616 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:44.616 02:35:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:44.875 02:35:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10
00:11:44.875 02:35:09 -- common/autotest_common.sh@857 -- # local i
00:11:44.875 02:35:09 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:44.875 02:35:09 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:44.875 02:35:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions
00:11:44.875 02:35:09 -- common/autotest_common.sh@861 -- # break
00:11:44.875 02:35:09 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:44.875 02:35:09 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:44.875 02:35:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:44.875 1+0 records in
00:11:44.875 1+0 records out
00:11:44.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086673 s, 4.7 MB/s
00:11:44.875 02:35:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.875 02:35:09 -- common/autotest_common.sh@874 -- # size=4096
00:11:44.875 02:35:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:44.875 02:35:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:44.875 02:35:09 -- common/autotest_common.sh@877 -- # return 0
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:44.875 02:35:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:11:45.134 02:35:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:11:45.134 02:35:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:11:45.134 02:35:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:11:45.134 02:35:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11
00:11:45.134 02:35:10 -- common/autotest_common.sh@857 -- # local i
00:11:45.134 02:35:10 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:45.134 02:35:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions
00:11:45.134 02:35:10 -- common/autotest_common.sh@861 -- # break
00:11:45.134 02:35:10 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:45.134 02:35:10 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:45.134 02:35:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:45.134 1+0 records in
00:11:45.134 1+0 records out
00:11:45.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012036 s, 3.4 MB/s
00:11:45.134 02:35:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.392 02:35:10 -- common/autotest_common.sh@874 -- # size=4096
00:11:45.392 02:35:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.392 02:35:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:45.392 02:35:10 -- common/autotest_common.sh@877 -- # return 0
00:11:45.392 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:45.392 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:45.392 02:35:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:11:45.668 02:35:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:11:45.668 02:35:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:11:45.668 02:35:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:11:45.668 02:35:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12
00:11:45.668 02:35:10 -- common/autotest_common.sh@857 -- # local i
00:11:45.668 02:35:10 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:45.668 02:35:10 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:45.668 02:35:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions
00:11:45.669 02:35:10 -- common/autotest_common.sh@861 -- # break
00:11:45.669 02:35:10 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:45.669 02:35:10 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:45.669 02:35:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:45.669 1+0 records in
00:11:45.669 1+0 records out
00:11:45.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924515 s, 4.4 MB/s
00:11:45.669 02:35:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.669 02:35:10 -- common/autotest_common.sh@874 -- # size=4096
00:11:45.669 02:35:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.669 02:35:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:45.669 02:35:10 -- common/autotest_common.sh@877 -- # return 0
00:11:45.669 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:45.669 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:45.669 02:35:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:11:45.961 02:35:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13
00:11:45.961 02:35:10 -- common/autotest_common.sh@857 -- # local i
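Because each dd probe moves a single 4096-byte block with iflag=direct, the throughput dd reports is really a per-I/O latency measurement rather than bandwidth: for the nbd11 probe above, 4096 bytes / 0.0012036 s ≈ 3.4 MB/s (decimal megabytes), exactly the figure printed, and the spread from 2.6 to 9.1 MB/s across the devices in this run reflects latency jitter, not device speed differences.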
00:11:45.961 02:35:10 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:45.961 02:35:10 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:45.961 02:35:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions
00:11:45.961 02:35:10 -- common/autotest_common.sh@861 -- # break
00:11:45.961 02:35:10 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:45.961 02:35:10 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:45.961 02:35:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:45.961 1+0 records in
00:11:45.961 1+0 records out
00:11:45.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101349 s, 4.0 MB/s
00:11:45.961 02:35:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.961 02:35:10 -- common/autotest_common.sh@874 -- # size=4096
00:11:45.961 02:35:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:45.961 02:35:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:45.961 02:35:10 -- common/autotest_common.sh@877 -- # return 0
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:45.961 02:35:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:11:46.220 02:35:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14
00:11:46.220 02:35:11 -- common/autotest_common.sh@857 -- # local i
00:11:46.220 02:35:11 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:46.220 02:35:11 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:46.220 02:35:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions
00:11:46.220 02:35:11 -- common/autotest_common.sh@861 -- # break
00:11:46.220 02:35:11 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:46.220 02:35:11 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:46.220 02:35:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:46.220 1+0 records in
00:11:46.220 1+0 records out
00:11:46.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112872 s, 3.6 MB/s
00:11:46.220 02:35:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:46.220 02:35:11 -- common/autotest_common.sh@874 -- # size=4096
00:11:46.220 02:35:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:46.220 02:35:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:46.220 02:35:11 -- common/autotest_common.sh@877 -- # return 0
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:46.220 02:35:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:11:46.479 02:35:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15
00:11:46.479 02:35:11 -- common/autotest_common.sh@857 -- # local i
00:11:46.479 02:35:11 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:11:46.479 02:35:11 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:11:46.479 02:35:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions
00:11:46.479 02:35:11 -- common/autotest_common.sh@861 -- # break
00:11:46.479 02:35:11 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:11:46.479 02:35:11 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:11:46.479 02:35:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:46.479 1+0 records in
00:11:46.479 1+0 records out
00:11:46.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154769 s, 2.6 MB/s
00:11:46.479 02:35:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:46.479 02:35:11 -- common/autotest_common.sh@874 -- # size=4096
00:11:46.479 02:35:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:46.479 02:35:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:11:46.479 02:35:11 -- common/autotest_common.sh@877 -- # return 0
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:46.479 02:35:11 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd0",
00:11:46.738 "bdev_name": "Malloc0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd1",
00:11:46.738 "bdev_name": "Malloc1p0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd2",
00:11:46.738 "bdev_name": "Malloc1p1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd3",
00:11:46.738 "bdev_name": "Malloc2p0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd4",
00:11:46.738 "bdev_name": "Malloc2p1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd5",
00:11:46.738 "bdev_name": "Malloc2p2"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd6",
00:11:46.738 "bdev_name": "Malloc2p3"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd7",
00:11:46.738 "bdev_name": "Malloc2p4"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd8",
00:11:46.738 "bdev_name": "Malloc2p5"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd9",
00:11:46.738 "bdev_name": "Malloc2p6"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd10",
00:11:46.738 "bdev_name": "Malloc2p7"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd11",
00:11:46.738 "bdev_name": "TestPT"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd12",
00:11:46.738 "bdev_name": "raid0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd13",
00:11:46.738 "bdev_name": "concat0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd14",
00:11:46.738 "bdev_name": "raid1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd15",
00:11:46.738 "bdev_name": "AIO0"
00:11:46.738 }
00:11:46.738 ]'
02:35:11 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@119 -- # echo '[
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd0",
00:11:46.738 "bdev_name": "Malloc0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd1",
00:11:46.738 "bdev_name": "Malloc1p0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd2",
00:11:46.738 "bdev_name": "Malloc1p1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd3",
00:11:46.738 "bdev_name": "Malloc2p0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd4",
00:11:46.738 "bdev_name": "Malloc2p1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd5",
00:11:46.738 "bdev_name": "Malloc2p2"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd6",
00:11:46.738 "bdev_name": "Malloc2p3"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd7",
00:11:46.738 "bdev_name": "Malloc2p4"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd8",
00:11:46.738 "bdev_name": "Malloc2p5"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd9",
00:11:46.738 "bdev_name": "Malloc2p6"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd10",
00:11:46.738 "bdev_name": "Malloc2p7"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd11",
00:11:46.738 "bdev_name": "TestPT"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd12",
00:11:46.738 "bdev_name": "raid0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd13",
00:11:46.738 "bdev_name": "concat0"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd14",
00:11:46.738 "bdev_name": "raid1"
00:11:46.738 },
00:11:46.738 {
00:11:46.738 "nbd_device": "/dev/nbd15",
00:11:46.738 "bdev_name": "AIO0"
00:11:46.738 }
00:11:46.738 ]'
02:35:11 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2)
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@51 -- # local i
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:46.738 02:35:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@41 -- # break
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@45 -- # return 0
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:46.997 02:35:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:47.256 02:35:12 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@41 -- # break
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@45 -- # return 0
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:47.515 02:35:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@41 -- # break
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@45 -- # return 0
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:47.773 02:35:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@41 -- # break
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@45 -- # return 0
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:48.032 02:35:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@41 -- # break
00:11:48.290 02:35:13 -- bdev/nbd_common.sh@45 -- # return 0
00:11:48.291 02:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:48.291 02:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
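The teardown path mirrors the start path: nbd_get_disks dumps the device-to-bdev map as JSON, jq peels out just the device names, and each device is stopped and then polled out of /proc/partitions by waitfornbd_exit. Standalone, the extraction plus a sketch of the exit poll reconstructed from the traced steps look like:

    # List the device nodes currently exported on this socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'

    # Sketch of waitfornbd_exit: wait until the kernel drops the device
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                          # @37
            grep -q -w "$nbd_name" /proc/partitions || break     # @38/@41: gone -> done
            sleep 0.1                                            # @39: still listed, retry
        done
        return 0                                                 # @45
    }

Once all 16 devices are gone, nbd_get_disks returns [] and grep -c /dev/nbd counts zero, which is the pass condition checked at @123 further below.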
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@41 -- # break
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@45 -- # return 0
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:48.549 02:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@41 -- # break
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@45 -- # return 0
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:48.807 02:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@41 -- # break
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@45 -- # return 0
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:49.065 02:35:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@41 -- # break
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@45 -- # return 0
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:49.323 02:35:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@41 -- # break
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@45 -- # return 0
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:49.581 02:35:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:49.840 02:35:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@41 -- # break
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@45 -- # return 0
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:50.100 02:35:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@41 -- # break
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@45 -- # return 0
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:50.359 02:35:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@41 -- # break
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@45 -- # return 0
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:50.618 02:35:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@41 -- # break
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@45 -- # return 0
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:50.878 02:35:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@41 -- # break
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@45 -- # return 0
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:51.137 02:35:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@41 -- # break
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@45 -- # return 0
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:51.705 02:35:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@65 -- # echo ''
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@65 -- # true
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@65 -- # count=0
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@66 -- # echo 0
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@122 -- # count=0
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@127 -- # return 0
00:11:51.963 02:35:16 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:51.963 02:35:16 -- bdev/nbd_common.sh@90 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@12 -- # local i 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:51.963 02:35:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:52.221 /dev/nbd0 00:11:52.221 02:35:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.221 02:35:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.221 02:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:52.221 02:35:17 -- common/autotest_common.sh@857 -- # local i 00:11:52.221 02:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:52.221 02:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:52.221 02:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:52.221 02:35:17 -- common/autotest_common.sh@861 -- # break 00:11:52.221 02:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:52.221 02:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:52.221 02:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.221 1+0 records in 00:11:52.221 1+0 records out 00:11:52.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514161 s, 8.0 MB/s 00:11:52.222 02:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.222 02:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:11:52.222 02:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.222 02:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:52.222 02:35:17 -- common/autotest_common.sh@877 -- # return 0 00:11:52.222 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.222 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:52.222 02:35:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:52.480 /dev/nbd1 00:11:52.480 02:35:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:52.480 02:35:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:52.480 02:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:52.480 02:35:17 -- common/autotest_common.sh@857 -- # local i 00:11:52.480 02:35:17 -- common/autotest_common.sh@859 -- 
# (( i = 1 )) 00:11:52.480 02:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:52.480 02:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:52.480 02:35:17 -- common/autotest_common.sh@861 -- # break 00:11:52.480 02:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:52.480 02:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:52.480 02:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.480 1+0 records in 00:11:52.480 1+0 records out 00:11:52.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502819 s, 8.1 MB/s 00:11:52.480 02:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.480 02:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:11:52.480 02:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.480 02:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:52.480 02:35:17 -- common/autotest_common.sh@877 -- # return 0 00:11:52.480 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.480 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:52.480 02:35:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:52.738 /dev/nbd10 00:11:52.738 02:35:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:52.738 02:35:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:52.738 02:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:52.738 02:35:17 -- common/autotest_common.sh@857 -- # local i 00:11:52.738 02:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:52.738 02:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:52.738 02:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:52.738 02:35:17 -- common/autotest_common.sh@861 -- # break 00:11:52.738 02:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:52.738 02:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:52.738 02:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.738 1+0 records in 00:11:52.738 1+0 records out 00:11:52.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815398 s, 5.0 MB/s 00:11:52.738 02:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.738 02:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:11:52.738 02:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.738 02:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:52.738 02:35:17 -- common/autotest_common.sh@877 -- # return 0 00:11:52.738 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.738 02:35:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:52.739 02:35:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:52.996 /dev/nbd11 00:11:52.996 02:35:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:52.997 02:35:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:52.997 02:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:52.997 02:35:18 -- common/autotest_common.sh@857 -- # local i 00:11:52.997 02:35:18 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:52.997 02:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:52.997 02:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:52.997 02:35:18 -- common/autotest_common.sh@861 -- # break 00:11:52.997 02:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:52.997 02:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:52.997 02:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.997 1+0 records in 00:11:52.997 1+0 records out 00:11:52.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587674 s, 7.0 MB/s 00:11:52.997 02:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.997 02:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:11:52.997 02:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.997 02:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:52.997 02:35:18 -- common/autotest_common.sh@877 -- # return 0 00:11:52.997 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.997 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:52.997 02:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:53.254 /dev/nbd12 00:11:53.254 02:35:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:53.254 02:35:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:53.254 02:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:53.254 02:35:18 -- common/autotest_common.sh@857 -- # local i 00:11:53.254 02:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.254 02:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.254 02:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:53.254 02:35:18 -- common/autotest_common.sh@861 -- # break 00:11:53.254 02:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.254 02:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.254 02:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.254 1+0 records in 00:11:53.254 1+0 records out 00:11:53.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171543 s, 2.4 MB/s 00:11:53.254 02:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.254 02:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.254 02:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.254 02:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.254 02:35:18 -- common/autotest_common.sh@877 -- # return 0 00:11:53.254 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.254 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.254 02:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:53.512 /dev/nbd13 00:11:53.512 02:35:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:53.512 02:35:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:53.512 02:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:53.512 02:35:18 -- common/autotest_common.sh@857 -- # local i 
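[annotation] The start-up side around this point mirrors the teardown: after each nbd_start_disk RPC, a waitfornbd probe (from common/autotest_common.sh, per the @856-@877 trace lines) waits for the device node to register and then smoke-tests it with a single direct-I/O read. A rough reconstruction; the temp path here is a stand-in, and the retry sleep in the second loop is an assumption since every dd in this run succeeds on the first try:

    # Probe a freshly exported nbd device: it must appear in /proc/partitions
    # and serve one 4 KiB O_DIRECT read that yields a non-empty file.
    waitfornbd() {
        local nbd_name=$1
        local i size
        local tmp=/tmp/nbdtest   # trace uses .../spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0   # one real block read back
            fi
            sleep 0.1   # assumed retry behavior
        done
        return 1
    }

The repeated "1+0 records in / 4096 bytes copied" lines in this section are that probe dd; per-device throughput varies from roughly 2.4 to 8.1 MB/s, but only the non-zero size check matters.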
00:11:53.512 02:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.512 02:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.512 02:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:53.512 02:35:18 -- common/autotest_common.sh@861 -- # break 00:11:53.512 02:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.512 02:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.512 02:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.770 1+0 records in 00:11:53.770 1+0 records out 00:11:53.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054605 s, 7.5 MB/s 00:11:53.770 02:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.770 02:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.770 02:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.770 02:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.770 02:35:18 -- common/autotest_common.sh@877 -- # return 0 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:53.770 /dev/nbd14 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:53.770 02:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:53.770 02:35:18 -- common/autotest_common.sh@857 -- # local i 00:11:53.770 02:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.770 02:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.770 02:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:53.770 02:35:18 -- common/autotest_common.sh@861 -- # break 00:11:53.770 02:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.770 02:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.770 02:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.770 1+0 records in 00:11:53.770 1+0 records out 00:11:53.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010551 s, 3.9 MB/s 00:11:53.770 02:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.770 02:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.770 02:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.770 02:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.770 02:35:18 -- common/autotest_common.sh@877 -- # return 0 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.770 02:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:54.370 /dev/nbd15 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:54.370 02:35:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:11:54.370 02:35:19 -- 
common/autotest_common.sh@857 -- # local i 00:11:54.370 02:35:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.370 02:35:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.370 02:35:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:54.370 02:35:19 -- common/autotest_common.sh@861 -- # break 00:11:54.370 02:35:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.370 02:35:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.370 02:35:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.370 1+0 records in 00:11:54.370 1+0 records out 00:11:54.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608864 s, 6.7 MB/s 00:11:54.370 02:35:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.370 02:35:19 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.370 02:35:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.370 02:35:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.370 02:35:19 -- common/autotest_common.sh@877 -- # return 0 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:54.370 /dev/nbd2 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:54.370 02:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:54.370 02:35:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:54.370 02:35:19 -- common/autotest_common.sh@857 -- # local i 00:11:54.370 02:35:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.370 02:35:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.371 02:35:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:54.371 02:35:19 -- common/autotest_common.sh@861 -- # break 00:11:54.371 02:35:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.371 02:35:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.371 02:35:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.371 1+0 records in 00:11:54.371 1+0 records out 00:11:54.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882548 s, 4.6 MB/s 00:11:54.371 02:35:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.371 02:35:19 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.371 02:35:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.371 02:35:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.371 02:35:19 -- common/autotest_common.sh@877 -- # return 0 00:11:54.371 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.371 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.371 02:35:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:54.629 /dev/nbd3 00:11:54.629 02:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:54.629 02:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:54.629 02:35:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:54.629 
02:35:19 -- common/autotest_common.sh@857 -- # local i 00:11:54.629 02:35:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.629 02:35:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.629 02:35:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:54.629 02:35:19 -- common/autotest_common.sh@861 -- # break 00:11:54.629 02:35:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.629 02:35:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.629 02:35:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.629 1+0 records in 00:11:54.629 1+0 records out 00:11:54.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067838 s, 6.0 MB/s 00:11:54.629 02:35:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.629 02:35:19 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.629 02:35:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.629 02:35:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.629 02:35:19 -- common/autotest_common.sh@877 -- # return 0 00:11:54.629 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.629 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.629 02:35:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:54.887 /dev/nbd4 00:11:54.887 02:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:54.887 02:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:54.887 02:35:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:54.887 02:35:19 -- common/autotest_common.sh@857 -- # local i 00:11:54.887 02:35:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.887 02:35:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.887 02:35:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:54.887 02:35:19 -- common/autotest_common.sh@861 -- # break 00:11:54.887 02:35:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.887 02:35:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.887 02:35:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.887 1+0 records in 00:11:54.887 1+0 records out 00:11:54.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103037 s, 4.0 MB/s 00:11:54.887 02:35:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.887 02:35:19 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.887 02:35:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.887 02:35:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.887 02:35:19 -- common/autotest_common.sh@877 -- # return 0 00:11:54.887 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.887 02:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.887 02:35:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:55.145 /dev/nbd5 00:11:55.145 02:35:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:55.145 02:35:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:55.145 02:35:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 
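[annotation] Stepping back, the sixteen start/probe pairs in this section are driven by one loop over two parallel whitespace-separated lists, visible in the @9-@15 nbd_common.sh lines earlier in the trace. A condensed sketch, assuming $rootdir points at the SPDK checkout as it does in the logged rpc.py paths:

    # Export each bdev in $2 on the matching /dev/nbdX in $3, probing as we go.
    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)   # "Malloc0 Malloc1p0 ... raid1 AIO0"
        local nbd_list=($3)    # "/dev/nbd0 /dev/nbd1 /dev/nbd10 ... /dev/nbd9"
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            "$rootdir"/scripts/rpc.py -s "$rpc_server" nbd_start_disk \
                "${bdev_list[$i]}" "${nbd_list[$i]}"
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }

Because the nbd list is in lexicographic order (nbd10-nbd15 before nbd2-nbd9), Malloc1p1 lands on /dev/nbd10 while Malloc2p5 lands on /dev/nbd2, which matches the JSON mapping printed further down.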
00:11:55.145 02:35:20 -- common/autotest_common.sh@857 -- # local i 00:11:55.145 02:35:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.145 02:35:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.145 02:35:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:55.145 02:35:20 -- common/autotest_common.sh@861 -- # break 00:11:55.145 02:35:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.145 02:35:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.145 02:35:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.145 1+0 records in 00:11:55.145 1+0 records out 00:11:55.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687449 s, 6.0 MB/s 00:11:55.145 02:35:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.145 02:35:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.145 02:35:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.145 02:35:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.145 02:35:20 -- common/autotest_common.sh@877 -- # return 0 00:11:55.145 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.145 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.145 02:35:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:55.403 /dev/nbd6 00:11:55.403 02:35:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:55.403 02:35:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:55.403 02:35:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:55.403 02:35:20 -- common/autotest_common.sh@857 -- # local i 00:11:55.403 02:35:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.403 02:35:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.403 02:35:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:55.403 02:35:20 -- common/autotest_common.sh@861 -- # break 00:11:55.403 02:35:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.403 02:35:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.403 02:35:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.403 1+0 records in 00:11:55.403 1+0 records out 00:11:55.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129659 s, 3.2 MB/s 00:11:55.403 02:35:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.403 02:35:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.403 02:35:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.403 02:35:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.403 02:35:20 -- common/autotest_common.sh@877 -- # return 0 00:11:55.403 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.403 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.403 02:35:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:55.664 /dev/nbd7 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:55.925 02:35:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 
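[annotation] The nbd_get_count helper seen earlier in the trace (and again after the last disk below) reduces the nbd_get_disks JSON to a device count via jq and grep -c, per the @61-@66 lines. A sketch of that pipeline; the '|| true' guard is an assumption inferred from the bare 'true' that appears in the trace when the list is empty:

    # Count exported nbd devices by listing them over the RPC socket.
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rootdir"/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 when nothing matches but exits non-zero,
        # so it needs a guard under 'set -e'.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

Earlier in the trace this returned 0 (all devices stopped, so the '[' 0 -ne 0 ']' check passed); after the sixteen starts it returns 16, which the @95-@96 check compares against the expected list length.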
00:11:55.925 02:35:20 -- common/autotest_common.sh@857 -- # local i 00:11:55.925 02:35:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.925 02:35:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.925 02:35:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:55.925 02:35:20 -- common/autotest_common.sh@861 -- # break 00:11:55.925 02:35:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.925 02:35:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.925 02:35:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.925 1+0 records in 00:11:55.925 1+0 records out 00:11:55.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941511 s, 4.4 MB/s 00:11:55.925 02:35:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.925 02:35:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.925 02:35:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.925 02:35:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.925 02:35:20 -- common/autotest_common.sh@877 -- # return 0 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:55.925 /dev/nbd8 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:55.925 02:35:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:55.925 02:35:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:55.925 02:35:20 -- common/autotest_common.sh@857 -- # local i 00:11:55.926 02:35:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.926 02:35:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.926 02:35:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:55.926 02:35:20 -- common/autotest_common.sh@861 -- # break 00:11:55.926 02:35:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.926 02:35:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.926 02:35:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.926 1+0 records in 00:11:55.926 1+0 records out 00:11:55.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102915 s, 4.0 MB/s 00:11:55.926 02:35:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.926 02:35:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.926 02:35:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.926 02:35:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.926 02:35:20 -- common/autotest_common.sh@877 -- # return 0 00:11:55.926 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.926 02:35:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.926 02:35:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:56.184 /dev/nbd9 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:56.184 02:35:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 
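[annotation] Once all sixteen devices are up, the trace below lists the device-to-bdev mapping and then runs nbd_dd_data_verify twice: a write pass that copies the same 1 MiB of random data onto every device with O_DIRECT, and a verify pass that cmp-compares the first 1 MiB of each device against the source file. A condensed sketch of both passes (the temp path is shortened; the trace uses .../spdk/test/bdev/nbdrandtest):

    # Write one random 1 MiB pattern to every nbd device, then read it back.
    nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=/tmp/nbdrandtest   # stand-in path
        local i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB pattern
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # -b reports differing bytes
            done
            rm "$tmp_file"
        fi
    }

The per-device write rates logged below (from 8.8 MB/s on nbd10 down to 4.4 MB/s on nbd9, the AIO0-backed device) are incidental; any cmp mismatch in the verify pass would abort the test.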
00:11:56.184 02:35:21 -- common/autotest_common.sh@857 -- # local i 00:11:56.184 02:35:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.184 02:35:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.184 02:35:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:11:56.184 02:35:21 -- common/autotest_common.sh@861 -- # break 00:11:56.184 02:35:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.184 02:35:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.184 02:35:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.184 1+0 records in 00:11:56.184 1+0 records out 00:11:56.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00155651 s, 2.6 MB/s 00:11:56.184 02:35:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.184 02:35:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.184 02:35:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.184 02:35:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.184 02:35:21 -- common/autotest_common.sh@877 -- # return 0 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.184 02:35:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:56.443 02:35:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd0", 00:11:56.443 "bdev_name": "Malloc0" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd1", 00:11:56.443 "bdev_name": "Malloc1p0" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd10", 00:11:56.443 "bdev_name": "Malloc1p1" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd11", 00:11:56.443 "bdev_name": "Malloc2p0" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd12", 00:11:56.443 "bdev_name": "Malloc2p1" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd13", 00:11:56.443 "bdev_name": "Malloc2p2" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd14", 00:11:56.443 "bdev_name": "Malloc2p3" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd15", 00:11:56.443 "bdev_name": "Malloc2p4" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd2", 00:11:56.443 "bdev_name": "Malloc2p5" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd3", 00:11:56.443 "bdev_name": "Malloc2p6" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd4", 00:11:56.443 "bdev_name": "Malloc2p7" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd5", 00:11:56.443 "bdev_name": "TestPT" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd6", 00:11:56.443 "bdev_name": "raid0" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd7", 00:11:56.443 "bdev_name": "concat0" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd8", 00:11:56.443 "bdev_name": "raid1" 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "nbd_device": "/dev/nbd9", 00:11:56.444 "bdev_name": "AIO0" 00:11:56.444 } 00:11:56.444 ]' 00:11:56.444 02:35:21 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd0", 00:11:56.444 "bdev_name": "Malloc0" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd1", 00:11:56.444 "bdev_name": "Malloc1p0" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd10", 00:11:56.444 "bdev_name": "Malloc1p1" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd11", 00:11:56.444 "bdev_name": "Malloc2p0" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd12", 00:11:56.444 "bdev_name": "Malloc2p1" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd13", 00:11:56.444 "bdev_name": "Malloc2p2" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd14", 00:11:56.444 "bdev_name": "Malloc2p3" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd15", 00:11:56.444 "bdev_name": "Malloc2p4" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd2", 00:11:56.444 "bdev_name": "Malloc2p5" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd3", 00:11:56.444 "bdev_name": "Malloc2p6" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd4", 00:11:56.444 "bdev_name": "Malloc2p7" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd5", 00:11:56.444 "bdev_name": "TestPT" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd6", 00:11:56.444 "bdev_name": "raid0" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd7", 00:11:56.444 "bdev_name": "concat0" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd8", 00:11:56.444 "bdev_name": "raid1" 00:11:56.444 }, 00:11:56.444 { 00:11:56.444 "nbd_device": "/dev/nbd9", 00:11:56.444 "bdev_name": "AIO0" 00:11:56.444 } 00:11:56.444 ]' 00:11:56.444 02:35:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:56.703 /dev/nbd1 00:11:56.703 /dev/nbd10 00:11:56.703 /dev/nbd11 00:11:56.703 /dev/nbd12 00:11:56.703 /dev/nbd13 00:11:56.703 /dev/nbd14 00:11:56.703 /dev/nbd15 00:11:56.703 /dev/nbd2 00:11:56.703 /dev/nbd3 00:11:56.703 /dev/nbd4 00:11:56.703 /dev/nbd5 00:11:56.703 /dev/nbd6 00:11:56.703 /dev/nbd7 00:11:56.703 /dev/nbd8 00:11:56.703 /dev/nbd9' 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:56.703 /dev/nbd1 00:11:56.703 /dev/nbd10 00:11:56.703 /dev/nbd11 00:11:56.703 /dev/nbd12 00:11:56.703 /dev/nbd13 00:11:56.703 /dev/nbd14 00:11:56.703 /dev/nbd15 00:11:56.703 /dev/nbd2 00:11:56.703 /dev/nbd3 00:11:56.703 /dev/nbd4 00:11:56.703 /dev/nbd5 00:11:56.703 /dev/nbd6 00:11:56.703 /dev/nbd7 00:11:56.703 /dev/nbd8 00:11:56.703 /dev/nbd9' 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@65 -- # count=16 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@95 -- # count=16 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:56.703 256+0 records in 00:11:56.703 256+0 records out 00:11:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567854 s, 185 MB/s 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:56.703 256+0 records in 00:11:56.703 256+0 records out 00:11:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126144 s, 8.3 MB/s 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.703 02:35:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:56.961 256+0 records in 00:11:56.961 256+0 records out 00:11:56.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120113 s, 8.7 MB/s 00:11:56.961 02:35:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.961 02:35:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:56.961 256+0 records in 00:11:56.961 256+0 records out 00:11:56.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119093 s, 8.8 MB/s 00:11:56.961 02:35:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.961 02:35:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:57.219 256+0 records in 00:11:57.219 256+0 records out 00:11:57.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124864 s, 8.4 MB/s 00:11:57.219 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.219 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:57.219 256+0 records in 00:11:57.219 256+0 records out 00:11:57.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12032 s, 8.7 MB/s 00:11:57.219 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.219 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:57.477 256+0 records in 00:11:57.478 256+0 records out 00:11:57.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131933 s, 7.9 MB/s 00:11:57.478 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.478 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:57.478 256+0 records in 00:11:57.478 256+0 records out 00:11:57.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148062 s, 7.1 MB/s 00:11:57.478 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.478 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:57.737 256+0 records in 00:11:57.737 256+0 records out 00:11:57.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151123 s, 6.9 MB/s 00:11:57.737 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.737 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 
count=256 oflag=direct 00:11:57.737 256+0 records in 00:11:57.737 256+0 records out 00:11:57.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166337 s, 6.3 MB/s 00:11:57.737 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.737 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:57.995 256+0 records in 00:11:57.995 256+0 records out 00:11:57.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165453 s, 6.3 MB/s 00:11:57.995 02:35:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.995 02:35:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:58.254 256+0 records in 00:11:58.254 256+0 records out 00:11:58.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157737 s, 6.6 MB/s 00:11:58.254 02:35:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.254 02:35:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:58.254 256+0 records in 00:11:58.254 256+0 records out 00:11:58.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155525 s, 6.7 MB/s 00:11:58.254 02:35:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.254 02:35:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:58.513 256+0 records in 00:11:58.513 256+0 records out 00:11:58.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155409 s, 6.7 MB/s 00:11:58.513 02:35:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.513 02:35:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:58.513 256+0 records in 00:11:58.513 256+0 records out 00:11:58.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157733 s, 6.6 MB/s 00:11:58.513 02:35:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.513 02:35:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:58.771 256+0 records in 00:11:58.771 256+0 records out 00:11:58.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163159 s, 6.4 MB/s 00:11:58.771 02:35:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.771 02:35:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:59.030 256+0 records in 00:11:59.030 256+0 records out 00:11:59.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238772 s, 4.4 MB/s 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.030 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:59.289 02:35:24 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@51 -- # local i 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.289 02:35:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@41 -- # break 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.548 02:35:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@41 -- # break 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.806 02:35:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@41 -- # break 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.074 02:35:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:00.370 02:35:25 -- 
bdev/nbd_common.sh@41 -- # break 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.370 02:35:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@41 -- # break 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.635 02:35:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@41 -- # break 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.893 02:35:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@41 -- # break 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.151 02:35:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@41 -- # break 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.410 02:35:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:01.668 02:35:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@41 -- # break 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.926 02:35:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:02.184 02:35:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:02.184 02:35:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:02.184 02:35:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:02.184 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.184 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@41 -- # break 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.185 02:35:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@41 -- # break 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.443 02:35:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:02.701 02:35:27 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@41 -- # break 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.701 02:35:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@41 -- # break 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.959 02:35:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.960 02:35:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@41 -- # break 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.218 02:35:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:03.477 02:35:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:03.477 02:35:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:03.477 02:35:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:03.477 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.477 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.478 02:35:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:03.478 02:35:28 -- bdev/nbd_common.sh@41 -- # break 00:12:03.478 02:35:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.478 02:35:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.478 02:35:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:03.737 02:35:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:03.996 02:35:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:03.996 02:35:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:03.996 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.996 02:35:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.996 02:35:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 
/proc/partitions
00:12:03.996 02:35:28 -- bdev/nbd_common.sh@41 -- # break
00:12:03.996 02:35:28 -- bdev/nbd_common.sh@45 -- # return 0
00:12:03.996 02:35:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:03.996 02:35:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:03.996 02:35:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:03.996 02:35:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:03.996 02:35:29 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:03.996 02:35:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@65 -- # echo ''
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@65 -- # true
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@65 -- # count=0
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@66 -- # echo 0
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@104 -- # count=0
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@109 -- # return 0
00:12:04.255 02:35:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@132 -- # nbd_list=($2)
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:04.255 02:35:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:04.515 malloc_lvol_verify
00:12:04.515 02:35:29 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:04.774 b17489e8-cf5c-4cdf-b991-a9f6510740e4
00:12:04.774 02:35:29 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:05.033 c5173d01-d476-4a11-a5f7-4d3070669b2b
00:12:05.033 02:35:29 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:05.033 /dev/nbd0
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:12:05.033 mke2fs 1.45.5 (07-Jan-2020)
00:12:05.033 
00:12:05.033 Filesystem too small for a journal
00:12:05.033 Creating filesystem with 1024 4k blocks and 1024 inodes
00:12:05.033 
00:12:05.033 Allocating group tables: 0/1 done
00:12:05.033 Writing inode tables: 0/1 done
00:12:05.033 Writing superblocks and filesystem accounting information: 0/1 done
00:12:05.033 
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2)
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@51 -- # local i
00:12:05.033 02:35:30 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:05.033 02:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.292 02:35:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:05.550 02:35:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@41 -- # break 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:05.551 02:35:30 -- bdev/nbd_common.sh@147 -- # return 0 00:12:05.551 02:35:30 -- bdev/blockdev.sh@324 -- # killprocess 121329 00:12:05.551 02:35:30 -- common/autotest_common.sh@926 -- # '[' -z 121329 ']' 00:12:05.551 02:35:30 -- common/autotest_common.sh@930 -- # kill -0 121329 00:12:05.551 02:35:30 -- common/autotest_common.sh@931 -- # uname 00:12:05.551 02:35:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:05.551 02:35:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121329 00:12:05.551 killing process with pid 121329 00:12:05.551 02:35:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:05.551 02:35:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:05.551 02:35:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121329' 00:12:05.551 02:35:30 -- common/autotest_common.sh@945 -- # kill 121329 00:12:05.551 02:35:30 -- common/autotest_common.sh@950 -- # wait 121329 00:12:05.810 ************************************ 00:12:05.810 END TEST bdev_nbd 00:12:05.810 ************************************ 00:12:05.810 02:35:30 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:05.810 00:12:05.810 real 0m25.010s 00:12:05.810 user 0m34.277s 00:12:05.810 sys 0m9.349s 00:12:05.810 02:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.810 02:35:30 -- common/autotest_common.sh@10 -- # set +x 00:12:06.069 02:35:30 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:06.069 02:35:30 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:06.069 02:35:30 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:06.069 02:35:30 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.069 02:35:30 -- common/autotest_common.sh@10 -- # set +x 00:12:06.069 ************************************ 00:12:06.069 START TEST bdev_fio 00:12:06.069 ************************************ 00:12:06.069 02:35:30 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:06.069 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:06.069 02:35:30 -- bdev/blockdev.sh@329 -- # local env_context 00:12:06.069 02:35:30 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:06.069 02:35:30 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT 
SIGTERM EXIT 00:12:06.069 02:35:30 -- bdev/blockdev.sh@337 -- # echo '' 00:12:06.069 02:35:30 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:06.069 02:35:30 -- bdev/blockdev.sh@337 -- # env_context= 00:12:06.069 02:35:30 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:06.069 02:35:30 -- common/autotest_common.sh@1260 -- # local workload=verify 00:12:06.069 02:35:30 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:12:06.069 02:35:30 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:06.069 02:35:30 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:06.069 02:35:30 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:06.069 02:35:30 -- common/autotest_common.sh@1280 -- # cat 00:12:06.069 02:35:30 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1293 -- # cat 00:12:06.069 02:35:30 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:06.069 02:35:30 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:06.069 02:35:31 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:06.069 02:35:31 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc2p4 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.069 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:12:06.069 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:06.069 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.070 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:06.070 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.070 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:06.070 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.070 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:06.070 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.070 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:06.070 02:35:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:06.070 02:35:31 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:06.070 02:35:31 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:06.070 02:35:31 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:06.070 02:35:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:06.070 02:35:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.070 02:35:31 -- common/autotest_common.sh@10 -- # set +x 00:12:06.070 ************************************ 00:12:06.070 START TEST bdev_fio_rw_verify 00:12:06.070 ************************************ 00:12:06.070 02:35:31 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:06.070 02:35:31 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:06.070 02:35:31 -- common/autotest_common.sh@1316 -- # local 
fio_dir=/usr/src/fio 00:12:06.070 02:35:31 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:12:06.070 02:35:31 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:06.070 02:35:31 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:06.070 02:35:31 -- common/autotest_common.sh@1320 -- # shift 00:12:06.070 02:35:31 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:06.070 02:35:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:06.070 02:35:31 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:06.070 02:35:31 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:06.070 02:35:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:06.070 02:35:31 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:12:06.070 02:35:31 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:12:06.070 02:35:31 -- common/autotest_common.sh@1326 -- # break 00:12:06.070 02:35:31 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:06.070 02:35:31 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:06.329 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:06.329 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8
00:12:06.329 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:06.329 fio-3.35
00:12:06.329 Starting 16 threads
00:12:18.537 
00:12:18.537 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=122569: Thu Jul 11 02:35:42 2024
00:12:18.537 read: IOPS=68.4k, BW=267MiB/s (280MB/s)(2674MiB/10002msec)
00:12:18.537 slat (usec): min=2, max=32014, avg=41.98, stdev=433.99
00:12:18.537 clat (usec): min=9, max=32229, avg=336.36, stdev=1295.21
00:12:18.537 lat (usec): min=25, max=32254, avg=378.33, stdev=1365.75
00:12:18.537 clat percentiles (usec):
00:12:18.537 | 50.000th=[ 206], 99.000th=[ 1139], 99.900th=[16319], 99.990th=[24249],
00:12:18.537 | 99.999th=[30540]
00:12:18.537 write: IOPS=109k, BW=427MiB/s (448MB/s)(4226MiB/9888msec); 0 zone resets
00:12:18.537 slat (usec): min=6, max=56250, avg=72.01, stdev=629.85
00:12:18.537 clat (usec): min=10, max=56648, avg=433.80, stdev=1532.96
00:12:18.537 lat (usec): min=37, max=56712, avg=505.82, stdev=1657.07
00:12:18.537 clat percentiles (usec):
00:12:18.537 | 50.000th=[ 255], 99.000th=[ 8160], 99.900th=[16581], 99.990th=[30540],
00:12:18.537 | 99.999th=[51119]
00:12:18.537 bw ( KiB/s): min=257281, max=747400, per=99.22%, avg=434175.63, stdev=8870.94, samples=304
00:12:18.537 iops : min=64320, max=186850, avg=108543.47, stdev=2217.73, samples=304
00:12:18.537 lat (usec) : 10=0.01%, 20=0.01%, 50=0.30%, 100=7.57%, 250=47.67%
00:12:18.537 lat (usec) : 500=39.26%, 750=2.86%, 1000=0.67%
00:12:18.537 lat (msec) : 2=0.54%, 4=0.08%, 10=0.25%, 20=0.73%, 50=0.06%
00:12:18.537 lat (msec) : 100=0.01%
00:12:18.537 cpu : usr=58.04%, sys=2.04%, ctx=219156, majf=0, minf=85246
00:12:18.537 IO depths : 1=11.5%, 2=24.0%, 4=51.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:18.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:18.537 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:18.537 issued rwts: total=684526,1081739,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:18.537 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:18.537 
00:12:18.537 Run status group 0 (all jobs):
00:12:18.537 READ: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=2674MiB (2804MB), run=10002-10002msec
00:12:18.537 WRITE: bw=427MiB/s (448MB/s), 427MiB/s-427MiB/s (448MB/s-448MB/s), io=4226MiB (4431MB), run=9888-9888msec
00:12:18.537 -----------------------------------------------------
00:12:18.537 Suppressions used:
00:12:18.537 count bytes template
00:12:18.537 16 140 /usr/src/fio/parse.c
00:12:18.537 13033 1251168 /usr/src/fio/iolog.c
00:12:18.537 2 596 libcrypto.so
00:12:18.537 -----------------------------------------------------
00:12:18.537 
00:12:18.537 ************************************
00:12:18.537 END TEST bdev_fio_rw_verify
00:12:18.537 ************************************
00:12:18.537 
00:12:18.537 real 0m12.117s
00:12:18.537 user 1m36.035s
00:12:18.537 sys 0m4.111s
00:12:18.537 02:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:18.537 02:35:43 -- common/autotest_common.sh@10 -- # set +x
00:12:18.537 02:35:43 -- bdev/blockdev.sh@348 -- # rm -f
00:12:18.537 02:35:43 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:18.537 02:35:43 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:12:18.537 02:35:43 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
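In short, the bdev_fio_rw_verify run above reduces to one fio invocation, reconstructed here strictly from the commands traced in this log (every path and flag appears verbatim above; nothing new is introduced): the fio_bdev wrapper preloads the ASan runtime together with SPDK's fio plugin, then runs stock fio with the spdk_bdev ioengine against the bdevs described in bdev.json.

    # Reconstructed from the LD_PRELOAD and /usr/src/fio/fio lines traced above.
    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Each [job_*] section that fio_config_gen appended to bdev.fio pins one fio job to one bdev via filename=<bdev name>, which is why fio reports "Starting 16 threads" above.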
00:12:18.537 02:35:43 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:18.537 02:35:43 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:18.537 02:35:43 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:18.537 02:35:43 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:18.537 02:35:43 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:18.537 02:35:43 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:18.537 02:35:43 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:18.537 02:35:43 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:18.537 02:35:43 -- common/autotest_common.sh@1280 -- # cat 00:12:18.537 02:35:43 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:18.537 02:35:43 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:18.537 02:35:43 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:18.537 02:35:43 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:18.538 02:35:43 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "879be936-4864-4d58-9c7a-a738ad515f5f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "879be936-4864-4d58-9c7a-a738ad515f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9b7917b2-2031-56bf-91e5-9703e3864171"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9b7917b2-2031-56bf-91e5-9703e3864171",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8bf0827-d8e6-51aa-b22f-18c2685167c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8bf0827-d8e6-51aa-b22f-18c2685167c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' 
"aliases": [' ' "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "775e7cde-574a-5ee0-b9a5-186c5f61a4c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "775e7cde-574a-5ee0-b9a5-186c5f61a4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6db1f496-a705-551d-be60-8bfa4b8416a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6db1f496-a705-551d-be60-8bfa4b8416a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0c92c7ea-e53c-5531-bccc-291f1e30e4fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c92c7ea-e53c-5531-bccc-291f1e30e4fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8f169d5e-6272-51d6-b924-3d4212a087ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f169d5e-6272-51d6-b924-3d4212a087ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": 
true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "21a2c4b0-c98d-5634-8364-8a42220d2fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21a2c4b0-c98d-5634-8364-8a42220d2fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1c15cf63-7e54-5a08-bef6-79f10575d95a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1c15cf63-7e54-5a08-bef6-79f10575d95a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "20f48ee0-627a-5e35-aed8-dfebacc45973"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "20f48ee0-627a-5e35-aed8-dfebacc45973",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "84023315-afe3-4582-b3ad-fabf42876157"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 
131072,' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ce2e991b-ccf5-4473-87c5-ef1581a0a927",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "362dfd6c-f42c-49ce-a1cc-e7bbaa61e1b6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "95495d39-2895-486e-8755-3a2c59ab4204"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1302990e-4ac3-4e45-a051-c897f8a0f69b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "cfd0173d-ce8f-49d4-a4b4-0d86776afad6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7fec73d8-20a5-4e2b-a093-829a019bb3e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c5f3e6d3-8ae0-4cb0-b8eb-dfcde647b251",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2fb64507-90ae-4fcb-a0b8-accdfccf4e43"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2fb64507-90ae-4fcb-a0b8-accdfccf4e43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:18.538 02:35:43 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:18.538 Malloc1p0 00:12:18.538 Malloc1p1 00:12:18.538 Malloc2p0 00:12:18.538 Malloc2p1 00:12:18.538 Malloc2p2 00:12:18.538 Malloc2p3 00:12:18.538 Malloc2p4 00:12:18.538 Malloc2p5 00:12:18.538 Malloc2p6 00:12:18.538 Malloc2p7 00:12:18.538 TestPT 00:12:18.538 raid0 00:12:18.538 concat0 ]] 00:12:18.538 02:35:43 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "879be936-4864-4d58-9c7a-a738ad515f5f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "879be936-4864-4d58-9c7a-a738ad515f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9b7917b2-2031-56bf-91e5-9703e3864171"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9b7917b2-2031-56bf-91e5-9703e3864171",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8bf0827-d8e6-51aa-b22f-18c2685167c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8bf0827-d8e6-51aa-b22f-18c2685167c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ca8ea45-2aa9-5f2e-a37f-e35bfad879c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "775e7cde-574a-5ee0-b9a5-186c5f61a4c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "775e7cde-574a-5ee0-b9a5-186c5f61a4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6db1f496-a705-551d-be60-8bfa4b8416a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6db1f496-a705-551d-be60-8bfa4b8416a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0c92c7ea-e53c-5531-bccc-291f1e30e4fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c92c7ea-e53c-5531-bccc-291f1e30e4fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8f169d5e-6272-51d6-b924-3d4212a087ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f169d5e-6272-51d6-b924-3d4212a087ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "21a2c4b0-c98d-5634-8364-8a42220d2fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21a2c4b0-c98d-5634-8364-8a42220d2fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7430acf8-6b5b-5b8b-bdc0-dbd5a8265b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1c15cf63-7e54-5a08-bef6-79f10575d95a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1c15cf63-7e54-5a08-bef6-79f10575d95a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "20f48ee0-627a-5e35-aed8-dfebacc45973"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 
65536,' ' "uuid": "20f48ee0-627a-5e35-aed8-dfebacc45973",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "84023315-afe3-4582-b3ad-fabf42876157"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "84023315-afe3-4582-b3ad-fabf42876157",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ce2e991b-ccf5-4473-87c5-ef1581a0a927",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "362dfd6c-f42c-49ce-a1cc-e7bbaa61e1b6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "95495d39-2895-486e-8755-3a2c59ab4204"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "95495d39-2895-486e-8755-3a2c59ab4204",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1302990e-4ac3-4e45-a051-c897f8a0f69b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"cfd0173d-ce8f-49d4-a4b4-0d86776afad6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "efbce7ee-a7bd-4592-ab91-d8e6aafaaa89",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7fec73d8-20a5-4e2b-a093-829a019bb3e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c5f3e6d3-8ae0-4cb0-b8eb-dfcde647b251",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2fb64507-90ae-4fcb-a0b8-accdfccf4e43"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2fb64507-90ae-4fcb-a0b8-accdfccf4e43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 
00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:18.539 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.539 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:18.539 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:18.540 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.540 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:18.540 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:18.540 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.540 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:18.540 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:18.540 02:35:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:18.540 02:35:43 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:18.540 02:35:43 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:18.540 02:35:43 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.540 02:35:43 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:18.540 02:35:43 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:12:18.540 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.540 ************************************ 00:12:18.540 START TEST bdev_fio_trim 00:12:18.540 ************************************ 00:12:18.540 02:35:43 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.540 02:35:43 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.540 02:35:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:18.540 02:35:43 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:12:18.540 02:35:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:18.540 02:35:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.540 02:35:43 -- common/autotest_common.sh@1320 -- # shift 00:12:18.540 02:35:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:18.540 02:35:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:18.540 02:35:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.540 02:35:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:18.540 02:35:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:18.540 02:35:43 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:12:18.540 02:35:43 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:12:18.540 02:35:43 -- common/autotest_common.sh@1326 -- # break 00:12:18.540 02:35:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:18.540 02:35:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.540 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 
job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:18.540 fio-3.35 00:12:18.540 Starting 14 threads 00:12:30.743 00:12:30.743 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=122785: Thu Jul 11 02:35:54 2024 00:12:30.743 write: IOPS=118k, BW=460MiB/s (483MB/s)(4604MiB/10001msec); 0 zone resets 00:12:30.743 slat (usec): min=2, max=40050, avg=43.86, stdev=424.16 00:12:30.743 clat (usec): min=18, max=36347, avg=288.50, stdev=1078.11 00:12:30.743 lat (usec): min=34, max=40310, avg=332.36, stdev=1157.90 00:12:30.743 clat percentiles (usec): 00:12:30.743 | 50.000th=[ 198], 99.000th=[ 424], 99.900th=[16319], 99.990th=[20317], 00:12:30.743 | 99.999th=[28181] 00:12:30.743 bw ( KiB/s): min=339840, max=660030, per=100.00%, avg=471445.58, stdev=7788.09, samples=266 00:12:30.743 iops : min=84960, max=165007, avg=117861.37, stdev=1947.02, samples=266 00:12:30.743 trim: IOPS=118k, BW=460MiB/s (483MB/s)(4605MiB/10001msec); 0 zone resets 00:12:30.743 slat (usec): min=4, max=28037, avg=29.63, stdev=349.09 00:12:30.743 clat (usec): min=4, max=40311, avg=327.36, stdev=1147.67 00:12:30.743 lat (usec): min=14, max=40350, avg=356.99, stdev=1199.10 00:12:30.743 clat percentiles (usec): 00:12:30.743 | 50.000th=[ 229], 99.000th=[ 482], 99.900th=[16319], 99.990th=[20317], 00:12:30.743 | 99.999th=[28443] 00:12:30.743 bw ( KiB/s): min=339848, max=660030, per=100.00%, avg=471445.58, stdev=7788.13, samples=266 00:12:30.743 iops : min=84962, max=165007, avg=117861.26, stdev=1947.03, samples=266 00:12:30.743 lat (usec) : 10=0.01%, 20=0.01%, 50=0.28%, 100=4.36%, 250=60.73% 00:12:30.743 lat (usec) : 500=33.74%, 750=0.10%, 1000=0.02% 00:12:30.743 lat (msec) : 2=0.02%, 4=0.02%, 10=0.20%, 20=0.50%, 50=0.01% 00:12:30.743 cpu : usr=68.98%, sys=0.44%, ctx=170231, majf=0, minf=8883 00:12:30.743 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.743 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.743 issued rwts: total=0,1178748,1178753,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:30.743 00:12:30.743 Run status group 0 (all jobs): 00:12:30.743 WRITE: bw=460MiB/s (483MB/s), 460MiB/s-460MiB/s (483MB/s-483MB/s), io=4604MiB (4828MB), run=10001-10001msec 00:12:30.743 TRIM: bw=460MiB/s (483MB/s), 460MiB/s-460MiB/s (483MB/s-483MB/s), io=4605MiB (4828MB), run=10001-10001msec 00:12:30.743 ----------------------------------------------------- 00:12:30.743 Suppressions used: 00:12:30.743 count bytes template 00:12:30.743 14 129 /usr/src/fio/parse.c 00:12:30.743 2 596 libcrypto.so 
00:12:30.743 ----------------------------------------------------- 00:12:30.743 00:12:30.743 ************************************ 00:12:30.743 END TEST bdev_fio_trim 00:12:30.743 ************************************ 00:12:30.743 00:12:30.743 real 0m11.650s 00:12:30.743 user 1m39.152s 00:12:30.743 sys 0m1.428s 00:12:30.743 02:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.743 02:35:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 02:35:54 -- bdev/blockdev.sh@366 -- # rm -f 00:12:30.743 02:35:54 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.743 02:35:55 -- bdev/blockdev.sh@368 -- # popd 00:12:30.743 /home/vagrant/spdk_repo/spdk 00:12:30.743 02:35:55 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:30.743 00:12:30.743 real 0m24.060s 00:12:30.743 user 3m15.390s 00:12:30.743 sys 0m5.614s 00:12:30.743 02:35:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.743 02:35:55 -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 ************************************ 00:12:30.743 END TEST bdev_fio 00:12:30.743 ************************************ 00:12:30.743 02:35:55 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:30.743 02:35:55 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:30.743 02:35:55 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:30.743 02:35:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:30.743 02:35:55 -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 ************************************ 00:12:30.743 START TEST bdev_verify 00:12:30.743 ************************************ 00:12:30.743 02:35:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:30.743 [2024-07-11 02:35:55.114763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:30.743 [2024-07-11 02:35:55.115095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122972 ] 00:12:30.743 [2024-07-11 02:35:55.254219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:30.743 [2024-07-11 02:35:55.314910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.743 [2024-07-11 02:35:55.314916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.743 [2024-07-11 02:35:55.455306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:30.743 [2024-07-11 02:35:55.455784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:30.743 [2024-07-11 02:35:55.463245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:30.743 [2024-07-11 02:35:55.463499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:30.743 [2024-07-11 02:35:55.471299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:30.743 [2024-07-11 02:35:55.471537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:30.743 [2024-07-11 02:35:55.471709] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:30.743 [2024-07-11 02:35:55.580960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:30.743 [2024-07-11 02:35:55.581364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.743 [2024-07-11 02:35:55.581562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:12:30.743 [2024-07-11 02:35:55.581759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.743 [2024-07-11 02:35:55.584547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.743 [2024-07-11 02:35:55.584714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:30.743 Running I/O for 5 seconds... 
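The verify pass above launches bdevperf against the JSON bdev tree dumped earlier, with a 128-deep, 4 KiB verify workload on two reactors; the latency table that follows reports each bdev twice, once per core. A minimal sketch of that invocation (paths follow this CI layout; the -C comment is inferred from the per-core job pairs in the table, not from documentation):

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
      --json "$SPDK/test/bdev/bdev.json"  # bdev definitions dumped earlier in this log
      -q 128                              # 128 outstanding I/Os per job
      -o 4096                             # 4 KiB I/O size
      -w verify                           # write, read back, and compare
      -t 5                                # matches "Running I/O for 5 seconds"
      -C                                  # assumed: every core in the mask drives each bdev
      -m 0x3                              # reactors on cores 0 and 1
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"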
00:12:36.011 00:12:36.011 Latency(us) 00:12:36.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.011 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x1000 00:12:36.011 Malloc0 : 5.19 1418.97 5.54 0.00 0.00 89646.04 2189.50 141081.13 00:12:36.011 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x1000 length 0x1000 00:12:36.011 Malloc0 : 5.22 1411.38 5.51 0.00 0.00 90324.61 2293.76 205902.20 00:12:36.011 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x800 00:12:36.011 Malloc1p0 : 5.19 978.12 3.82 0.00 0.00 129887.30 4140.68 127735.62 00:12:36.011 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x800 length 0x800 00:12:36.011 Malloc1p0 : 5.22 989.01 3.86 0.00 0.00 128713.56 4110.89 125829.12 00:12:36.011 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x800 00:12:36.011 Malloc1p1 : 5.19 977.89 3.82 0.00 0.00 129732.89 3783.21 123922.62 00:12:36.011 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x800 length 0x800 00:12:36.011 Malloc1p1 : 5.22 988.83 3.86 0.00 0.00 128520.23 3678.95 124875.87 00:12:36.011 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x200 00:12:36.011 Malloc2p0 : 5.20 977.60 3.82 0.00 0.00 129580.92 3678.95 120109.61 00:12:36.011 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x200 length 0x200 00:12:36.011 Malloc2p0 : 5.22 988.64 3.86 0.00 0.00 128379.32 3708.74 123922.62 00:12:36.011 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x200 00:12:36.011 Malloc2p1 : 5.20 977.33 3.82 0.00 0.00 129454.29 3798.11 116296.61 00:12:36.011 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x200 length 0x200 00:12:36.011 Malloc2p1 : 5.22 988.45 3.86 0.00 0.00 128239.22 3753.43 125829.12 00:12:36.011 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x200 00:12:36.011 Malloc2p2 : 5.20 977.02 3.82 0.00 0.00 129286.84 3634.27 112960.23 00:12:36.011 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x200 length 0x200 00:12:36.011 Malloc2p2 : 5.23 988.28 3.86 0.00 0.00 128056.29 3649.16 126782.37 00:12:36.011 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x200 00:12:36.011 Malloc2p3 : 5.20 976.76 3.82 0.00 0.00 129149.47 3559.80 110577.11 00:12:36.011 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x200 length 0x200 00:12:36.011 Malloc2p3 : 5.23 988.09 3.86 0.00 0.00 127912.38 3515.11 126782.37 00:12:36.011 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x0 length 0x200 00:12:36.011 Malloc2p4 : 5.20 976.50 3.81 0.00 0.00 129026.70 
3723.64 110577.11 00:12:36.011 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.011 Verification LBA range: start 0x200 length 0x200 00:12:36.011 Malloc2p4 : 5.23 987.92 3.86 0.00 0.00 127753.06 3678.95 126782.37 00:12:36.269 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x0 length 0x200 00:12:36.269 Malloc2p5 : 5.20 976.24 3.81 0.00 0.00 128879.82 3708.74 110100.48 00:12:36.269 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x200 length 0x200 00:12:36.269 Malloc2p5 : 5.23 987.73 3.86 0.00 0.00 127590.46 3723.64 126782.37 00:12:36.269 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x0 length 0x200 00:12:36.269 Malloc2p6 : 5.21 975.98 3.81 0.00 0.00 128721.14 3842.79 110577.11 00:12:36.269 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x200 length 0x200 00:12:36.269 Malloc2p6 : 5.23 987.55 3.86 0.00 0.00 127426.21 3991.74 125829.12 00:12:36.269 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x0 length 0x200 00:12:36.269 Malloc2p7 : 5.21 975.73 3.81 0.00 0.00 128539.34 3589.59 111053.73 00:12:36.269 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.269 Verification LBA range: start 0x200 length 0x200 00:12:36.269 Malloc2p7 : 5.23 987.38 3.86 0.00 0.00 127252.95 3604.48 126782.37 00:12:36.270 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x0 length 0x1000 00:12:36.270 TestPT : 5.21 975.46 3.81 0.00 0.00 128358.94 1720.32 110577.11 00:12:36.270 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x1000 length 0x1000 00:12:36.270 TestPT : 5.23 957.20 3.74 0.00 0.00 131043.73 7030.23 175398.17 00:12:36.270 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x0 length 0x2000 00:12:36.270 raid0 : 5.21 975.21 3.81 0.00 0.00 128101.16 3678.95 110577.11 00:12:36.270 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x2000 length 0x2000 00:12:36.270 raid0 : 5.23 987.01 3.86 0.00 0.00 126861.93 3634.27 125829.12 00:12:36.270 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x0 length 0x2000 00:12:36.270 concat0 : 5.21 974.97 3.81 0.00 0.00 127946.47 3678.95 111053.73 00:12:36.270 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x2000 length 0x2000 00:12:36.270 concat0 : 5.23 986.84 3.85 0.00 0.00 126699.81 3664.06 126782.37 00:12:36.270 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x0 length 0x1000 00:12:36.270 raid1 : 5.21 974.68 3.81 0.00 0.00 127794.22 4259.84 111530.36 00:12:36.270 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x1000 length 0x1000 00:12:36.270 raid1 : 5.23 986.65 3.85 0.00 0.00 126534.96 4259.84 126782.37 00:12:36.270 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: 
start 0x0 length 0x4e2 00:12:36.270 AIO0 : 5.21 974.11 3.81 0.00 0.00 127638.20 3842.79 112006.98 00:12:36.270 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.270 Verification LBA range: start 0x4e2 length 0x4e2 00:12:36.270 AIO0 : 5.23 986.11 3.85 0.00 0.00 126349.38 4766.25 126782.37 00:12:36.270 =================================================================================================================== 00:12:36.270 Total : 32259.65 126.01 0.00 0.00 124950.86 1720.32 205902.20 00:12:36.528 ************************************ 00:12:36.528 END TEST bdev_verify 00:12:36.528 ************************************ 00:12:36.528 00:12:36.528 real 0m6.435s 00:12:36.528 user 0m11.623s 00:12:36.528 sys 0m0.577s 00:12:36.528 02:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.528 02:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:36.528 02:36:01 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:36.528 02:36:01 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:36.528 02:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.528 02:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:36.528 ************************************ 00:12:36.528 START TEST bdev_verify_big_io 00:12:36.528 ************************************ 00:12:36.528 02:36:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:36.528 [2024-07-11 02:36:01.602015] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
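The big-I/O pass repeats the verify workload with 64 KiB requests (-o 65536), so bdevperf clamps the 128-deep queue wherever a target cannot hold that many outstanding verify I/Os at once; the per-bdev warnings below record each clamp (32 for the small Malloc2p partitions, 78 for AIO0). A hedged helper for eyeballing that capacity in the suite's own jq style, assuming a running SPDK app reachable over the default RPC socket:

    # List each bdev's capacity in 64 KiB units from a live RPC dump.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_get_bdevs \
      | jq -r '.[] | "\(.name): \(.num_blocks * .block_size / 65536 | floor) x 64KiB"'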
00:12:36.528 [2024-07-11 02:36:01.602263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123071 ] 00:12:36.786 [2024-07-11 02:36:01.751936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:36.786 [2024-07-11 02:36:01.815704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.786 [2024-07-11 02:36:01.815729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.044 [2024-07-11 02:36:01.959239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:37.044 [2024-07-11 02:36:01.959400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:37.044 [2024-07-11 02:36:01.967156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:37.044 [2024-07-11 02:36:01.967250] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:37.044 [2024-07-11 02:36:01.975218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:37.044 [2024-07-11 02:36:01.975287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:37.044 [2024-07-11 02:36:01.975388] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:37.044 [2024-07-11 02:36:02.070262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:37.044 [2024-07-11 02:36:02.070414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.044 [2024-07-11 02:36:02.070481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:12:37.044 [2024-07-11 02:36:02.070508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.044 [2024-07-11 02:36:02.073352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.044 [2024-07-11 02:36:02.073420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:37.302 [2024-07-11 02:36:02.257594] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.258749] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.260489] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.262243] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.263368] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.264984] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.266071] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:37.302 [2024-07-11 02:36:02.267826] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.268944] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.270615] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.271742] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.273426] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.274523] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.276238] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.277895] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.279008] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:37.303 [2024-07-11 02:36:02.305898] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:37.303 [2024-07-11 02:36:02.308392] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:37.303 Running I/O for 5 seconds... 00:12:43.863 00:12:43.863 Latency(us) 00:12:43.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.863 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x100 00:12:43.863 Malloc0 : 5.45 438.26 27.39 0.00 0.00 284003.73 16324.42 804543.77 00:12:43.863 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x100 length 0x100 00:12:43.863 Malloc0 : 5.44 509.28 31.83 0.00 0.00 245992.85 14477.50 831234.79 00:12:43.863 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x80 00:12:43.863 Malloc1p0 : 5.63 210.97 13.19 0.00 0.00 576128.15 36223.53 983754.94 00:12:43.863 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x80 length 0x80 00:12:43.863 Malloc1p0 : 5.44 378.40 23.65 0.00 0.00 328509.84 29789.09 751161.72 00:12:43.863 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x80 00:12:43.863 Malloc1p1 : 5.76 144.01 9.00 0.00 0.00 841150.50 36700.16 1692973.61 00:12:43.863 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x80 length 0x80 00:12:43.863 Malloc1p1 : 5.57 165.48 10.34 0.00 0.00 738922.02 29550.78 1494697.43 00:12:43.863 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p0 : 5.55 81.32 5.08 0.00 0.00 371257.97 6940.86 640584.61 00:12:43.863 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p0 : 5.44 94.61 5.91 0.00 0.00 320251.35 5540.77 470905.95 00:12:43.863 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p1 : 5.55 81.30 5.08 0.00 0.00 369853.56 6702.55 629145.60 00:12:43.863 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p1 : 5.44 94.59 5.91 0.00 0.00 319283.64 5272.67 463279.94 00:12:43.863 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p2 : 5.55 81.29 5.08 0.00 0.00 368461.50 6494.02 613893.59 00:12:43.863 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p2 : 5.49 98.03 6.13 0.00 0.00 310461.74 5540.77 451840.93 00:12:43.863 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p3 : 5.55 81.27 5.08 0.00 0.00 367042.65 7030.23 602454.57 00:12:43.863 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p3 : 5.49 98.01 6.13 0.00 0.00 309502.90 4974.78 442308.42 00:12:43.863 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p4 : 5.55 81.25 5.08 0.00 0.00 365671.20 8043.05 587202.56 00:12:43.863 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p4 : 5.49 97.99 6.12 0.00 0.00 308621.90 5510.98 432775.91 00:12:43.863 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p5 : 5.55 81.24 5.08 0.00 0.00 363960.24 6672.76 568137.54 00:12:43.863 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p5 : 5.49 97.98 6.12 0.00 0.00 307659.35 5540.77 425149.91 00:12:43.863 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p6 : 5.63 84.20 5.26 0.00 0.00 350975.38 7596.22 556698.53 00:12:43.863 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p6 : 5.49 97.96 6.12 0.00 0.00 306762.58 6434.44 413710.89 00:12:43.863 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x0 length 0x20 00:12:43.863 Malloc2p7 : 5.63 84.19 5.26 0.00 0.00 349559.84 7417.48 541446.52 00:12:43.863 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:43.863 Verification LBA range: start 0x20 length 0x20 00:12:43.863 Malloc2p7 : 5.49 97.94 6.12 0.00 0.00 305829.09 5540.77 404178.39 00:12:43.864 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x0 length 0x100 00:12:43.864 TestPT : 5.84 147.01 9.19 0.00 0.00 785381.81 40274.85 1685347.61 00:12:43.864 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x100 length 0x100 00:12:43.864 TestPT : 5.60 165.78 10.36 0.00 0.00 713438.97 37891.72 1502323.43 00:12:43.864 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x0 length 0x200 00:12:43.864 raid0 : 5.71 155.78 9.74 0.00 0.00 739677.13 34317.03 1700599.62 00:12:43.864 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x200 length 0x200 00:12:43.864 raid0 : 5.62 170.10 10.63 0.00 0.00 687542.25 30265.72 1471819.40 00:12:43.864 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x0 length 0x200 00:12:43.864 concat0 : 5.71 163.95 10.25 0.00 0.00 693592.26 32648.84 1715851.64 00:12:43.864 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x200 length 0x200 00:12:43.864 concat0 : 5.62 175.49 10.97 0.00 
0.00 660349.11 28359.21 1479445.41 00:12:43.864 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x0 length 0x100 00:12:43.864 raid1 : 5.73 188.48 11.78 0.00 0.00 594788.08 17158.52 1723477.64 00:12:43.864 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x100 length 0x100 00:12:43.864 raid1 : 5.62 185.89 11.62 0.00 0.00 617200.03 13941.29 1479445.41 00:12:43.864 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x0 length 0x4e 00:12:43.864 AIO0 : 5.77 195.88 12.24 0.00 0.00 344748.15 1362.85 1014258.97 00:12:43.864 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:43.864 Verification LBA range: start 0x4e length 0x4e 00:12:43.864 AIO0 : 5.63 196.95 12.31 0.00 0.00 353484.30 1697.98 854112.81 00:12:43.864 =================================================================================================================== 00:12:43.864 Total : 5024.88 314.05 0.00 0.00 456318.60 1362.85 1723477.64 00:12:43.864 00:12:43.864 real 0m7.092s 00:12:43.864 user 0m12.985s 00:12:43.864 sys 0m0.564s 00:12:43.864 ************************************ 00:12:43.864 END TEST bdev_verify_big_io 00:12:43.864 ************************************ 00:12:43.864 02:36:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.864 02:36:08 -- common/autotest_common.sh@10 -- # set +x 00:12:43.864 02:36:08 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.864 02:36:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:43.864 02:36:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.864 02:36:08 -- common/autotest_common.sh@10 -- # set +x 00:12:43.864 ************************************ 00:12:43.864 START TEST bdev_write_zeroes 00:12:43.864 ************************************ 00:12:43.864 02:36:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.864 [2024-07-11 02:36:08.744890] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:43.864 [2024-07-11 02:36:08.745381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123213 ] 00:12:43.864 [2024-07-11 02:36:08.890165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.864 [2024-07-11 02:36:08.953161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.122 [2024-07-11 02:36:09.092265] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:44.122 [2024-07-11 02:36:09.092684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:44.122 [2024-07-11 02:36:09.100210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:44.122 [2024-07-11 02:36:09.100445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:44.122 [2024-07-11 02:36:09.108246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:44.122 [2024-07-11 02:36:09.108457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:44.122 [2024-07-11 02:36:09.108593] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:44.122 [2024-07-11 02:36:09.207851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:44.122 [2024-07-11 02:36:09.208258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.122 [2024-07-11 02:36:09.208434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:12:44.122 [2024-07-11 02:36:09.208552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.122 [2024-07-11 02:36:09.211086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.122 [2024-07-11 02:36:09.211279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:44.379 Running I/O for 1 seconds... 
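This pass drives a write_zeroes workload for one second against each bdev on a single reactor. Every bdev in the dump above advertises "write_zeroes": true, which can be confirmed with the same jq filter style the suite used for unmap when assembling the fio trim job file, assuming a bdevs array holding the per-bdev JSON records as in the traced script:

    # Keep only bdevs that advertise write_zeroes support
    # (sketch; mirrors the unmap filter traced earlier in this log).
    printf '%s\n' "${bdevs[@]}" \
      | jq -r 'select(.supported_io_types.write_zeroes == true) | .name'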
00:12:45.751 00:12:45.751 Latency(us) 00:12:45.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.751 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.751 Malloc0 : 1.03 6089.01 23.79 0.00 0.00 21005.75 662.81 38606.66 00:12:45.751 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc1p0 : 1.03 6081.59 23.76 0.00 0.00 21004.18 897.40 37653.41 00:12:45.752 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc1p1 : 1.03 6074.01 23.73 0.00 0.00 20988.88 841.54 36938.47 00:12:45.752 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p0 : 1.03 6065.46 23.69 0.00 0.00 20971.46 1042.62 35746.91 00:12:45.752 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p1 : 1.04 6056.80 23.66 0.00 0.00 20958.09 826.65 35031.97 00:12:45.752 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p2 : 1.04 6048.89 23.63 0.00 0.00 20948.81 856.44 34078.72 00:12:45.752 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p3 : 1.04 6041.82 23.60 0.00 0.00 20931.96 826.65 33602.09 00:12:45.752 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p4 : 1.04 6034.33 23.57 0.00 0.00 20915.46 1020.28 32410.53 00:12:45.752 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p5 : 1.04 6027.15 23.54 0.00 0.00 20906.86 819.20 31695.59 00:12:45.752 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p6 : 1.06 6063.33 23.68 0.00 0.00 20746.39 845.27 30980.65 00:12:45.752 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 Malloc2p7 : 1.06 6055.56 23.65 0.00 0.00 20725.26 826.65 30027.40 00:12:45.752 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 TestPT : 1.06 6047.71 23.62 0.00 0.00 20717.44 1064.96 28835.84 00:12:45.752 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 raid0 : 1.06 6038.16 23.59 0.00 0.00 20700.48 1414.98 27405.96 00:12:45.752 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 concat0 : 1.06 6029.01 23.55 0.00 0.00 20656.65 1482.01 25856.93 00:12:45.752 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 raid1 : 1.06 6017.81 23.51 0.00 0.00 20619.55 2293.76 25380.31 00:12:45.752 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:45.752 AIO0 : 1.06 6000.10 23.44 0.00 0.00 20587.52 1519.24 25380.31 00:12:45.752 =================================================================================================================== 00:12:45.752 Total : 96770.75 378.01 0.00 0.00 20835.18 662.81 38606.66 00:12:46.009 ************************************ 00:12:46.009 END TEST bdev_write_zeroes 00:12:46.009 ************************************ 00:12:46.009 00:12:46.009 real 0m2.219s 00:12:46.009 user 0m1.644s 00:12:46.009 sys 0m0.392s 00:12:46.009 02:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.009 02:36:10 -- common/autotest_common.sh@10 -- # set +x 00:12:46.010 02:36:10 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:46.010 02:36:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:46.010 02:36:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.010 02:36:10 -- common/autotest_common.sh@10 -- # set +x 00:12:46.010 ************************************ 00:12:46.010 START TEST bdev_json_nonenclosed 00:12:46.010 ************************************ 00:12:46.010 02:36:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:46.010 [2024-07-11 02:36:11.010631] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:46.010 [2024-07-11 02:36:11.010997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123263 ] 00:12:46.267 [2024-07-11 02:36:11.146292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.267 [2024-07-11 02:36:11.203366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.267 [2024-07-11 02:36:11.203880] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:46.267 [2024-07-11 02:36:11.204036] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:46.267 00:12:46.267 real 0m0.343s 00:12:46.267 user 0m0.127s 00:12:46.267 sys 0m0.115s 00:12:46.267 02:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.267 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:12:46.267 ************************************ 00:12:46.267 END TEST bdev_json_nonenclosed 00:12:46.267 ************************************ 00:12:46.267 02:36:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:46.267 02:36:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:46.267 02:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.267 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:12:46.525 ************************************ 00:12:46.526 START TEST bdev_json_nonarray 00:12:46.526 ************************************ 00:12:46.526 02:36:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:46.526 [2024-07-11 02:36:11.417770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:46.526 [2024-07-11 02:36:11.418679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123285 ] 00:12:46.526 [2024-07-11 02:36:11.564604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.784 [2024-07-11 02:36:11.631940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.784 [2024-07-11 02:36:11.632471] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:46.784 [2024-07-11 02:36:11.632620] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:46.784 00:12:46.784 real 0m0.376s 00:12:46.784 user 0m0.175s 00:12:46.784 sys 0m0.100s 00:12:46.784 02:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.784 ************************************ 00:12:46.784 END TEST bdev_json_nonarray 00:12:46.784 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 ************************************ 00:12:46.784 02:36:11 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:46.784 02:36:11 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:46.784 02:36:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:46.784 02:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.784 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 ************************************ 00:12:46.784 START TEST bdev_qos 00:12:46.784 ************************************ 00:12:46.784 02:36:11 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:12:46.784 02:36:11 -- bdev/blockdev.sh@444 -- # QOS_PID=123316 00:12:46.784 02:36:11 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 123316' 00:12:46.784 02:36:11 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:46.784 Process qos testing pid: 123316 00:12:46.784 02:36:11 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:46.784 02:36:11 -- bdev/blockdev.sh@447 -- # waitforlisten 123316 00:12:46.784 02:36:11 -- common/autotest_common.sh@819 -- # '[' -z 123316 ']' 00:12:46.784 02:36:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.784 02:36:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:46.784 02:36:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.784 02:36:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:46.784 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 [2024-07-11 02:36:11.844792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:46.784 [2024-07-11 02:36:11.845208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123316 ] 00:12:47.042 [2024-07-11 02:36:11.993734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.042 [2024-07-11 02:36:12.088619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.980 02:36:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.980 02:36:12 -- common/autotest_common.sh@852 -- # return 0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:47.980 Malloc_0 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:47.980 02:36:12 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:12:47.980 02:36:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:47.980 02:36:12 -- common/autotest_common.sh@889 -- # local i 00:12:47.980 02:36:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:47.980 02:36:12 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:47.980 [ 00:12:47.980 { 00:12:47.980 "name": "Malloc_0", 00:12:47.980 "aliases": [ 00:12:47.980 "3e415969-1dde-4e2a-9f65-e0cf7dfdb355" 00:12:47.980 ], 00:12:47.980 "product_name": "Malloc disk", 00:12:47.980 "block_size": 512, 00:12:47.980 "num_blocks": 262144, 00:12:47.980 "uuid": "3e415969-1dde-4e2a-9f65-e0cf7dfdb355", 00:12:47.980 "assigned_rate_limits": { 00:12:47.980 "rw_ios_per_sec": 0, 00:12:47.980 "rw_mbytes_per_sec": 0, 00:12:47.980 "r_mbytes_per_sec": 0, 00:12:47.980 "w_mbytes_per_sec": 0 00:12:47.980 }, 00:12:47.980 "claimed": false, 00:12:47.980 "zoned": false, 00:12:47.980 "supported_io_types": { 00:12:47.980 "read": true, 00:12:47.980 "write": true, 00:12:47.980 "unmap": true, 00:12:47.980 "write_zeroes": true, 00:12:47.980 "flush": true, 00:12:47.980 "reset": true, 00:12:47.980 "compare": false, 00:12:47.980 "compare_and_write": false, 00:12:47.980 "abort": true, 00:12:47.980 "nvme_admin": false, 00:12:47.980 "nvme_io": false 00:12:47.980 }, 00:12:47.980 "memory_domains": [ 00:12:47.980 { 00:12:47.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.980 "dma_device_type": 2 00:12:47.980 } 00:12:47.980 ], 00:12:47.980 "driver_specific": {} 00:12:47.980 } 00:12:47.980 ] 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@895 -- # return 0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # 
set +x 00:12:47.980 Null_1 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:47.980 02:36:12 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:12:47.980 02:36:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:47.980 02:36:12 -- common/autotest_common.sh@889 -- # local i 00:12:47.980 02:36:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:47.980 02:36:12 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:47.980 02:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.980 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:47.980 [ 00:12:47.980 { 00:12:47.980 "name": "Null_1", 00:12:47.980 "aliases": [ 00:12:47.980 "d43dddd8-6fe9-4f0d-a5e6-46c1e15a605f" 00:12:47.980 ], 00:12:47.980 "product_name": "Null disk", 00:12:47.980 "block_size": 512, 00:12:47.980 "num_blocks": 262144, 00:12:47.980 "uuid": "d43dddd8-6fe9-4f0d-a5e6-46c1e15a605f", 00:12:47.980 "assigned_rate_limits": { 00:12:47.980 "rw_ios_per_sec": 0, 00:12:47.980 "rw_mbytes_per_sec": 0, 00:12:47.980 "r_mbytes_per_sec": 0, 00:12:47.980 "w_mbytes_per_sec": 0 00:12:47.980 }, 00:12:47.980 "claimed": false, 00:12:47.980 "zoned": false, 00:12:47.980 "supported_io_types": { 00:12:47.980 "read": true, 00:12:47.980 "write": true, 00:12:47.980 "unmap": false, 00:12:47.980 "write_zeroes": true, 00:12:47.980 "flush": false, 00:12:47.980 "reset": true, 00:12:47.980 "compare": false, 00:12:47.980 "compare_and_write": false, 00:12:47.980 "abort": true, 00:12:47.980 "nvme_admin": false, 00:12:47.980 "nvme_io": false 00:12:47.980 }, 00:12:47.980 "driver_specific": {} 00:12:47.980 } 00:12:47.980 ] 00:12:47.980 02:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.980 02:36:12 -- common/autotest_common.sh@895 -- # return 0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:47.980 02:36:12 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:47.980 02:36:12 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:47.980 02:36:12 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:47.980 02:36:12 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:47.980 02:36:12 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:47.980 02:36:12 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:47.980 02:36:12 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:47.980 02:36:12 -- bdev/blockdev.sh@376 -- # tail -1 00:12:47.980 Running I/O for 60 seconds... 
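While the 60-second baseline above runs, the setup so far can be replayed by hand: the suite created a 128 MiB Malloc_0 and a same-sized Null_1 over RPC, and the iostat.py loop samples the unthrottled rate that the next lines turn into an IOPS limit. A hedged replay against a running SPDK app (the 20000 figure mirrors what the script derives below from the measured rate; it is not a fixed constant):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Malloc_0 128 512    # 128 MiB of RAM-backed blocks, 512 B each
    $rpc bdev_null_create Null_1 128 512           # same geometry, no backing memory
    $rpc bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0
    # Re-measure the throttled device; the test accepts results within +/-10% of the limit.
    /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0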
00:12:53.292 02:36:18 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 81089.67 324358.68 0.00 0.00 327680.00 0.00 0.00 ' 00:12:53.292 02:36:18 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:53.292 02:36:18 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:53.292 02:36:18 -- bdev/blockdev.sh@378 -- # iostat_result=81089.67 00:12:53.292 02:36:18 -- bdev/blockdev.sh@383 -- # echo 81089 00:12:53.292 02:36:18 -- bdev/blockdev.sh@414 -- # io_result=81089 00:12:53.292 02:36:18 -- bdev/blockdev.sh@416 -- # iops_limit=20000 00:12:53.292 02:36:18 -- bdev/blockdev.sh@417 -- # '[' 20000 -gt 1000 ']' 00:12:53.292 02:36:18 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:12:53.292 02:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.292 02:36:18 -- common/autotest_common.sh@10 -- # set +x 00:12:53.292 02:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.292 02:36:18 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:12:53.292 02:36:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:53.292 02:36:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:53.292 02:36:18 -- common/autotest_common.sh@10 -- # set +x 00:12:53.292 ************************************ 00:12:53.292 START TEST bdev_qos_iops 00:12:53.292 ************************************ 00:12:53.292 02:36:18 -- common/autotest_common.sh@1104 -- # run_qos_test 20000 IOPS Malloc_0 00:12:53.292 02:36:18 -- bdev/blockdev.sh@387 -- # local qos_limit=20000 00:12:53.292 02:36:18 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:53.292 02:36:18 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:53.292 02:36:18 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:53.292 02:36:18 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:53.292 02:36:18 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:53.292 02:36:18 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:53.292 02:36:18 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:53.292 02:36:18 -- bdev/blockdev.sh@376 -- # tail -1 00:12:58.574 02:36:23 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 19989.12 79956.49 0.00 0.00 81440.00 0.00 0.00 ' 00:12:58.574 02:36:23 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:58.574 02:36:23 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:58.574 02:36:23 -- bdev/blockdev.sh@378 -- # iostat_result=19989.12 00:12:58.574 02:36:23 -- bdev/blockdev.sh@383 -- # echo 19989 00:12:58.574 ************************************ 00:12:58.574 END TEST bdev_qos_iops 00:12:58.574 ************************************ 00:12:58.574 02:36:23 -- bdev/blockdev.sh@390 -- # qos_result=19989 00:12:58.574 02:36:23 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:58.574 02:36:23 -- bdev/blockdev.sh@394 -- # lower_limit=18000 00:12:58.574 02:36:23 -- bdev/blockdev.sh@395 -- # upper_limit=22000 00:12:58.574 02:36:23 -- bdev/blockdev.sh@398 -- # '[' 19989 -lt 18000 ']' 00:12:58.574 02:36:23 -- bdev/blockdev.sh@398 -- # '[' 19989 -gt 22000 ']' 00:12:58.574 00:12:58.574 real 0m5.212s 00:12:58.574 user 0m0.115s 00:12:58.574 sys 0m0.015s 00:12:58.574 02:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.574 02:36:23 -- common/autotest_common.sh@10 -- # set +x 00:12:58.574 02:36:23 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:58.574 02:36:23 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:58.574 02:36:23 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:58.574 02:36:23 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:58.574 02:36:23 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:58.574 02:36:23 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:58.574 02:36:23 -- bdev/blockdev.sh@376 -- # tail -1 00:13:03.843 02:36:28 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 27408.51 109634.05 0.00 0.00 111616.00 0.00 0.00 ' 00:13:03.843 02:36:28 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:03.843 02:36:28 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:03.843 02:36:28 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:03.843 02:36:28 -- bdev/blockdev.sh@380 -- # iostat_result=111616.00 00:13:03.843 02:36:28 -- bdev/blockdev.sh@383 -- # echo 111616 00:13:03.843 02:36:28 -- bdev/blockdev.sh@425 -- # bw_limit=111616 00:13:03.843 02:36:28 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:13:03.843 02:36:28 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:13:03.843 02:36:28 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:13:03.843 02:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.843 02:36:28 -- common/autotest_common.sh@10 -- # set +x 00:13:03.843 02:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.843 02:36:28 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:13:03.843 02:36:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:03.843 02:36:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.843 02:36:28 -- common/autotest_common.sh@10 -- # set +x 00:13:03.843 ************************************ 00:13:03.843 START TEST bdev_qos_bw 00:13:03.843 ************************************ 00:13:03.843 02:36:28 -- common/autotest_common.sh@1104 -- # run_qos_test 10 BANDWIDTH Null_1 00:13:03.843 02:36:28 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:13:03.843 02:36:28 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:03.843 02:36:28 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:03.843 02:36:28 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:03.843 02:36:28 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:03.843 02:36:28 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:03.843 02:36:28 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:03.843 02:36:28 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:03.843 02:36:28 -- bdev/blockdev.sh@376 -- # tail -1 00:13:09.110 02:36:33 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2556.44 10225.77 0.00 0.00 10404.00 0.00 0.00 ' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@380 -- # iostat_result=10404.00 00:13:09.110 02:36:33 -- bdev/blockdev.sh@383 -- # echo 10404 00:13:09.110 02:36:33 -- bdev/blockdev.sh@390 -- # qos_result=10404 00:13:09.110 02:36:33 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:13:09.110 02:36:33 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:13:09.110 02:36:33 -- bdev/blockdev.sh@395 -- # upper_limit=11264 00:13:09.110 02:36:33 -- bdev/blockdev.sh@398 -- # '[' 10404 -lt 9216 ']' 00:13:09.110 02:36:33 -- bdev/blockdev.sh@398 -- # '[' 
10404 -gt 11264 ']' 00:13:09.110 00:13:09.110 real 0m5.212s 00:13:09.110 user 0m0.112s 00:13:09.110 sys 0m0.017s 00:13:09.110 ************************************ 00:13:09.110 END TEST bdev_qos_bw 00:13:09.110 ************************************ 00:13:09.110 02:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.110 02:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:09.110 02:36:33 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:09.110 02:36:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.110 02:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:09.110 02:36:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.110 02:36:33 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:09.110 02:36:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:09.110 02:36:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.110 02:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:09.110 ************************************ 00:13:09.110 START TEST bdev_qos_ro_bw 00:13:09.110 ************************************ 00:13:09.110 02:36:33 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:09.110 02:36:33 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:09.110 02:36:33 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:09.110 02:36:33 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:09.110 02:36:33 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:09.110 02:36:33 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:09.110 02:36:33 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:09.110 02:36:33 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:09.110 02:36:33 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:09.110 02:36:33 -- bdev/blockdev.sh@376 -- # tail -1 00:13:14.375 02:36:39 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.48 2045.94 0.00 0.00 2060.00 0.00 0.00 ' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:13:14.376 02:36:39 -- bdev/blockdev.sh@383 -- # echo 2060 00:13:14.376 02:36:39 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:13:14.376 02:36:39 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:14.376 02:36:39 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:14.376 02:36:39 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:14.376 02:36:39 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:13:14.376 02:36:39 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:13:14.376 00:13:14.376 real 0m5.158s 00:13:14.376 user 0m0.104s 00:13:14.376 sys 0m0.029s 00:13:14.376 02:36:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.376 ************************************ 00:13:14.376 END TEST bdev_qos_ro_bw 00:13:14.376 ************************************ 00:13:14.376 02:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:14.376 02:36:39 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:14.376 02:36:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.376 02:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:14.634 02:36:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.634 02:36:39 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:14.634 02:36:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.634 02:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:14.892 00:13:14.892 Latency(us) 00:13:14.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.892 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:14.892 Malloc_0 : 26.63 27693.95 108.18 0.00 0.00 9156.93 2100.13 503316.48 00:13:14.892 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:14.892 Null_1 : 26.76 27903.91 109.00 0.00 0.00 9154.81 644.19 125829.12 00:13:14.893 =================================================================================================================== 00:13:14.893 Total : 55597.86 217.18 0.00 0.00 9155.87 644.19 503316.48 00:13:14.893 0 00:13:14.893 02:36:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.893 02:36:39 -- bdev/blockdev.sh@459 -- # killprocess 123316 00:13:14.893 02:36:39 -- common/autotest_common.sh@926 -- # '[' -z 123316 ']' 00:13:14.893 02:36:39 -- common/autotest_common.sh@930 -- # kill -0 123316 00:13:14.893 02:36:39 -- common/autotest_common.sh@931 -- # uname 00:13:14.893 02:36:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.893 02:36:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123316 00:13:14.893 killing process with pid 123316 00:13:14.893 Received shutdown signal, test time was about 26.787116 seconds 00:13:14.893 00:13:14.893 Latency(us) 00:13:14.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.893 =================================================================================================================== 00:13:14.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:14.893 02:36:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:14.893 02:36:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:14.893 02:36:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123316' 00:13:14.893 02:36:39 -- common/autotest_common.sh@945 -- # kill 123316 00:13:14.893 02:36:39 -- common/autotest_common.sh@950 -- # wait 123316 00:13:15.151 ************************************ 00:13:15.151 END TEST bdev_qos 00:13:15.151 ************************************ 00:13:15.151 02:36:40 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:15.151 00:13:15.151 real 0m28.288s 00:13:15.151 user 0m29.012s 00:13:15.151 sys 0m0.603s 00:13:15.151 02:36:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.151 02:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:15.151 02:36:40 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:15.151 02:36:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.151 02:36:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.151 02:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:15.151 ************************************ 00:13:15.151 START TEST bdev_qd_sampling 00:13:15.151 ************************************ 00:13:15.151 Process bdev QD sampling period testing pid: 123824 00:13:15.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
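[Editor's note: the qd_sampling suite starting here adds one RPC pair on top of the usual bdev setup. A minimal sketch of the check it performs, assuming the bdevperf just launched is up on /var/tmp/spdk.sock and Malloc_QD exists as created below; the field meanings are inferred from the iostat dump later in this run (io_time 30 ms at queue_depth 512 giving weighted_io_time 15360).]
  cd /home/vagrant/spdk_repo/spdk
  # Sample Malloc_QD's queue depth every 10 ms.
  scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  # With sampling enabled, bdev_get_iostat grows extra per-bdev fields.
  stats=$(scripts/rpc.py bdev_get_iostat -b Malloc_QD)
  echo "$stats" | jq -r '.bdevs[0].queue_depth_polling_period'   # echoes the 10 ms period back
  echo "$stats" | jq -r '.bdevs[0].io_time'                      # ms the bdev had I/O outstanding
  echo "$stats" | jq -r '.bdevs[0].weighted_io_time'             # io_time weighted by queue depth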
00:13:15.151 02:36:40 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:15.151 02:36:40 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:15.152 02:36:40 -- bdev/blockdev.sh@539 -- # QD_PID=123824 00:13:15.152 02:36:40 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 123824' 00:13:15.152 02:36:40 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:15.152 02:36:40 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:15.152 02:36:40 -- bdev/blockdev.sh@542 -- # waitforlisten 123824 00:13:15.152 02:36:40 -- common/autotest_common.sh@819 -- # '[' -z 123824 ']' 00:13:15.152 02:36:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.152 02:36:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:15.152 02:36:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.152 02:36:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:15.152 02:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:15.152 [2024-07-11 02:36:40.186198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:15.152 [2024-07-11 02:36:40.186626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123824 ] 00:13:15.410 [2024-07-11 02:36:40.335939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.410 [2024-07-11 02:36:40.410346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.410 [2024-07-11 02:36:40.410356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.347 02:36:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:16.347 02:36:41 -- common/autotest_common.sh@852 -- # return 0 00:13:16.347 02:36:41 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:16.347 02:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.347 02:36:41 -- common/autotest_common.sh@10 -- # set +x 00:13:16.347 Malloc_QD 00:13:16.347 02:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.347 02:36:41 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:16.347 02:36:41 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:16.347 02:36:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:16.347 02:36:41 -- common/autotest_common.sh@889 -- # local i 00:13:16.347 02:36:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:16.347 02:36:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:16.347 02:36:41 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:16.347 02:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.347 02:36:41 -- common/autotest_common.sh@10 -- # set +x 00:13:16.347 02:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.347 02:36:41 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:16.347 02:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.347 02:36:41 -- common/autotest_common.sh@10 -- # set +x 00:13:16.347 [ 00:13:16.347 { 00:13:16.347 "name": "Malloc_QD", 00:13:16.347 "aliases": [ 00:13:16.347 
"b1277ef7-cdb0-46fc-9d7d-0d0bde4307cb" 00:13:16.347 ], 00:13:16.347 "product_name": "Malloc disk", 00:13:16.347 "block_size": 512, 00:13:16.347 "num_blocks": 262144, 00:13:16.347 "uuid": "b1277ef7-cdb0-46fc-9d7d-0d0bde4307cb", 00:13:16.347 "assigned_rate_limits": { 00:13:16.347 "rw_ios_per_sec": 0, 00:13:16.347 "rw_mbytes_per_sec": 0, 00:13:16.347 "r_mbytes_per_sec": 0, 00:13:16.347 "w_mbytes_per_sec": 0 00:13:16.347 }, 00:13:16.347 "claimed": false, 00:13:16.347 "zoned": false, 00:13:16.347 "supported_io_types": { 00:13:16.347 "read": true, 00:13:16.347 "write": true, 00:13:16.347 "unmap": true, 00:13:16.347 "write_zeroes": true, 00:13:16.347 "flush": true, 00:13:16.347 "reset": true, 00:13:16.347 "compare": false, 00:13:16.347 "compare_and_write": false, 00:13:16.347 "abort": true, 00:13:16.347 "nvme_admin": false, 00:13:16.347 "nvme_io": false 00:13:16.347 }, 00:13:16.347 "memory_domains": [ 00:13:16.347 { 00:13:16.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.347 "dma_device_type": 2 00:13:16.347 } 00:13:16.347 ], 00:13:16.347 "driver_specific": {} 00:13:16.347 } 00:13:16.347 ] 00:13:16.347 02:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.347 02:36:41 -- common/autotest_common.sh@895 -- # return 0 00:13:16.347 02:36:41 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:16.347 02:36:41 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.347 Running I/O for 5 seconds... 00:13:18.251 02:36:43 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:18.251 02:36:43 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:18.251 02:36:43 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:18.251 02:36:43 -- bdev/blockdev.sh@519 -- # local iostats 00:13:18.251 02:36:43 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:18.251 02:36:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.251 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.251 02:36:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.251 02:36:43 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:18.251 02:36:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.251 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.251 02:36:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.251 02:36:43 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:18.251 "tick_rate": 2200000000, 00:13:18.251 "ticks": 1523340104603, 00:13:18.251 "bdevs": [ 00:13:18.251 { 00:13:18.251 "name": "Malloc_QD", 00:13:18.251 "bytes_read": 948998656, 00:13:18.251 "num_read_ops": 231683, 00:13:18.251 "bytes_written": 0, 00:13:18.251 "num_write_ops": 0, 00:13:18.251 "bytes_unmapped": 0, 00:13:18.251 "num_unmap_ops": 0, 00:13:18.251 "bytes_copied": 0, 00:13:18.251 "num_copy_ops": 0, 00:13:18.251 "read_latency_ticks": 2154531211487, 00:13:18.251 "max_read_latency_ticks": 14456040, 00:13:18.251 "min_read_latency_ticks": 353828, 00:13:18.251 "write_latency_ticks": 0, 00:13:18.251 "max_write_latency_ticks": 0, 00:13:18.251 "min_write_latency_ticks": 0, 00:13:18.251 "unmap_latency_ticks": 0, 00:13:18.251 "max_unmap_latency_ticks": 0, 00:13:18.251 "min_unmap_latency_ticks": 0, 00:13:18.251 "copy_latency_ticks": 0, 00:13:18.251 "max_copy_latency_ticks": 0, 00:13:18.251 "min_copy_latency_ticks": 0, 00:13:18.251 "io_error": {}, 00:13:18.251 "queue_depth_polling_period": 10, 00:13:18.251 "queue_depth": 512, 00:13:18.251 "io_time": 30, 00:13:18.251 
"weighted_io_time": 15360 00:13:18.251 } 00:13:18.251 ] 00:13:18.251 }' 00:13:18.251 02:36:43 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:18.251 02:36:43 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:18.251 02:36:43 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:18.251 02:36:43 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:18.251 02:36:43 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:18.251 02:36:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.251 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.251 00:13:18.251 Latency(us) 00:13:18.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.251 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:18.251 Malloc_QD : 2.00 58816.99 229.75 0.00 0.00 4342.13 1154.33 6583.39 00:13:18.251 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:18.251 Malloc_QD : 2.00 61850.26 241.60 0.00 0.00 4129.27 785.69 5391.83 00:13:18.251 =================================================================================================================== 00:13:18.251 Total : 120667.25 471.36 0.00 0.00 4232.99 785.69 6583.39 00:13:18.510 0 00:13:18.510 02:36:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.510 02:36:43 -- bdev/blockdev.sh@552 -- # killprocess 123824 00:13:18.510 02:36:43 -- common/autotest_common.sh@926 -- # '[' -z 123824 ']' 00:13:18.510 02:36:43 -- common/autotest_common.sh@930 -- # kill -0 123824 00:13:18.510 02:36:43 -- common/autotest_common.sh@931 -- # uname 00:13:18.510 02:36:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.510 02:36:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123824 00:13:18.510 killing process with pid 123824 00:13:18.510 Received shutdown signal, test time was about 2.048931 seconds 00:13:18.510 00:13:18.510 Latency(us) 00:13:18.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.510 =================================================================================================================== 00:13:18.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.510 02:36:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.510 02:36:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.510 02:36:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123824' 00:13:18.510 02:36:43 -- common/autotest_common.sh@945 -- # kill 123824 00:13:18.510 02:36:43 -- common/autotest_common.sh@950 -- # wait 123824 00:13:18.767 ************************************ 00:13:18.767 END TEST bdev_qd_sampling 00:13:18.767 ************************************ 00:13:18.767 02:36:43 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:18.767 00:13:18.767 real 0m3.475s 00:13:18.767 user 0m6.771s 00:13:18.767 sys 0m0.378s 00:13:18.767 02:36:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.767 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 02:36:43 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:18.767 02:36:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:18.767 02:36:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.767 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 ************************************ 00:13:18.767 START TEST bdev_error 00:13:18.767 
************************************ 00:13:18.767 Process error testing pid: 123906 00:13:18.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.767 02:36:43 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:18.767 02:36:43 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:18.767 02:36:43 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:18.767 02:36:43 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:18.767 02:36:43 -- bdev/blockdev.sh@470 -- # ERR_PID=123906 00:13:18.767 02:36:43 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 123906' 00:13:18.767 02:36:43 -- bdev/blockdev.sh@472 -- # waitforlisten 123906 00:13:18.767 02:36:43 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:18.767 02:36:43 -- common/autotest_common.sh@819 -- # '[' -z 123906 ']' 00:13:18.767 02:36:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.767 02:36:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:18.767 02:36:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.767 02:36:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:18.767 02:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 [2024-07-11 02:36:43.719314] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:18.767 [2024-07-11 02:36:43.719816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123906 ] 00:13:19.025 [2024-07-11 02:36:43.866317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.025 [2024-07-11 02:36:43.934460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.591 02:36:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:19.591 02:36:44 -- common/autotest_common.sh@852 -- # return 0 00:13:19.591 02:36:44 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:19.591 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.591 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 Dev_1 00:13:19.591 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.591 02:36:44 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:19.591 02:36:44 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:19.591 02:36:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:19.591 02:36:44 -- common/autotest_common.sh@889 -- # local i 00:13:19.591 02:36:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:19.591 02:36:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:19.591 02:36:44 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:19.591 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.591 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.591 02:36:44 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:19.591 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.591 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 [ 00:13:19.591 { 00:13:19.591 "name": "Dev_1", 00:13:19.591 
"aliases": [ 00:13:19.591 "c8824456-aa0d-4c21-a2ef-bd8eb22bc30b" 00:13:19.591 ], 00:13:19.591 "product_name": "Malloc disk", 00:13:19.591 "block_size": 512, 00:13:19.591 "num_blocks": 262144, 00:13:19.591 "uuid": "c8824456-aa0d-4c21-a2ef-bd8eb22bc30b", 00:13:19.591 "assigned_rate_limits": { 00:13:19.591 "rw_ios_per_sec": 0, 00:13:19.591 "rw_mbytes_per_sec": 0, 00:13:19.591 "r_mbytes_per_sec": 0, 00:13:19.591 "w_mbytes_per_sec": 0 00:13:19.591 }, 00:13:19.591 "claimed": false, 00:13:19.591 "zoned": false, 00:13:19.591 "supported_io_types": { 00:13:19.591 "read": true, 00:13:19.591 "write": true, 00:13:19.591 "unmap": true, 00:13:19.591 "write_zeroes": true, 00:13:19.591 "flush": true, 00:13:19.591 "reset": true, 00:13:19.591 "compare": false, 00:13:19.591 "compare_and_write": false, 00:13:19.591 "abort": true, 00:13:19.591 "nvme_admin": false, 00:13:19.591 "nvme_io": false 00:13:19.591 }, 00:13:19.591 "memory_domains": [ 00:13:19.591 { 00:13:19.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.591 "dma_device_type": 2 00:13:19.591 } 00:13:19.591 ], 00:13:19.591 "driver_specific": {} 00:13:19.591 } 00:13:19.591 ] 00:13:19.591 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.591 02:36:44 -- common/autotest_common.sh@895 -- # return 0 00:13:19.591 02:36:44 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:19.591 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.591 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 true 00:13:19.850 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.850 02:36:44 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:19.850 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.850 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 Dev_2 00:13:19.850 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.850 02:36:44 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:19.850 02:36:44 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:19.850 02:36:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:19.850 02:36:44 -- common/autotest_common.sh@889 -- # local i 00:13:19.850 02:36:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:19.850 02:36:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:19.850 02:36:44 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:19.850 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.850 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.850 02:36:44 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:19.850 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.850 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 [ 00:13:19.850 { 00:13:19.850 "name": "Dev_2", 00:13:19.850 "aliases": [ 00:13:19.850 "1965a82c-882e-4761-9c4c-ce521f348d8d" 00:13:19.850 ], 00:13:19.850 "product_name": "Malloc disk", 00:13:19.850 "block_size": 512, 00:13:19.850 "num_blocks": 262144, 00:13:19.850 "uuid": "1965a82c-882e-4761-9c4c-ce521f348d8d", 00:13:19.850 "assigned_rate_limits": { 00:13:19.850 "rw_ios_per_sec": 0, 00:13:19.850 "rw_mbytes_per_sec": 0, 00:13:19.850 "r_mbytes_per_sec": 0, 00:13:19.850 "w_mbytes_per_sec": 0 00:13:19.850 }, 00:13:19.850 "claimed": false, 00:13:19.850 "zoned": false, 00:13:19.850 "supported_io_types": { 00:13:19.850 "read": 
true, 00:13:19.850 "write": true, 00:13:19.850 "unmap": true, 00:13:19.850 "write_zeroes": true, 00:13:19.850 "flush": true, 00:13:19.850 "reset": true, 00:13:19.850 "compare": false, 00:13:19.850 "compare_and_write": false, 00:13:19.850 "abort": true, 00:13:19.850 "nvme_admin": false, 00:13:19.850 "nvme_io": false 00:13:19.850 }, 00:13:19.850 "memory_domains": [ 00:13:19.850 { 00:13:19.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.851 "dma_device_type": 2 00:13:19.851 } 00:13:19.851 ], 00:13:19.851 "driver_specific": {} 00:13:19.851 } 00:13:19.851 ] 00:13:19.851 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.851 02:36:44 -- common/autotest_common.sh@895 -- # return 0 00:13:19.851 02:36:44 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:19.851 02:36:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.851 02:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.851 02:36:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.851 02:36:44 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:19.851 02:36:44 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:19.851 Running I/O for 5 seconds... 00:13:20.786 Process still exists, as continue on error is set. Pid: 123906 00:13:20.787 02:36:45 -- bdev/blockdev.sh@485 -- # kill -0 123906 00:13:20.787 02:36:45 -- bdev/blockdev.sh@486 -- # echo 'Process still exists, as continue on error is set. Pid: 123906' 00:13:20.787 02:36:45 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:20.787 02:36:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.787 02:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:20.787 02:36:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.787 02:36:45 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:20.787 02:36:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.787 02:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:20.787 02:36:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.787 02:36:45 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:20.787 Timeout while waiting for response: 00:13:20.787 00:13:20.787 00:13:24.973 00:13:24.973 Latency(us) 00:13:24.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.973 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:24.973 EE_Dev_1 : 0.91 44816.17 175.06 5.51 0.00 354.37 148.95 733.56 00:13:24.973 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:24.974 Dev_2 : 5.00 99238.64 387.65 0.00 0.00 158.71 74.47 24188.74 00:13:24.974 =================================================================================================================== 00:13:24.974 Total : 144054.81 562.71 5.51 0.00 173.54 74.47 24188.74 00:13:25.907 02:36:50 -- bdev/blockdev.sh@497 -- # killprocess 123906 00:13:25.907 02:36:50 -- common/autotest_common.sh@926 -- # '[' -z 123906 ']' 00:13:25.907 02:36:50 -- common/autotest_common.sh@930 -- # kill -0 123906 00:13:25.907 02:36:50 -- common/autotest_common.sh@931 -- # uname 00:13:25.907 02:36:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:25.907 02:36:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123906 00:13:25.907 02:36:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:25.908 02:36:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:25.908 02:36:50
-- common/autotest_common.sh@944 -- # echo 'killing process with pid 123906' 00:13:25.908 killing process with pid 123906 00:13:25.908 Received shutdown signal, test time was about 5.000000 seconds 00:13:25.908 00:13:25.908 Latency(us) 00:13:25.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.908 =================================================================================================================== 00:13:25.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.908 02:36:50 -- common/autotest_common.sh@945 -- # kill 123906 00:13:25.908 02:36:50 -- common/autotest_common.sh@950 -- # wait 123906 00:13:26.166 02:36:51 -- bdev/blockdev.sh@501 -- # ERR_PID=124023 00:13:26.166 02:36:51 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:26.166 02:36:51 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 124023' 00:13:26.166 Process error testing pid: 124023 00:13:26.166 02:36:51 -- bdev/blockdev.sh@503 -- # waitforlisten 124023 00:13:26.166 02:36:51 -- common/autotest_common.sh@819 -- # '[' -z 124023 ']' 00:13:26.166 02:36:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.166 02:36:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:26.166 02:36:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.166 02:36:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:26.166 02:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:26.166 [2024-07-11 02:36:51.168762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
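[Editor's note: the bdev_error run that just ended (pid 123906) and the one starting here (pid 124023) share the same error-bdev plumbing; a minimal sketch of that RPC sequence, with names taken from the trace (bdev_error_create stacks an EE_-prefixed error bdev on top of the base bdev) and assuming a bdevperf instance on the default socket.]
  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py bdev_malloc_create -b Dev_1 128 512                  # 128 MiB backing bdev, 512 B blocks
  scripts/rpc.py bdev_error_create Dev_1                              # exposes EE_Dev_1 wrapping Dev_1
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5    # fail the next 5 I/Os of any type
  # Teardown mirrors the trace: error bdev first, then its base.
  scripts/rpc.py bdev_error_delete EE_Dev_1
  scripts/rpc.py bdev_malloc_delete Dev_1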
00:13:26.166 [2024-07-11 02:36:51.169217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124023 ] 00:13:26.424 [2024-07-11 02:36:51.313480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.424 [2024-07-11 02:36:51.383555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.359 02:36:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:27.360 02:36:52 -- common/autotest_common.sh@852 -- # return 0 00:13:27.360 02:36:52 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 Dev_1 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:27.360 02:36:52 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:27.360 02:36:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:27.360 02:36:52 -- common/autotest_common.sh@889 -- # local i 00:13:27.360 02:36:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:27.360 02:36:52 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 [ 00:13:27.360 { 00:13:27.360 "name": "Dev_1", 00:13:27.360 "aliases": [ 00:13:27.360 "9c1a2867-ef23-42b4-9386-fbe9352ecb80" 00:13:27.360 ], 00:13:27.360 "product_name": "Malloc disk", 00:13:27.360 "block_size": 512, 00:13:27.360 "num_blocks": 262144, 00:13:27.360 "uuid": "9c1a2867-ef23-42b4-9386-fbe9352ecb80", 00:13:27.360 "assigned_rate_limits": { 00:13:27.360 "rw_ios_per_sec": 0, 00:13:27.360 "rw_mbytes_per_sec": 0, 00:13:27.360 "r_mbytes_per_sec": 0, 00:13:27.360 "w_mbytes_per_sec": 0 00:13:27.360 }, 00:13:27.360 "claimed": false, 00:13:27.360 "zoned": false, 00:13:27.360 "supported_io_types": { 00:13:27.360 "read": true, 00:13:27.360 "write": true, 00:13:27.360 "unmap": true, 00:13:27.360 "write_zeroes": true, 00:13:27.360 "flush": true, 00:13:27.360 "reset": true, 00:13:27.360 "compare": false, 00:13:27.360 "compare_and_write": false, 00:13:27.360 "abort": true, 00:13:27.360 "nvme_admin": false, 00:13:27.360 "nvme_io": false 00:13:27.360 }, 00:13:27.360 "memory_domains": [ 00:13:27.360 { 00:13:27.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.360 "dma_device_type": 2 00:13:27.360 } 00:13:27.360 ], 00:13:27.360 "driver_specific": {} 00:13:27.360 } 00:13:27.360 ] 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@895 -- # return 0 00:13:27.360 02:36:52 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 true 
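[Editor's note: unlike the first run, this bdevperf was launched without -f, which from the "continue on error is set" message above appears to be bdevperf's continue-on-error switch, so once EE_Dev_1 starts failing I/Os the perform_tests RPC itself is expected to abort (seen below as JSON-RPC error -32603, "bdevperf failed with error Operation not permitted"). The NOT/wait dance in the trace asserts that failure; a plainer equivalent under the same assumptions:]
  # perform_tests must FAIL here; a clean exit would mean injection did not bite.
  if /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests; then
      echo "expected perform_tests to fail once error injection kicked in" >&2
      exit 1
  fi
  echo "perform_tests failed as expected"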
00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 Dev_2 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:27.360 02:36:52 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:27.360 02:36:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:27.360 02:36:52 -- common/autotest_common.sh@889 -- # local i 00:13:27.360 02:36:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:27.360 02:36:52 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 [ 00:13:27.360 { 00:13:27.360 "name": "Dev_2", 00:13:27.360 "aliases": [ 00:13:27.360 "74bda35f-541b-46bb-8195-c71103ff5ff1" 00:13:27.360 ], 00:13:27.360 "product_name": "Malloc disk", 00:13:27.360 "block_size": 512, 00:13:27.360 "num_blocks": 262144, 00:13:27.360 "uuid": "74bda35f-541b-46bb-8195-c71103ff5ff1", 00:13:27.360 "assigned_rate_limits": { 00:13:27.360 "rw_ios_per_sec": 0, 00:13:27.360 "rw_mbytes_per_sec": 0, 00:13:27.360 "r_mbytes_per_sec": 0, 00:13:27.360 "w_mbytes_per_sec": 0 00:13:27.360 }, 00:13:27.360 "claimed": false, 00:13:27.360 "zoned": false, 00:13:27.360 "supported_io_types": { 00:13:27.360 "read": true, 00:13:27.360 "write": true, 00:13:27.360 "unmap": true, 00:13:27.360 "write_zeroes": true, 00:13:27.360 "flush": true, 00:13:27.360 "reset": true, 00:13:27.360 "compare": false, 00:13:27.360 "compare_and_write": false, 00:13:27.360 "abort": true, 00:13:27.360 "nvme_admin": false, 00:13:27.360 "nvme_io": false 00:13:27.360 }, 00:13:27.360 "memory_domains": [ 00:13:27.360 { 00:13:27.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.360 "dma_device_type": 2 00:13:27.360 } 00:13:27.360 ], 00:13:27.360 "driver_specific": {} 00:13:27.360 } 00:13:27.360 ] 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- common/autotest_common.sh@895 -- # return 0 00:13:27.360 02:36:52 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:27.360 02:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.360 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 02:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.360 02:36:52 -- bdev/blockdev.sh@513 -- # NOT wait 124023 00:13:27.360 02:36:52 -- common/autotest_common.sh@640 -- # local es=0 00:13:27.360 02:36:52 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:27.360 02:36:52 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 124023 00:13:27.360 02:36:52 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:27.360 02:36:52 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:27.360 02:36:52 -- common/autotest_common.sh@632 -- # type -t wait 00:13:27.360 02:36:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:27.360 02:36:52 -- common/autotest_common.sh@643 -- # wait 124023 00:13:27.360 Running I/O for 5 seconds... 00:13:27.360 task offset: 148440 on job bdev=EE_Dev_1 fails 00:13:27.360 00:13:27.360 Latency(us) 00:13:27.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.360 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:27.360 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:27.360 EE_Dev_1 : 0.00 24608.50 96.13 5592.84 0.00 441.58 172.22 796.86 00:13:27.360 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:27.360 Dev_2 : 0.00 19500.30 76.17 0.00 0.00 523.55 162.91 949.53 00:13:27.360 =================================================================================================================== 00:13:27.360 Total : 44108.81 172.30 5592.84 0.00 486.04 162.91 949.53 00:13:27.360 [2024-07-11 02:36:52.363024] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:27.360 request: 00:13:27.360 { 00:13:27.360 "method": "perform_tests", 00:13:27.360 "req_id": 1 00:13:27.360 } 00:13:27.360 Got JSON-RPC error response 00:13:27.360 response: 00:13:27.360 { 00:13:27.360 "code": -32603, 00:13:27.360 "message": "bdevperf failed with error Operation not permitted" 00:13:27.360 } 00:13:27.927 02:36:52 -- common/autotest_common.sh@643 -- # es=255 00:13:27.927 02:36:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:27.927 02:36:52 -- common/autotest_common.sh@652 -- # es=127 00:13:27.927 02:36:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:27.927 02:36:52 -- common/autotest_common.sh@660 -- # es=1 00:13:27.927 02:36:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:27.927 00:13:27.927 real 0m9.049s 00:13:27.927 user 0m9.268s 00:13:27.927 sys 0m0.704s 00:13:27.927 02:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.927 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.927 ************************************ 00:13:27.927 END TEST bdev_error 00:13:27.927 ************************************ 00:13:27.927 02:36:52 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:27.927 02:36:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:27.927 02:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.927 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.927 ************************************ 00:13:27.927 START TEST bdev_stat 00:13:27.927 ************************************ 00:13:27.927 02:36:52 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:27.927 02:36:52 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:27.927 02:36:52 -- bdev/blockdev.sh@594 -- # STAT_PID=124069 00:13:27.927 02:36:52 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:27.927 Process Bdev IO statistics testing pid: 124069 00:13:27.927 02:36:52 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 124069' 00:13:27.927 02:36:52 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:27.927 02:36:52 -- bdev/blockdev.sh@597 -- # waitforlisten 124069 00:13:27.927 02:36:52 -- common/autotest_common.sh@819 -- # 
'[' -z 124069 ']' 00:13:27.927 02:36:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.927 02:36:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:27.927 02:36:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.927 02:36:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:27.927 02:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.927 [2024-07-11 02:36:52.817796] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:27.927 [2024-07-11 02:36:52.818218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124069 ] 00:13:27.927 [2024-07-11 02:36:52.968110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.186 [2024-07-11 02:36:53.051235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.186 [2024-07-11 02:36:53.051246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.753 02:36:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.753 02:36:53 -- common/autotest_common.sh@852 -- # return 0 00:13:28.753 02:36:53 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:28.753 02:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.753 02:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:29.010 Malloc_STAT 00:13:29.010 02:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.010 02:36:53 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:29.010 02:36:53 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:29.010 02:36:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:29.010 02:36:53 -- common/autotest_common.sh@889 -- # local i 00:13:29.010 02:36:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:29.010 02:36:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:29.010 02:36:53 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:29.010 02:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.010 02:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:29.010 02:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.010 02:36:53 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:29.010 02:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.010 02:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:29.010 [ 00:13:29.010 { 00:13:29.010 "name": "Malloc_STAT", 00:13:29.010 "aliases": [ 00:13:29.010 "51046579-4705-4a6d-9eea-bbacada441b0" 00:13:29.010 ], 00:13:29.010 "product_name": "Malloc disk", 00:13:29.010 "block_size": 512, 00:13:29.010 "num_blocks": 262144, 00:13:29.010 "uuid": "51046579-4705-4a6d-9eea-bbacada441b0", 00:13:29.010 "assigned_rate_limits": { 00:13:29.010 "rw_ios_per_sec": 0, 00:13:29.010 "rw_mbytes_per_sec": 0, 00:13:29.010 "r_mbytes_per_sec": 0, 00:13:29.010 "w_mbytes_per_sec": 0 00:13:29.010 }, 00:13:29.010 "claimed": false, 00:13:29.010 "zoned": false, 00:13:29.010 "supported_io_types": { 00:13:29.010 "read": true, 00:13:29.010 "write": true, 00:13:29.010 "unmap": true, 00:13:29.010 "write_zeroes": true, 
00:13:29.010 "flush": true, 00:13:29.010 "reset": true, 00:13:29.010 "compare": false, 00:13:29.011 "compare_and_write": false, 00:13:29.011 "abort": true, 00:13:29.011 "nvme_admin": false, 00:13:29.011 "nvme_io": false 00:13:29.011 }, 00:13:29.011 "memory_domains": [ 00:13:29.011 { 00:13:29.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.011 "dma_device_type": 2 00:13:29.011 } 00:13:29.011 ], 00:13:29.011 "driver_specific": {} 00:13:29.011 } 00:13:29.011 ] 00:13:29.011 02:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.011 02:36:53 -- common/autotest_common.sh@895 -- # return 0 00:13:29.011 02:36:53 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:29.011 02:36:53 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:29.011 Running I/O for 10 seconds... 00:13:30.913 02:36:55 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:30.913 02:36:55 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:30.913 02:36:55 -- bdev/blockdev.sh@558 -- # local iostats 00:13:30.913 02:36:55 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:30.913 02:36:55 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:30.913 02:36:55 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:30.913 02:36:55 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:30.913 02:36:55 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:30.913 02:36:55 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:30.913 02:36:55 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:30.913 02:36:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.913 02:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.913 02:36:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.913 02:36:55 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:30.913 "tick_rate": 2200000000, 00:13:30.913 "ticks": 1551158366533, 00:13:30.913 "bdevs": [ 00:13:30.913 { 00:13:30.913 "name": "Malloc_STAT", 00:13:30.913 "bytes_read": 485528064, 00:13:30.913 "num_read_ops": 118531, 00:13:30.913 "bytes_written": 0, 00:13:30.913 "num_write_ops": 0, 00:13:30.913 "bytes_unmapped": 0, 00:13:30.913 "num_unmap_ops": 0, 00:13:30.913 "bytes_copied": 0, 00:13:30.913 "num_copy_ops": 0, 00:13:30.913 "read_latency_ticks": 2146283234588, 00:13:30.913 "max_read_latency_ticks": 24012874, 00:13:30.913 "min_read_latency_ticks": 665824, 00:13:30.913 "write_latency_ticks": 0, 00:13:30.913 "max_write_latency_ticks": 0, 00:13:30.913 "min_write_latency_ticks": 0, 00:13:30.913 "unmap_latency_ticks": 0, 00:13:30.913 "max_unmap_latency_ticks": 0, 00:13:30.913 "min_unmap_latency_ticks": 0, 00:13:30.913 "copy_latency_ticks": 0, 00:13:30.913 "max_copy_latency_ticks": 0, 00:13:30.913 "min_copy_latency_ticks": 0, 00:13:30.913 "io_error": {} 00:13:30.913 } 00:13:30.913 ] 00:13:30.913 }' 00:13:30.913 02:36:55 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:30.913 02:36:55 -- bdev/blockdev.sh@567 -- # io_count1=118531 00:13:30.913 02:36:55 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:30.913 02:36:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.913 02:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.913 02:36:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.913 02:36:55 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:30.913 "tick_rate": 2200000000, 00:13:30.913 "ticks": 1551351555225, 00:13:30.913 "name": "Malloc_STAT", 
00:13:30.913 "channels": [ 00:13:30.913 { 00:13:30.913 "thread_id": 2, 00:13:30.913 "bytes_read": 251658240, 00:13:30.913 "num_read_ops": 61440, 00:13:30.913 "bytes_written": 0, 00:13:30.913 "num_write_ops": 0, 00:13:30.913 "bytes_unmapped": 0, 00:13:30.913 "num_unmap_ops": 0, 00:13:30.913 "bytes_copied": 0, 00:13:30.913 "num_copy_ops": 0, 00:13:30.913 "read_latency_ticks": 1121691481074, 00:13:30.913 "max_read_latency_ticks": 24012874, 00:13:30.913 "min_read_latency_ticks": 14987128, 00:13:30.913 "write_latency_ticks": 0, 00:13:30.913 "max_write_latency_ticks": 0, 00:13:30.913 "min_write_latency_ticks": 0, 00:13:30.913 "unmap_latency_ticks": 0, 00:13:30.913 "max_unmap_latency_ticks": 0, 00:13:30.913 "min_unmap_latency_ticks": 0, 00:13:30.913 "copy_latency_ticks": 0, 00:13:30.913 "max_copy_latency_ticks": 0, 00:13:30.913 "min_copy_latency_ticks": 0 00:13:30.913 }, 00:13:30.913 { 00:13:30.913 "thread_id": 3, 00:13:30.913 "bytes_read": 254803968, 00:13:30.913 "num_read_ops": 62208, 00:13:30.913 "bytes_written": 0, 00:13:30.913 "num_write_ops": 0, 00:13:30.913 "bytes_unmapped": 0, 00:13:30.913 "num_unmap_ops": 0, 00:13:30.913 "bytes_copied": 0, 00:13:30.913 "num_copy_ops": 0, 00:13:30.913 "read_latency_ticks": 1123793753525, 00:13:30.913 "max_read_latency_ticks": 22452064, 00:13:30.913 "min_read_latency_ticks": 10211591, 00:13:30.913 "write_latency_ticks": 0, 00:13:30.913 "max_write_latency_ticks": 0, 00:13:30.913 "min_write_latency_ticks": 0, 00:13:30.913 "unmap_latency_ticks": 0, 00:13:30.913 "max_unmap_latency_ticks": 0, 00:13:30.913 "min_unmap_latency_ticks": 0, 00:13:30.913 "copy_latency_ticks": 0, 00:13:30.913 "max_copy_latency_ticks": 0, 00:13:30.913 "min_copy_latency_ticks": 0 00:13:30.913 } 00:13:30.913 ] 00:13:30.913 }' 00:13:30.913 02:36:55 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=61440 00:13:31.172 02:36:56 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=61440 00:13:31.172 02:36:56 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=62208 00:13:31.172 02:36:56 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=123648 00:13:31.172 02:36:56 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:31.172 02:36:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.172 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.172 02:36:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.172 02:36:56 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:31.172 "tick_rate": 2200000000, 00:13:31.172 "ticks": 1551643490379, 00:13:31.172 "bdevs": [ 00:13:31.172 { 00:13:31.172 "name": "Malloc_STAT", 00:13:31.172 "bytes_read": 541102592, 00:13:31.172 "num_read_ops": 132099, 00:13:31.172 "bytes_written": 0, 00:13:31.172 "num_write_ops": 0, 00:13:31.172 "bytes_unmapped": 0, 00:13:31.172 "num_unmap_ops": 0, 00:13:31.172 "bytes_copied": 0, 00:13:31.172 "num_copy_ops": 0, 00:13:31.172 "read_latency_ticks": 2394668500171, 00:13:31.172 "max_read_latency_ticks": 24012874, 00:13:31.172 "min_read_latency_ticks": 665824, 00:13:31.172 "write_latency_ticks": 0, 00:13:31.172 "max_write_latency_ticks": 0, 00:13:31.172 "min_write_latency_ticks": 0, 00:13:31.172 "unmap_latency_ticks": 0, 00:13:31.172 "max_unmap_latency_ticks": 0, 00:13:31.172 "min_unmap_latency_ticks": 0, 00:13:31.172 "copy_latency_ticks": 0, 00:13:31.172 "max_copy_latency_ticks": 0, 00:13:31.172 "min_copy_latency_ticks": 
0, 00:13:31.172 "io_error": {} 00:13:31.172 } 00:13:31.172 ] 00:13:31.172 }' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@576 -- # io_count2=132099 00:13:31.172 02:36:56 -- bdev/blockdev.sh@581 -- # '[' 123648 -lt 118531 ']' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@581 -- # '[' 123648 -gt 132099 ']' 00:13:31.172 02:36:56 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:31.172 02:36:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.172 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.172 00:13:31.172 Latency(us) 00:13:31.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.173 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:31.173 Malloc_STAT : 2.21 30779.52 120.23 0.00 0.00 8289.78 1936.29 10962.39 00:13:31.173 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:31.173 Malloc_STAT : 2.21 31351.64 122.47 0.00 0.00 8144.66 1333.06 10247.45 00:13:31.173 =================================================================================================================== 00:13:31.173 Total : 62131.16 242.70 0.00 0.00 8216.55 1333.06 10962.39 00:13:31.173 0 00:13:31.173 02:36:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.173 02:36:56 -- bdev/blockdev.sh@607 -- # killprocess 124069 00:13:31.173 02:36:56 -- common/autotest_common.sh@926 -- # '[' -z 124069 ']' 00:13:31.173 02:36:56 -- common/autotest_common.sh@930 -- # kill -0 124069 00:13:31.173 02:36:56 -- common/autotest_common.sh@931 -- # uname 00:13:31.173 02:36:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:31.173 02:36:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124069 00:13:31.173 02:36:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:31.173 killing process with pid 124069 00:13:31.173 Received shutdown signal, test time was about 2.278125 seconds 00:13:31.173 00:13:31.173 Latency(us) 00:13:31.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.173 =================================================================================================================== 00:13:31.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:31.173 02:36:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:31.173 02:36:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124069' 00:13:31.173 02:36:56 -- common/autotest_common.sh@945 -- # kill 124069 00:13:31.173 02:36:56 -- common/autotest_common.sh@950 -- # wait 124069 00:13:31.740 ************************************ 00:13:31.740 END TEST bdev_stat 00:13:31.740 ************************************ 00:13:31.740 02:36:56 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:31.740 00:13:31.740 real 0m3.813s 00:13:31.740 user 0m7.578s 00:13:31.740 sys 0m0.363s 00:13:31.740 02:36:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.740 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.740 02:36:56 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:31.740 02:36:56 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:31.741 02:36:56 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:31.741 02:36:56 -- bdev/blockdev.sh@809 -- # cleanup 00:13:31.741 02:36:56 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:31.741 02:36:56 -- bdev/blockdev.sh@22 -- 
# rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:31.741 02:36:56 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:31.741 02:36:56 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:31.741 02:36:56 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:31.741 02:36:56 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:31.741 ************************************ 00:13:31.741 END TEST blockdev_general 00:13:31.741 ************************************ 00:13:31.741 00:13:31.741 real 1m56.905s 00:13:31.741 user 5m17.284s 00:13:31.741 sys 0m20.622s 00:13:31.741 02:36:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.741 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.741 02:36:56 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:31.741 02:36:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:31.741 02:36:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.741 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.741 ************************************ 00:13:31.741 START TEST bdev_raid 00:13:31.741 ************************************ 00:13:31.741 02:36:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:31.741 * Looking for test storage... 00:13:31.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:31.741 02:36:56 -- bdev/nbd_common.sh@6 -- # set -e 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:31.741 02:36:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:31.741 02:36:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.741 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.741 ************************************ 00:13:31.741 START TEST raid_function_test_raid0 00:13:31.741 ************************************ 00:13:31.741 02:36:56 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@86 -- # raid_pid=124231 00:13:31.741 Process raid pid: 124231 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 124231' 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@88 -- # waitforlisten 124231 /var/tmp/spdk-raid.sock 00:13:31.741 02:36:56 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:31.741 02:36:56 -- common/autotest_common.sh@819 -- # '[' -z 124231 ']' 00:13:31.741 02:36:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.741 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.741 02:36:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.741 02:36:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.741 02:36:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.741 02:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:31.741 [2024-07-11 02:36:56.816479] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:31.741 [2024-07-11 02:36:56.816730] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.999 [2024-07-11 02:36:56.965310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.999 [2024-07-11 02:36:57.035849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.999 [2024-07-11 02:36:57.087143] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.935 02:36:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.935 02:36:57 -- common/autotest_common.sh@852 -- # return 0 00:13:32.935 02:36:57 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:32.935 02:36:57 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:32.935 02:36:57 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:32.935 02:36:57 -- bdev/bdev_raid.sh@70 -- # cat 00:13:32.935 02:36:57 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:33.193 [2024-07-11 02:36:58.104526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:33.193 [2024-07-11 02:36:58.107466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:33.193 [2024-07-11 02:36:58.107567] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:13:33.193 [2024-07-11 02:36:58.107584] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:33.193 [2024-07-11 02:36:58.107828] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:13:33.193 [2024-07-11 02:36:58.108351] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:13:33.193 [2024-07-11 02:36:58.108380] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006380 00:13:33.193 [2024-07-11 02:36:58.108653] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.193 Base_1 00:13:33.193 Base_2 00:13:33.193 02:36:58 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:33.193 02:36:58 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:33.193 02:36:58 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:33.451 02:36:58 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:33.451 02:36:58 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:33.451 02:36:58 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:33.451 
02:36:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@12 -- # local i 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.451 02:36:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:33.710 [2024-07-11 02:36:58.560974] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:33.710 /dev/nbd0 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.710 02:36:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:33.710 02:36:58 -- common/autotest_common.sh@857 -- # local i 00:13:33.710 02:36:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:33.710 02:36:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:33.710 02:36:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:33.710 02:36:58 -- common/autotest_common.sh@861 -- # break 00:13:33.710 02:36:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:33.710 02:36:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:33.710 02:36:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.710 1+0 records in 00:13:33.710 1+0 records out 00:13:33.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343841 s, 11.9 MB/s 00:13:33.710 02:36:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.710 02:36:58 -- common/autotest_common.sh@874 -- # size=4096 00:13:33.710 02:36:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.710 02:36:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:33.710 02:36:58 -- common/autotest_common.sh@877 -- # return 0 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.710 02:36:58 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:33.710 02:36:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:33.981 { 00:13:33.981 "nbd_device": "/dev/nbd0", 00:13:33.981 "bdev_name": "raid" 00:13:33.981 } 00:13:33.981 ]' 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:33.981 { 00:13:33.981 "nbd_device": "/dev/nbd0", 00:13:33.981 "bdev_name": "raid" 00:13:33.981 } 00:13:33.981 ]' 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@65 -- # count=1 00:13:33.981 02:36:58 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@103 -- # 
raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:33.981 4096+0 records in 00:13:33.981 4096+0 records out 00:13:33.981 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0284474 s, 73.7 MB/s 00:13:33.981 02:36:58 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:34.257 4096+0 records in 00:13:34.257 4096+0 records out 00:13:34.257 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.254663 s, 8.2 MB/s 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:34.257 128+0 records in 00:13:34.257 128+0 records out 00:13:34.257 65536 bytes (66 kB, 64 KiB) copied, 0.000870145 s, 75.3 MB/s 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:34.257 2035+0 records in 00:13:34.257 2035+0 records out 00:13:34.257 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00807331 s, 129 MB/s 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 
-- # (( i < 3 )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:34.257 456+0 records in 00:13:34.257 456+0 records out 00:13:34.257 233472 bytes (233 kB, 228 KiB) copied, 0.00223728 s, 104 MB/s 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:34.257 02:36:59 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@51 -- # local i 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.257 02:36:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.515 02:36:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:34.515 [2024-07-11 02:36:59.578766] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@41 -- # break 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.773 02:36:59 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.773 02:36:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:35.030 02:36:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.030 02:36:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.030 02:36:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@65 -- # true 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.030 02:37:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.030 02:37:00 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:35.030 02:37:00 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:35.030 02:37:00 -- bdev/bdev_raid.sh@111 -- # killprocess 124231 00:13:35.030 02:37:00 -- 
common/autotest_common.sh@926 -- # '[' -z 124231 ']' 00:13:35.030 02:37:00 -- common/autotest_common.sh@930 -- # kill -0 124231 00:13:35.030 02:37:00 -- common/autotest_common.sh@931 -- # uname 00:13:35.030 02:37:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.030 02:37:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124231 00:13:35.030 killing process with pid 124231 00:13:35.030 02:37:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:35.030 02:37:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:35.030 02:37:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124231' 00:13:35.030 02:37:00 -- common/autotest_common.sh@945 -- # kill 124231 00:13:35.030 02:37:00 -- common/autotest_common.sh@950 -- # wait 124231 00:13:35.030 [2024-07-11 02:37:00.043617] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.030 [2024-07-11 02:37:00.043882] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.030 [2024-07-11 02:37:00.043983] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.030 [2024-07-11 02:37:00.044144] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name raid, state offline 00:13:35.030 [2024-07-11 02:37:00.070895] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.596 ************************************ 00:13:35.596 END TEST raid_function_test_raid0 00:13:35.596 ************************************ 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:35.596 00:13:35.596 real 0m3.622s 00:13:35.596 user 0m5.010s 00:13:35.596 sys 0m0.867s 00:13:35.596 02:37:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.596 02:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:35.596 02:37:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:35.596 02:37:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:35.596 02:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:35.596 ************************************ 00:13:35.596 START TEST raid_function_test_concat 00:13:35.596 ************************************ 00:13:35.596 02:37:00 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:35.596 Process raid pid: 124375 00:13:35.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
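The concat run starting here drives the same raid_function_test flow just completed for raid0: create two base bdevs, assemble them into a raid bdev over the RPC socket, export the array through NBD, then verify that discarded regions read back as the zeroed reference file. Below is a minimal sketch of that flow, assuming a bdev_svc app already listening on /var/tmp/spdk-raid.sock; the Base_1/Base_2 names, the "raid" bdev name, /dev/nbd0, and the dd/blkdiscard/cmp offsets mirror the log above, while the bdev_malloc_create sizes and the -z 64 strip size for the concat level are assumptions for illustration.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Two small malloc bdevs to act as raid members (32 MiB / 512 B blocks assumed here)
    $rpc bdev_malloc_create 32 512 -b Base_1
    $rpc bdev_malloc_create 32 512 -b Base_2
    # Assemble them into a concat array named "raid"
    $rpc bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
    # Expose the array as a kernel block device so regular tools can verify its data
    $rpc nbd_start_disk raid /dev/nbd0
    # Seed a reference file and the device with identical random data
    dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
    dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
    # Zero one region in the reference file, discard the same region on the device,
    # flush, then compare: the discarded range must read back as zeroes
    dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
    blkdiscard -o 0 -l 65536 /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidrandtest /dev/nbd0

The log repeats the zero/discard/compare step for two further regions (offsets 526336 and 164352) before tearing the NBD device down.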
00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@86 -- # raid_pid=124375 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 124375' 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:35.596 02:37:00 -- bdev/bdev_raid.sh@88 -- # waitforlisten 124375 /var/tmp/spdk-raid.sock 00:13:35.596 02:37:00 -- common/autotest_common.sh@819 -- # '[' -z 124375 ']' 00:13:35.596 02:37:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:35.596 02:37:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.596 02:37:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:35.596 02:37:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.596 02:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:35.597 [2024-07-11 02:37:00.482528] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:35.597 [2024-07-11 02:37:00.482902] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.597 [2024-07-11 02:37:00.621582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.854 [2024-07-11 02:37:00.703441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.854 [2024-07-11 02:37:00.777215] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.420 02:37:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:36.420 02:37:01 -- common/autotest_common.sh@852 -- # return 0 00:13:36.420 02:37:01 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:36.420 02:37:01 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:36.420 02:37:01 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:36.420 02:37:01 -- bdev/bdev_raid.sh@70 -- # cat 00:13:36.420 02:37:01 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:36.678 [2024-07-11 02:37:01.650388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:36.678 [2024-07-11 02:37:01.653058] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:36.678 [2024-07-11 02:37:01.653251] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:13:36.678 [2024-07-11 02:37:01.653365] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:36.678 [2024-07-11 02:37:01.653577] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:13:36.678 [2024-07-11 02:37:01.654024] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:13:36.678 [2024-07-11 02:37:01.654165] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006380 00:13:36.678 [2024-07-11 02:37:01.654452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.678 Base_1 00:13:36.678 Base_2 00:13:36.678 02:37:01 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:36.678 02:37:01 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
online 00:13:36.678 02:37:01 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:36.937 02:37:01 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:36.937 02:37:01 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:36.937 02:37:01 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@12 -- # local i 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.937 02:37:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:37.195 [2024-07-11 02:37:02.126586] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:37.195 /dev/nbd0 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.195 02:37:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:37.195 02:37:02 -- common/autotest_common.sh@857 -- # local i 00:13:37.195 02:37:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:37.195 02:37:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:37.195 02:37:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:37.195 02:37:02 -- common/autotest_common.sh@861 -- # break 00:13:37.195 02:37:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:37.195 02:37:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:37.195 02:37:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.195 1+0 records in 00:13:37.195 1+0 records out 00:13:37.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429733 s, 9.5 MB/s 00:13:37.195 02:37:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.195 02:37:02 -- common/autotest_common.sh@874 -- # size=4096 00:13:37.195 02:37:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.195 02:37:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:37.195 02:37:02 -- common/autotest_common.sh@877 -- # return 0 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.195 02:37:02 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.195 02:37:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:37.454 { 00:13:37.454 "nbd_device": "/dev/nbd0", 00:13:37.454 "bdev_name": "raid" 00:13:37.454 } 00:13:37.454 ]' 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:37.454 { 00:13:37.454 "nbd_device": "/dev/nbd0", 00:13:37.454 "bdev_name": "raid" 00:13:37.454 } 00:13:37.454 ]' 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:13:37.454 02:37:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@65 -- # count=1 00:13:37.454 02:37:02 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:37.454 4096+0 records in 00:13:37.454 4096+0 records out 00:13:37.454 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0281584 s, 74.5 MB/s 00:13:37.454 02:37:02 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:37.712 4096+0 records in 00:13:37.712 4096+0 records out 00:13:37.712 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.259925 s, 8.1 MB/s 00:13:37.712 02:37:02 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:37.712 02:37:02 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:37.972 128+0 records in 00:13:37.972 128+0 records out 00:13:37.972 65536 bytes (66 kB, 64 KiB) copied, 0.00093287 s, 70.3 MB/s 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:13:37.972 2035+0 records in 00:13:37.972 2035+0 records out 00:13:37.972 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00702776 s, 148 MB/s 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:37.972 456+0 records in 00:13:37.972 456+0 records out 00:13:37.972 233472 bytes (233 kB, 228 KiB) copied, 0.00210455 s, 111 MB/s 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:37.972 02:37:02 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@51 -- # local i 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.972 02:37:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:38.230 [2024-07-11 02:37:03.139106] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.230 02:37:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@41 -- # break 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.231 02:37:03 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:38.231 02:37:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.489 02:37:03 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@65 -- # true 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.489 02:37:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.489 02:37:03 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:38.489 02:37:03 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:38.489 02:37:03 -- bdev/bdev_raid.sh@111 -- # killprocess 124375 00:13:38.489 02:37:03 -- common/autotest_common.sh@926 -- # '[' -z 124375 ']' 00:13:38.490 02:37:03 -- common/autotest_common.sh@930 -- # kill -0 124375 00:13:38.490 02:37:03 -- common/autotest_common.sh@931 -- # uname 00:13:38.490 02:37:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:38.490 02:37:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124375 00:13:38.490 killing process with pid 124375 00:13:38.490 02:37:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:38.490 02:37:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:38.490 02:37:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124375' 00:13:38.490 02:37:03 -- common/autotest_common.sh@945 -- # kill 124375 00:13:38.490 02:37:03 -- common/autotest_common.sh@950 -- # wait 124375 00:13:38.490 [2024-07-11 02:37:03.510115] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.490 [2024-07-11 02:37:03.510346] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.490 [2024-07-11 02:37:03.510589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.490 [2024-07-11 02:37:03.510778] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name raid, state offline 00:13:38.490 [2024-07-11 02:37:03.549489] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.054 ************************************ 00:13:39.054 END TEST raid_function_test_concat 00:13:39.054 ************************************ 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:39.054 00:13:39.054 real 0m3.406s 00:13:39.054 user 0m4.636s 00:13:39.054 sys 0m0.820s 00:13:39.054 02:37:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.054 02:37:03 -- common/autotest_common.sh@10 -- # set +x 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:13:39.054 02:37:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:39.054 02:37:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.054 02:37:03 -- common/autotest_common.sh@10 -- # set +x 00:13:39.054 ************************************ 00:13:39.054 START TEST raid0_resize_test 00:13:39.054 ************************************ 00:13:39.054 02:37:03 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@301 -- # raid_pid=124526 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 124526' 00:13:39.054 02:37:03 -- 
bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:39.054 Process raid pid: 124526 00:13:39.054 02:37:03 -- bdev/bdev_raid.sh@303 -- # waitforlisten 124526 /var/tmp/spdk-raid.sock 00:13:39.054 02:37:03 -- common/autotest_common.sh@819 -- # '[' -z 124526 ']' 00:13:39.054 02:37:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:39.054 02:37:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:39.054 02:37:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:39.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:39.054 02:37:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:39.054 02:37:03 -- common/autotest_common.sh@10 -- # set +x 00:13:39.054 [2024-07-11 02:37:03.961010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:39.054 [2024-07-11 02:37:03.961446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.054 [2024-07-11 02:37:04.109233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.312 [2024-07-11 02:37:04.201503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.312 [2024-07-11 02:37:04.276096] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.879 02:37:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.879 02:37:04 -- common/autotest_common.sh@852 -- # return 0 00:13:39.879 02:37:04 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:40.138 Base_1 00:13:40.138 02:37:05 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:40.396 Base_2 00:13:40.396 02:37:05 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:40.655 [2024-07-11 02:37:05.584442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:40.655 [2024-07-11 02:37:05.586739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:40.655 [2024-07-11 02:37:05.586943] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:13:40.655 [2024-07-11 02:37:05.587066] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:40.655 [2024-07-11 02:37:05.587321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001d10 00:13:40.655 [2024-07-11 02:37:05.587743] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:13:40.655 [2024-07-11 02:37:05.587873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006380 00:13:40.655 [2024-07-11 02:37:05.588159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.655 02:37:05 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:40.914 [2024-07-11 02:37:05.848522] 
bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:40.914 [2024-07-11 02:37:05.848705] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:40.914 true 00:13:40.914 02:37:05 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:40.914 02:37:05 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:13:41.172 [2024-07-11 02:37:06.132708] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.172 02:37:06 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:13:41.172 02:37:06 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:13:41.172 02:37:06 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:13:41.172 02:37:06 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:41.431 [2024-07-11 02:37:06.372620] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:41.431 [2024-07-11 02:37:06.372797] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:41.431 [2024-07-11 02:37:06.372943] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:13:41.431 [2024-07-11 02:37:06.373045] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:41.431 true 00:13:41.431 02:37:06 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:41.431 02:37:06 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:13:41.689 [2024-07-11 02:37:06.628810] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.689 02:37:06 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:13:41.689 02:37:06 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:13:41.689 02:37:06 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:13:41.689 02:37:06 -- bdev/bdev_raid.sh@332 -- # killprocess 124526 00:13:41.689 02:37:06 -- common/autotest_common.sh@926 -- # '[' -z 124526 ']' 00:13:41.689 02:37:06 -- common/autotest_common.sh@930 -- # kill -0 124526 00:13:41.689 02:37:06 -- common/autotest_common.sh@931 -- # uname 00:13:41.689 02:37:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.689 02:37:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124526 00:13:41.689 killing process with pid 124526 00:13:41.689 02:37:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:41.689 02:37:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:41.689 02:37:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124526' 00:13:41.689 02:37:06 -- common/autotest_common.sh@945 -- # kill 124526 00:13:41.689 02:37:06 -- common/autotest_common.sh@950 -- # wait 124526 00:13:41.689 [2024-07-11 02:37:06.665250] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.689 [2024-07-11 02:37:06.665376] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.689 [2024-07-11 02:37:06.665440] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.689 [2024-07-11 02:37:06.665532] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Raid, state offline 00:13:41.689 [2024-07-11 02:37:06.666231] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.948 ************************************ 00:13:41.948 END TEST raid0_resize_test 00:13:41.948 ************************************ 00:13:41.948 02:37:06 -- bdev/bdev_raid.sh@334 -- # return 0 00:13:41.948 00:13:41.948 real 0m3.082s 00:13:41.948 user 0m4.771s 00:13:41.948 sys 0m0.509s 00:13:41.948 02:37:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.948 02:37:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:41.948 02:37:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:41.948 02:37:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.948 02:37:07 -- common/autotest_common.sh@10 -- # set +x 00:13:41.948 ************************************ 00:13:41.948 START TEST raid_state_function_test 00:13:41.948 ************************************ 00:13:41.948 02:37:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:41.948 02:37:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=124610 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124610' 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:42.206 Process raid pid: 124610 00:13:42.206 02:37:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124610 /var/tmp/spdk-raid.sock 00:13:42.206 02:37:07 -- common/autotest_common.sh@819 -- # '[' -z 124610 ']' 00:13:42.206 02:37:07 -- common/autotest_common.sh@823 -- 
# local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.206 02:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.206 02:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:42.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:42.206 02:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.206 02:37:07 -- common/autotest_common.sh@10 -- # set +x 00:13:42.206 [2024-07-11 02:37:07.088644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:42.206 [2024-07-11 02:37:07.089117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.206 [2024-07-11 02:37:07.229447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.465 [2024-07-11 02:37:07.321823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.465 [2024-07-11 02:37:07.395742] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.031 02:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.031 02:37:08 -- common/autotest_common.sh@852 -- # return 0 00:13:43.031 02:37:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:43.297 [2024-07-11 02:37:08.190205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.297 [2024-07-11 02:37:08.190572] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.297 [2024-07-11 02:37:08.190693] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.297 [2024-07-11 02:37:08.190754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.297 02:37:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.568 02:37:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:43.568 "name": "Existed_Raid", 00:13:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.568 "strip_size_kb": 64, 00:13:43.568 "state": "configuring", 00:13:43.568 "raid_level": "raid0", 00:13:43.568 "superblock": false, 00:13:43.568 "num_base_bdevs": 2, 00:13:43.568 "num_base_bdevs_discovered": 0, 
00:13:43.568 "num_base_bdevs_operational": 2, 00:13:43.568 "base_bdevs_list": [ 00:13:43.568 { 00:13:43.568 "name": "BaseBdev1", 00:13:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.568 "is_configured": false, 00:13:43.568 "data_offset": 0, 00:13:43.569 "data_size": 0 00:13:43.569 }, 00:13:43.569 { 00:13:43.569 "name": "BaseBdev2", 00:13:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.569 "is_configured": false, 00:13:43.569 "data_offset": 0, 00:13:43.569 "data_size": 0 00:13:43.569 } 00:13:43.569 ] 00:13:43.569 }' 00:13:43.569 02:37:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.569 02:37:08 -- common/autotest_common.sh@10 -- # set +x 00:13:44.136 02:37:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.395 [2024-07-11 02:37:09.266378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.395 [2024-07-11 02:37:09.266708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:44.395 02:37:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:44.653 [2024-07-11 02:37:09.510422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.653 [2024-07-11 02:37:09.510717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.653 [2024-07-11 02:37:09.510838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.653 [2024-07-11 02:37:09.510902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.653 02:37:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.653 [2024-07-11 02:37:09.704858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.653 BaseBdev1 00:13:44.653 02:37:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:44.653 02:37:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:44.653 02:37:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:44.653 02:37:09 -- common/autotest_common.sh@889 -- # local i 00:13:44.653 02:37:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:44.653 02:37:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:44.653 02:37:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.911 02:37:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.170 [ 00:13:45.170 { 00:13:45.170 "name": "BaseBdev1", 00:13:45.170 "aliases": [ 00:13:45.170 "09fc09ba-8fe7-4d3b-a4b1-b2ca3ea56763" 00:13:45.170 ], 00:13:45.170 "product_name": "Malloc disk", 00:13:45.170 "block_size": 512, 00:13:45.170 "num_blocks": 65536, 00:13:45.170 "uuid": "09fc09ba-8fe7-4d3b-a4b1-b2ca3ea56763", 00:13:45.170 "assigned_rate_limits": { 00:13:45.170 "rw_ios_per_sec": 0, 00:13:45.170 "rw_mbytes_per_sec": 0, 00:13:45.170 "r_mbytes_per_sec": 0, 00:13:45.170 "w_mbytes_per_sec": 0 00:13:45.170 }, 00:13:45.170 "claimed": true, 00:13:45.170 "claim_type": "exclusive_write", 00:13:45.170 "zoned": false, 
00:13:45.170 "supported_io_types": { 00:13:45.170 "read": true, 00:13:45.170 "write": true, 00:13:45.170 "unmap": true, 00:13:45.170 "write_zeroes": true, 00:13:45.170 "flush": true, 00:13:45.170 "reset": true, 00:13:45.170 "compare": false, 00:13:45.170 "compare_and_write": false, 00:13:45.170 "abort": true, 00:13:45.170 "nvme_admin": false, 00:13:45.170 "nvme_io": false 00:13:45.170 }, 00:13:45.170 "memory_domains": [ 00:13:45.170 { 00:13:45.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.170 "dma_device_type": 2 00:13:45.170 } 00:13:45.170 ], 00:13:45.170 "driver_specific": {} 00:13:45.170 } 00:13:45.170 ] 00:13:45.170 02:37:10 -- common/autotest_common.sh@895 -- # return 0 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.170 02:37:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.429 02:37:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.429 "name": "Existed_Raid", 00:13:45.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.429 "strip_size_kb": 64, 00:13:45.429 "state": "configuring", 00:13:45.429 "raid_level": "raid0", 00:13:45.429 "superblock": false, 00:13:45.429 "num_base_bdevs": 2, 00:13:45.429 "num_base_bdevs_discovered": 1, 00:13:45.429 "num_base_bdevs_operational": 2, 00:13:45.429 "base_bdevs_list": [ 00:13:45.429 { 00:13:45.429 "name": "BaseBdev1", 00:13:45.429 "uuid": "09fc09ba-8fe7-4d3b-a4b1-b2ca3ea56763", 00:13:45.429 "is_configured": true, 00:13:45.429 "data_offset": 0, 00:13:45.429 "data_size": 65536 00:13:45.429 }, 00:13:45.429 { 00:13:45.429 "name": "BaseBdev2", 00:13:45.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.429 "is_configured": false, 00:13:45.429 "data_offset": 0, 00:13:45.429 "data_size": 0 00:13:45.429 } 00:13:45.429 ] 00:13:45.429 }' 00:13:45.429 02:37:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.429 02:37:10 -- common/autotest_common.sh@10 -- # set +x 00:13:45.995 02:37:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.254 [2024-07-11 02:37:11.209298] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.254 [2024-07-11 02:37:11.209667] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:13:46.254 02:37:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:46.254 02:37:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.512 [2024-07-11 
02:37:11.421340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.512 [2024-07-11 02:37:11.423619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.512 [2024-07-11 02:37:11.423814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.512 02:37:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.771 02:37:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.771 "name": "Existed_Raid", 00:13:46.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.771 "strip_size_kb": 64, 00:13:46.771 "state": "configuring", 00:13:46.771 "raid_level": "raid0", 00:13:46.771 "superblock": false, 00:13:46.771 "num_base_bdevs": 2, 00:13:46.771 "num_base_bdevs_discovered": 1, 00:13:46.771 "num_base_bdevs_operational": 2, 00:13:46.771 "base_bdevs_list": [ 00:13:46.771 { 00:13:46.771 "name": "BaseBdev1", 00:13:46.771 "uuid": "09fc09ba-8fe7-4d3b-a4b1-b2ca3ea56763", 00:13:46.771 "is_configured": true, 00:13:46.771 "data_offset": 0, 00:13:46.771 "data_size": 65536 00:13:46.771 }, 00:13:46.771 { 00:13:46.771 "name": "BaseBdev2", 00:13:46.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.771 "is_configured": false, 00:13:46.771 "data_offset": 0, 00:13:46.771 "data_size": 0 00:13:46.771 } 00:13:46.771 ] 00:13:46.771 }' 00:13:46.771 02:37:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.771 02:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 02:37:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.597 [2024-07-11 02:37:12.618885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.597 [2024-07-11 02:37:12.619292] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:13:47.597 [2024-07-11 02:37:12.619348] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:47.597 [2024-07-11 02:37:12.619778] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:13:47.597 [2024-07-11 02:37:12.620481] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:13:47.597 [2024-07-11 02:37:12.620638] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x616000006380 00:13:47.597 BaseBdev2 00:13:47.597 [2024-07-11 02:37:12.621152] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.597 02:37:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:47.597 02:37:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:47.597 02:37:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:47.597 02:37:12 -- common/autotest_common.sh@889 -- # local i 00:13:47.597 02:37:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:47.597 02:37:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:47.597 02:37:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.855 02:37:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:48.113 [ 00:13:48.113 { 00:13:48.113 "name": "BaseBdev2", 00:13:48.113 "aliases": [ 00:13:48.113 "db64378e-680e-4297-94fb-d5ff70f69948" 00:13:48.113 ], 00:13:48.113 "product_name": "Malloc disk", 00:13:48.113 "block_size": 512, 00:13:48.113 "num_blocks": 65536, 00:13:48.113 "uuid": "db64378e-680e-4297-94fb-d5ff70f69948", 00:13:48.113 "assigned_rate_limits": { 00:13:48.113 "rw_ios_per_sec": 0, 00:13:48.113 "rw_mbytes_per_sec": 0, 00:13:48.113 "r_mbytes_per_sec": 0, 00:13:48.113 "w_mbytes_per_sec": 0 00:13:48.113 }, 00:13:48.113 "claimed": true, 00:13:48.113 "claim_type": "exclusive_write", 00:13:48.113 "zoned": false, 00:13:48.113 "supported_io_types": { 00:13:48.113 "read": true, 00:13:48.113 "write": true, 00:13:48.113 "unmap": true, 00:13:48.113 "write_zeroes": true, 00:13:48.113 "flush": true, 00:13:48.113 "reset": true, 00:13:48.113 "compare": false, 00:13:48.113 "compare_and_write": false, 00:13:48.113 "abort": true, 00:13:48.113 "nvme_admin": false, 00:13:48.113 "nvme_io": false 00:13:48.113 }, 00:13:48.113 "memory_domains": [ 00:13:48.113 { 00:13:48.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.113 "dma_device_type": 2 00:13:48.113 } 00:13:48.113 ], 00:13:48.113 "driver_specific": {} 00:13:48.113 } 00:13:48.113 ] 00:13:48.113 02:37:13 -- common/autotest_common.sh@895 -- # return 0 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.113 02:37:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.371 02:37:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.371 "name": 
"Existed_Raid", 00:13:48.371 "uuid": "f55da1b5-fa77-4fd3-b6ed-2f721aa7d451", 00:13:48.371 "strip_size_kb": 64, 00:13:48.371 "state": "online", 00:13:48.371 "raid_level": "raid0", 00:13:48.371 "superblock": false, 00:13:48.371 "num_base_bdevs": 2, 00:13:48.371 "num_base_bdevs_discovered": 2, 00:13:48.371 "num_base_bdevs_operational": 2, 00:13:48.371 "base_bdevs_list": [ 00:13:48.371 { 00:13:48.371 "name": "BaseBdev1", 00:13:48.371 "uuid": "09fc09ba-8fe7-4d3b-a4b1-b2ca3ea56763", 00:13:48.371 "is_configured": true, 00:13:48.371 "data_offset": 0, 00:13:48.371 "data_size": 65536 00:13:48.371 }, 00:13:48.371 { 00:13:48.371 "name": "BaseBdev2", 00:13:48.371 "uuid": "db64378e-680e-4297-94fb-d5ff70f69948", 00:13:48.371 "is_configured": true, 00:13:48.371 "data_offset": 0, 00:13:48.371 "data_size": 65536 00:13:48.371 } 00:13:48.371 ] 00:13:48.371 }' 00:13:48.371 02:37:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.371 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:49.306 [2024-07-11 02:37:14.219606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.306 [2024-07-11 02:37:14.219827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.306 [2024-07-11 02:37:14.220088] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.306 02:37:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.564 02:37:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.564 "name": "Existed_Raid", 00:13:49.564 "uuid": "f55da1b5-fa77-4fd3-b6ed-2f721aa7d451", 00:13:49.564 "strip_size_kb": 64, 00:13:49.564 "state": "offline", 00:13:49.564 "raid_level": "raid0", 00:13:49.564 "superblock": false, 00:13:49.564 "num_base_bdevs": 2, 00:13:49.564 "num_base_bdevs_discovered": 1, 00:13:49.564 "num_base_bdevs_operational": 1, 00:13:49.564 "base_bdevs_list": [ 00:13:49.564 { 00:13:49.564 "name": null, 00:13:49.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.564 "is_configured": false, 00:13:49.564 "data_offset": 0, 00:13:49.564 "data_size": 65536 
00:13:49.564 }, 00:13:49.564 { 00:13:49.564 "name": "BaseBdev2", 00:13:49.564 "uuid": "db64378e-680e-4297-94fb-d5ff70f69948", 00:13:49.564 "is_configured": true, 00:13:49.564 "data_offset": 0, 00:13:49.564 "data_size": 65536 00:13:49.564 } 00:13:49.564 ] 00:13:49.564 }' 00:13:49.564 02:37:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.564 02:37:14 -- common/autotest_common.sh@10 -- # set +x 00:13:50.130 02:37:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:50.130 02:37:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:50.130 02:37:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.130 02:37:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:50.389 02:37:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:50.389 02:37:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.389 02:37:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:50.647 [2024-07-11 02:37:15.636131] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.647 [2024-07-11 02:37:15.636361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:13:50.647 02:37:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:50.647 02:37:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:50.647 02:37:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.647 02:37:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.906 02:37:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:50.906 02:37:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:50.906 02:37:15 -- bdev/bdev_raid.sh@287 -- # killprocess 124610 00:13:50.906 02:37:15 -- common/autotest_common.sh@926 -- # '[' -z 124610 ']' 00:13:50.906 02:37:15 -- common/autotest_common.sh@930 -- # kill -0 124610 00:13:50.906 02:37:15 -- common/autotest_common.sh@931 -- # uname 00:13:50.906 02:37:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.906 02:37:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124610 00:13:50.906 killing process with pid 124610 00:13:50.906 02:37:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:50.906 02:37:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:50.906 02:37:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124610' 00:13:50.906 02:37:15 -- common/autotest_common.sh@945 -- # kill 124610 00:13:50.906 02:37:15 -- common/autotest_common.sh@950 -- # wait 124610 00:13:50.906 [2024-07-11 02:37:15.949971] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.906 [2024-07-11 02:37:15.950125] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.165 ************************************ 00:13:51.165 END TEST raid_state_function_test 00:13:51.165 ************************************ 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:51.165 00:13:51.165 real 0m9.124s 00:13:51.165 user 0m16.746s 00:13:51.165 sys 0m1.145s 00:13:51.165 02:37:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.165 02:37:16 -- common/autotest_common.sh@10 -- # set +x 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 
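[Editor's note] The raid_state_function_test run that just finished drives the whole raid0 lifecycle over the plugin RPC socket. Below is a minimal sketch of that sequence, reconstructed from the xtrace output above (the rpc.py path, socket flag, and every command are exactly as traced; the surrounding checks, such as verify_raid_bdev_state, live in the bdev_raid.sh test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # create two 32 MiB malloc base bdevs (65536 blocks x 512 B, as bdev_get_bdevs reports)
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2

    # assemble a raid0 bdev with a 64 KiB strip size; state moves configuring -> online
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # state checks filter bdev_raid_get_bdevs output with jq, as verify_raid_bdev_state does
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # raid0 has no redundancy (has_redundancy returns 1), so deleting one base bdev
    # drops the array from online to offline rather than leaving it degraded
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1

The trace then deletes the remaining base bdev and kills the bdev_svc process (pid 124610), which is why the cleanup DEBUG lines above show Existed_Raid finishing in state offline.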
00:13:51.165 02:37:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:51.165 02:37:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.165 02:37:16 -- common/autotest_common.sh@10 -- # set +x 00:13:51.165 ************************************ 00:13:51.165 START TEST raid_state_function_test_sb 00:13:51.165 ************************************ 00:13:51.165 02:37:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=124935 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124935' 00:13:51.165 Process raid pid: 124935 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.165 02:37:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124935 /var/tmp/spdk-raid.sock 00:13:51.165 02:37:16 -- common/autotest_common.sh@819 -- # '[' -z 124935 ']' 00:13:51.165 02:37:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.165 02:37:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.165 02:37:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.165 02:37:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.165 02:37:16 -- common/autotest_common.sh@10 -- # set +x 00:13:51.424 [2024-07-11 02:37:16.274831] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
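[Editor's note] From here the log repeats the same state-function test in its superblock (_sb) variant. The one functional difference, taken directly from the commands and bdev_get_bdevs output traced later in this run, is the -s flag on raid creation:

    # raid_state_function_test_sb passes -s, so a raid superblock is written to each base bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # with the superblock in place the log reports data_offset 2048 and data_size 63488
    # per base bdev (2048 blocks x 512 B reserved), versus 0 and 65536 without -s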
00:13:51.424 [2024-07-11 02:37:16.275208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.424 [2024-07-11 02:37:16.423339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.682 [2024-07-11 02:37:16.523280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.682 [2024-07-11 02:37:16.595884] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.246 02:37:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.246 02:37:17 -- common/autotest_common.sh@852 -- # return 0 00:13:52.246 02:37:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:52.505 [2024-07-11 02:37:17.488496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.505 [2024-07-11 02:37:17.488833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.505 [2024-07-11 02:37:17.488977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.505 [2024-07-11 02:37:17.489036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.505 02:37:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.764 02:37:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:52.764 "name": "Existed_Raid", 00:13:52.764 "uuid": "d0a04bad-9ab0-4d81-93fb-8add2bcfcfde", 00:13:52.764 "strip_size_kb": 64, 00:13:52.764 "state": "configuring", 00:13:52.764 "raid_level": "raid0", 00:13:52.764 "superblock": true, 00:13:52.764 "num_base_bdevs": 2, 00:13:52.764 "num_base_bdevs_discovered": 0, 00:13:52.764 "num_base_bdevs_operational": 2, 00:13:52.764 "base_bdevs_list": [ 00:13:52.764 { 00:13:52.764 "name": "BaseBdev1", 00:13:52.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.764 "is_configured": false, 00:13:52.764 "data_offset": 0, 00:13:52.764 "data_size": 0 00:13:52.764 }, 00:13:52.764 { 00:13:52.764 "name": "BaseBdev2", 00:13:52.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.764 "is_configured": false, 00:13:52.764 "data_offset": 0, 00:13:52.764 "data_size": 0 00:13:52.764 } 00:13:52.764 ] 00:13:52.764 }' 00:13:52.764 02:37:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.764 02:37:17 -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.341 02:37:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:53.613 [2024-07-11 02:37:18.620606] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.613 [2024-07-11 02:37:18.620922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:53.613 02:37:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:53.871 [2024-07-11 02:37:18.864743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.871 [2024-07-11 02:37:18.865140] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.871 [2024-07-11 02:37:18.865289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.871 [2024-07-11 02:37:18.865355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.871 02:37:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.130 [2024-07-11 02:37:19.103416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.130 BaseBdev1 00:13:54.130 02:37:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:54.130 02:37:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:54.130 02:37:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.130 02:37:19 -- common/autotest_common.sh@889 -- # local i 00:13:54.130 02:37:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.130 02:37:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.130 02:37:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.389 02:37:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.648 [ 00:13:54.648 { 00:13:54.648 "name": "BaseBdev1", 00:13:54.648 "aliases": [ 00:13:54.648 "d457c386-ecba-47d4-8726-08f8681b960d" 00:13:54.648 ], 00:13:54.648 "product_name": "Malloc disk", 00:13:54.648 "block_size": 512, 00:13:54.648 "num_blocks": 65536, 00:13:54.648 "uuid": "d457c386-ecba-47d4-8726-08f8681b960d", 00:13:54.648 "assigned_rate_limits": { 00:13:54.648 "rw_ios_per_sec": 0, 00:13:54.648 "rw_mbytes_per_sec": 0, 00:13:54.648 "r_mbytes_per_sec": 0, 00:13:54.648 "w_mbytes_per_sec": 0 00:13:54.648 }, 00:13:54.648 "claimed": true, 00:13:54.648 "claim_type": "exclusive_write", 00:13:54.648 "zoned": false, 00:13:54.648 "supported_io_types": { 00:13:54.648 "read": true, 00:13:54.648 "write": true, 00:13:54.648 "unmap": true, 00:13:54.648 "write_zeroes": true, 00:13:54.648 "flush": true, 00:13:54.648 "reset": true, 00:13:54.648 "compare": false, 00:13:54.648 "compare_and_write": false, 00:13:54.648 "abort": true, 00:13:54.648 "nvme_admin": false, 00:13:54.648 "nvme_io": false 00:13:54.648 }, 00:13:54.648 "memory_domains": [ 00:13:54.648 { 00:13:54.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.648 "dma_device_type": 2 00:13:54.648 } 00:13:54.648 ], 00:13:54.648 "driver_specific": {} 00:13:54.648 } 00:13:54.648 ] 00:13:54.648 
02:37:19 -- common/autotest_common.sh@895 -- # return 0 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.648 "name": "Existed_Raid", 00:13:54.648 "uuid": "f76d64c8-0a7b-4d97-a433-852ad41b424c", 00:13:54.648 "strip_size_kb": 64, 00:13:54.648 "state": "configuring", 00:13:54.648 "raid_level": "raid0", 00:13:54.648 "superblock": true, 00:13:54.648 "num_base_bdevs": 2, 00:13:54.648 "num_base_bdevs_discovered": 1, 00:13:54.648 "num_base_bdevs_operational": 2, 00:13:54.648 "base_bdevs_list": [ 00:13:54.648 { 00:13:54.648 "name": "BaseBdev1", 00:13:54.648 "uuid": "d457c386-ecba-47d4-8726-08f8681b960d", 00:13:54.648 "is_configured": true, 00:13:54.648 "data_offset": 2048, 00:13:54.648 "data_size": 63488 00:13:54.648 }, 00:13:54.648 { 00:13:54.648 "name": "BaseBdev2", 00:13:54.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.648 "is_configured": false, 00:13:54.648 "data_offset": 0, 00:13:54.648 "data_size": 0 00:13:54.648 } 00:13:54.648 ] 00:13:54.648 }' 00:13:54.648 02:37:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.648 02:37:19 -- common/autotest_common.sh@10 -- # set +x 00:13:55.584 02:37:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.584 [2024-07-11 02:37:20.631939] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.584 [2024-07-11 02:37:20.632293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:13:55.584 02:37:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:55.584 02:37:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:55.847 02:37:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.107 BaseBdev1 00:13:56.107 02:37:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:56.107 02:37:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:56.107 02:37:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:56.107 02:37:21 -- common/autotest_common.sh@889 -- # local i 00:13:56.107 02:37:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:56.107 02:37:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:56.107 02:37:21 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.365 02:37:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.624 [ 00:13:56.624 { 00:13:56.624 "name": "BaseBdev1", 00:13:56.624 "aliases": [ 00:13:56.624 "33c039c2-d2dc-465a-b764-8d1caba0c1c9" 00:13:56.624 ], 00:13:56.624 "product_name": "Malloc disk", 00:13:56.624 "block_size": 512, 00:13:56.624 "num_blocks": 65536, 00:13:56.624 "uuid": "33c039c2-d2dc-465a-b764-8d1caba0c1c9", 00:13:56.624 "assigned_rate_limits": { 00:13:56.624 "rw_ios_per_sec": 0, 00:13:56.624 "rw_mbytes_per_sec": 0, 00:13:56.624 "r_mbytes_per_sec": 0, 00:13:56.624 "w_mbytes_per_sec": 0 00:13:56.624 }, 00:13:56.624 "claimed": false, 00:13:56.624 "zoned": false, 00:13:56.624 "supported_io_types": { 00:13:56.624 "read": true, 00:13:56.624 "write": true, 00:13:56.624 "unmap": true, 00:13:56.624 "write_zeroes": true, 00:13:56.624 "flush": true, 00:13:56.624 "reset": true, 00:13:56.624 "compare": false, 00:13:56.624 "compare_and_write": false, 00:13:56.624 "abort": true, 00:13:56.624 "nvme_admin": false, 00:13:56.624 "nvme_io": false 00:13:56.624 }, 00:13:56.624 "memory_domains": [ 00:13:56.624 { 00:13:56.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.624 "dma_device_type": 2 00:13:56.624 } 00:13:56.624 ], 00:13:56.624 "driver_specific": {} 00:13:56.624 } 00:13:56.624 ] 00:13:56.624 02:37:21 -- common/autotest_common.sh@895 -- # return 0 00:13:56.624 02:37:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:56.883 [2024-07-11 02:37:21.756585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.883 [2024-07-11 02:37:21.759223] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.883 [2024-07-11 02:37:21.759419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.883 "name": "Existed_Raid", 00:13:56.883 "uuid": "cc12a226-6940-471d-80a8-73ac74e633b9", 00:13:56.883 "strip_size_kb": 64, 00:13:56.883 "state": 
"configuring", 00:13:56.883 "raid_level": "raid0", 00:13:56.883 "superblock": true, 00:13:56.883 "num_base_bdevs": 2, 00:13:56.883 "num_base_bdevs_discovered": 1, 00:13:56.883 "num_base_bdevs_operational": 2, 00:13:56.883 "base_bdevs_list": [ 00:13:56.883 { 00:13:56.883 "name": "BaseBdev1", 00:13:56.883 "uuid": "33c039c2-d2dc-465a-b764-8d1caba0c1c9", 00:13:56.883 "is_configured": true, 00:13:56.883 "data_offset": 2048, 00:13:56.883 "data_size": 63488 00:13:56.883 }, 00:13:56.883 { 00:13:56.883 "name": "BaseBdev2", 00:13:56.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.883 "is_configured": false, 00:13:56.883 "data_offset": 0, 00:13:56.883 "data_size": 0 00:13:56.883 } 00:13:56.883 ] 00:13:56.883 }' 00:13:56.883 02:37:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.883 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:13:57.818 02:37:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.818 [2024-07-11 02:37:22.822496] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.818 [2024-07-11 02:37:22.823025] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:13:57.818 BaseBdev2 00:13:57.818 [2024-07-11 02:37:22.823240] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.818 [2024-07-11 02:37:22.823543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:13:57.818 [2024-07-11 02:37:22.824097] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:13:57.818 [2024-07-11 02:37:22.824226] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:13:57.818 [2024-07-11 02:37:22.824500] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.818 02:37:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:57.818 02:37:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:57.818 02:37:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:57.818 02:37:22 -- common/autotest_common.sh@889 -- # local i 00:13:57.818 02:37:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.818 02:37:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.818 02:37:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.077 02:37:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.336 [ 00:13:58.336 { 00:13:58.336 "name": "BaseBdev2", 00:13:58.336 "aliases": [ 00:13:58.336 "aad24bfe-d27a-4c77-b3d4-faff7277f350" 00:13:58.336 ], 00:13:58.336 "product_name": "Malloc disk", 00:13:58.336 "block_size": 512, 00:13:58.336 "num_blocks": 65536, 00:13:58.336 "uuid": "aad24bfe-d27a-4c77-b3d4-faff7277f350", 00:13:58.336 "assigned_rate_limits": { 00:13:58.336 "rw_ios_per_sec": 0, 00:13:58.336 "rw_mbytes_per_sec": 0, 00:13:58.336 "r_mbytes_per_sec": 0, 00:13:58.336 "w_mbytes_per_sec": 0 00:13:58.336 }, 00:13:58.336 "claimed": true, 00:13:58.336 "claim_type": "exclusive_write", 00:13:58.336 "zoned": false, 00:13:58.336 "supported_io_types": { 00:13:58.336 "read": true, 00:13:58.336 "write": true, 00:13:58.336 "unmap": true, 00:13:58.336 "write_zeroes": true, 00:13:58.336 "flush": true, 00:13:58.336 
"reset": true, 00:13:58.336 "compare": false, 00:13:58.336 "compare_and_write": false, 00:13:58.336 "abort": true, 00:13:58.336 "nvme_admin": false, 00:13:58.336 "nvme_io": false 00:13:58.336 }, 00:13:58.336 "memory_domains": [ 00:13:58.336 { 00:13:58.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.336 "dma_device_type": 2 00:13:58.336 } 00:13:58.336 ], 00:13:58.336 "driver_specific": {} 00:13:58.336 } 00:13:58.336 ] 00:13:58.336 02:37:23 -- common/autotest_common.sh@895 -- # return 0 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.336 02:37:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.595 02:37:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.595 "name": "Existed_Raid", 00:13:58.595 "uuid": "cc12a226-6940-471d-80a8-73ac74e633b9", 00:13:58.595 "strip_size_kb": 64, 00:13:58.595 "state": "online", 00:13:58.595 "raid_level": "raid0", 00:13:58.595 "superblock": true, 00:13:58.595 "num_base_bdevs": 2, 00:13:58.595 "num_base_bdevs_discovered": 2, 00:13:58.595 "num_base_bdevs_operational": 2, 00:13:58.595 "base_bdevs_list": [ 00:13:58.595 { 00:13:58.595 "name": "BaseBdev1", 00:13:58.595 "uuid": "33c039c2-d2dc-465a-b764-8d1caba0c1c9", 00:13:58.595 "is_configured": true, 00:13:58.595 "data_offset": 2048, 00:13:58.595 "data_size": 63488 00:13:58.595 }, 00:13:58.595 { 00:13:58.595 "name": "BaseBdev2", 00:13:58.595 "uuid": "aad24bfe-d27a-4c77-b3d4-faff7277f350", 00:13:58.595 "is_configured": true, 00:13:58.595 "data_offset": 2048, 00:13:58.595 "data_size": 63488 00:13:58.595 } 00:13:58.595 ] 00:13:58.595 }' 00:13:58.595 02:37:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.595 02:37:23 -- common/autotest_common.sh@10 -- # set +x 00:13:59.162 02:37:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:59.420 [2024-07-11 02:37:24.266959] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.420 [2024-07-11 02:37:24.267183] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.420 [2024-07-11 02:37:24.267381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:59.420 
02:37:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.420 02:37:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.679 02:37:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.679 "name": "Existed_Raid", 00:13:59.679 "uuid": "cc12a226-6940-471d-80a8-73ac74e633b9", 00:13:59.679 "strip_size_kb": 64, 00:13:59.679 "state": "offline", 00:13:59.679 "raid_level": "raid0", 00:13:59.679 "superblock": true, 00:13:59.679 "num_base_bdevs": 2, 00:13:59.679 "num_base_bdevs_discovered": 1, 00:13:59.679 "num_base_bdevs_operational": 1, 00:13:59.679 "base_bdevs_list": [ 00:13:59.679 { 00:13:59.679 "name": null, 00:13:59.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.679 "is_configured": false, 00:13:59.679 "data_offset": 2048, 00:13:59.679 "data_size": 63488 00:13:59.679 }, 00:13:59.679 { 00:13:59.679 "name": "BaseBdev2", 00:13:59.679 "uuid": "aad24bfe-d27a-4c77-b3d4-faff7277f350", 00:13:59.679 "is_configured": true, 00:13:59.679 "data_offset": 2048, 00:13:59.679 "data_size": 63488 00:13:59.679 } 00:13:59.679 ] 00:13:59.679 }' 00:13:59.679 02:37:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.679 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:14:00.246 02:37:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:00.246 02:37:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:00.246 02:37:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.246 02:37:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:00.504 02:37:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:00.504 02:37:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.504 02:37:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:00.763 [2024-07-11 02:37:25.636658] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.763 [2024-07-11 02:37:25.636867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:14:00.763 02:37:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:00.763 02:37:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:00.763 02:37:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.763 02:37:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:01.046 02:37:25 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:01.046 02:37:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:01.046 02:37:25 -- bdev/bdev_raid.sh@287 -- # killprocess 124935 00:14:01.046 02:37:25 -- common/autotest_common.sh@926 -- # '[' -z 124935 ']' 00:14:01.046 02:37:25 -- common/autotest_common.sh@930 -- # kill -0 124935 00:14:01.046 02:37:25 -- common/autotest_common.sh@931 -- # uname 00:14:01.046 02:37:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:01.046 02:37:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124935 00:14:01.046 02:37:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:01.046 02:37:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:01.046 02:37:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124935' 00:14:01.046 killing process with pid 124935 00:14:01.046 02:37:25 -- common/autotest_common.sh@945 -- # kill 124935 00:14:01.046 [2024-07-11 02:37:25.928129] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.046 02:37:25 -- common/autotest_common.sh@950 -- # wait 124935 00:14:01.046 [2024-07-11 02:37:25.928234] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.305 ************************************ 00:14:01.305 END TEST raid_state_function_test_sb 00:14:01.305 ************************************ 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:01.305 00:14:01.305 real 0m9.948s 00:14:01.305 user 0m18.382s 00:14:01.305 sys 0m1.086s 00:14:01.305 02:37:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.305 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:01.305 02:37:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:01.305 02:37:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.305 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:14:01.305 ************************************ 00:14:01.305 START TEST raid_superblock_test 00:14:01.305 ************************************ 00:14:01.305 02:37:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=125280 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:01.305 02:37:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125280 /var/tmp/spdk-raid.sock 00:14:01.305 02:37:26 -- common/autotest_common.sh@819 -- # '[' -z 125280 ']' 00:14:01.305 02:37:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:01.305 02:37:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:01.305 02:37:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:01.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:01.305 02:37:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:01.305 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:14:01.305 [2024-07-11 02:37:26.270402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:01.305 [2024-07-11 02:37:26.270602] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125280 ] 00:14:01.564 [2024-07-11 02:37:26.413216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.564 [2024-07-11 02:37:26.519440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.564 [2024-07-11 02:37:26.595900] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.130 02:37:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:02.130 02:37:27 -- common/autotest_common.sh@852 -- # return 0 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.130 02:37:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:02.387 malloc1 00:14:02.387 02:37:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:02.645 [2024-07-11 02:37:27.576803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:02.645 [2024-07-11 02:37:27.576931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.645 [2024-07-11 02:37:27.576967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:14:02.645 [2024-07-11 02:37:27.577021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.645 [2024-07-11 02:37:27.579691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.645 [2024-07-11 02:37:27.579736] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.645 pt1 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.645 02:37:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:02.903 malloc2 00:14:02.903 02:37:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.903 [2024-07-11 02:37:27.954412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.903 [2024-07-11 02:37:27.954506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.903 [2024-07-11 02:37:27.954545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:02.903 [2024-07-11 02:37:27.954587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.903 [2024-07-11 02:37:27.957004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.903 [2024-07-11 02:37:27.957052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.903 pt2 00:14:02.903 02:37:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:02.903 02:37:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:02.903 02:37:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:03.161 [2024-07-11 02:37:28.138516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.161 [2024-07-11 02:37:28.140669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.161 [2024-07-11 02:37:28.140859] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:03.161 [2024-07-11 02:37:28.140875] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.161 [2024-07-11 02:37:28.141027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:03.161 [2024-07-11 02:37:28.141407] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:03.161 [2024-07-11 02:37:28.141429] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006f80 00:14:03.161 [2024-07-11 02:37:28.141573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.161 02:37:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.419 02:37:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:03.419 "name": "raid_bdev1", 00:14:03.419 "uuid": "5190f760-c550-40a5-91ec-6d83d2f18010", 00:14:03.419 "strip_size_kb": 64, 00:14:03.419 "state": "online", 00:14:03.419 "raid_level": "raid0", 00:14:03.419 "superblock": true, 00:14:03.419 "num_base_bdevs": 2, 00:14:03.419 "num_base_bdevs_discovered": 2, 00:14:03.419 "num_base_bdevs_operational": 2, 00:14:03.419 "base_bdevs_list": [ 00:14:03.419 { 00:14:03.419 "name": "pt1", 00:14:03.419 "uuid": "d54f4bd6-b055-5aa0-9ab8-6393bf69dabe", 00:14:03.419 "is_configured": true, 00:14:03.419 "data_offset": 2048, 00:14:03.419 "data_size": 63488 00:14:03.419 }, 00:14:03.419 { 00:14:03.419 "name": "pt2", 00:14:03.419 "uuid": "69a061b8-8412-56f0-bf06-239aaf4bb5b5", 00:14:03.419 "is_configured": true, 00:14:03.419 "data_offset": 2048, 00:14:03.419 "data_size": 63488 00:14:03.419 } 00:14:03.420 ] 00:14:03.420 }' 00:14:03.420 02:37:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:03.420 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:03.985 02:37:29 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:03.985 02:37:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:04.244 [2024-07-11 02:37:29.222868] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.244 02:37:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5190f760-c550-40a5-91ec-6d83d2f18010 00:14:04.244 02:37:29 -- bdev/bdev_raid.sh@380 -- # '[' -z 5190f760-c550-40a5-91ec-6d83d2f18010 ']' 00:14:04.244 02:37:29 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:04.502 [2024-07-11 02:37:29.462682] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.502 [2024-07-11 02:37:29.462711] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.502 [2024-07-11 02:37:29.462828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.502 [2024-07-11 02:37:29.462894] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.502 [2024-07-11 02:37:29.462908] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid_bdev1, state offline 00:14:04.502 02:37:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.502 02:37:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.761 02:37:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:05.019 02:37:30 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:05.019 02:37:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:05.277 02:37:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:05.277 02:37:30 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:05.277 02:37:30 -- common/autotest_common.sh@640 -- # local es=0 00:14:05.277 02:37:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:05.277 02:37:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.277 02:37:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:05.277 02:37:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.277 02:37:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:05.277 02:37:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.277 02:37:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:05.277 02:37:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.277 02:37:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:05.277 02:37:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:05.535 [2024-07-11 02:37:30.401777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:05.535 [2024-07-11 02:37:30.403745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:05.535 [2024-07-11 02:37:30.403827] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:05.535 [2024-07-11 02:37:30.403912] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:05.535 [2024-07-11 02:37:30.403955] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.535 [2024-07-11 02:37:30.403967] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid_bdev1, state configuring 00:14:05.535 request: 00:14:05.535 { 00:14:05.535 "name": "raid_bdev1", 00:14:05.535 "raid_level": "raid0", 00:14:05.535 "base_bdevs": [ 00:14:05.535 "malloc1", 00:14:05.535 "malloc2" 00:14:05.535 ], 00:14:05.535 "superblock": false, 00:14:05.535 "strip_size_kb": 64, 00:14:05.535 "method": "bdev_raid_create", 00:14:05.535 "req_id": 1 00:14:05.535 } 00:14:05.535 Got JSON-RPC error response 00:14:05.535 response: 00:14:05.535 { 00:14:05.535 "code": -17, 00:14:05.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:05.535 } 00:14:05.535 02:37:30 -- common/autotest_common.sh@643 -- # es=1 00:14:05.535 02:37:30 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:05.535 02:37:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:05.535 02:37:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:05.535 02:37:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.535 02:37:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:05.535 02:37:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:05.535 02:37:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:05.536 02:37:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.793 [2024-07-11 02:37:30.773804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.793 [2024-07-11 02:37:30.773901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.793 [2024-07-11 02:37:30.773939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:05.793 [2024-07-11 02:37:30.773967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.793 [2024-07-11 02:37:30.776070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.793 [2024-07-11 02:37:30.776123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.793 [2024-07-11 02:37:30.776193] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:05.793 [2024-07-11 02:37:30.776250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.793 pt1 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.793 02:37:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.051 02:37:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.051 "name": "raid_bdev1", 00:14:06.051 "uuid": "5190f760-c550-40a5-91ec-6d83d2f18010", 00:14:06.051 "strip_size_kb": 64, 00:14:06.051 "state": "configuring", 00:14:06.051 "raid_level": "raid0", 00:14:06.051 "superblock": true, 00:14:06.051 "num_base_bdevs": 2, 00:14:06.051 "num_base_bdevs_discovered": 1, 00:14:06.051 "num_base_bdevs_operational": 2, 00:14:06.051 "base_bdevs_list": [ 00:14:06.051 { 00:14:06.051 "name": "pt1", 00:14:06.051 "uuid": "d54f4bd6-b055-5aa0-9ab8-6393bf69dabe", 00:14:06.051 "is_configured": true, 00:14:06.051 "data_offset": 2048, 00:14:06.051 "data_size": 63488 00:14:06.051 }, 00:14:06.051 { 00:14:06.051 "name": null, 00:14:06.051 "uuid": 
"69a061b8-8412-56f0-bf06-239aaf4bb5b5", 00:14:06.051 "is_configured": false, 00:14:06.051 "data_offset": 2048, 00:14:06.051 "data_size": 63488 00:14:06.051 } 00:14:06.051 ] 00:14:06.051 }' 00:14:06.051 02:37:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.051 02:37:30 -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 02:37:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:06.617 02:37:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:06.617 02:37:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:06.617 02:37:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.876 [2024-07-11 02:37:31.842080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.876 [2024-07-11 02:37:31.842197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.876 [2024-07-11 02:37:31.842236] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:06.876 [2024-07-11 02:37:31.842264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.876 [2024-07-11 02:37:31.842709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.876 [2024-07-11 02:37:31.842749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.876 [2024-07-11 02:37:31.842833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:06.876 [2024-07-11 02:37:31.842871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.876 [2024-07-11 02:37:31.842993] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:14:06.876 [2024-07-11 02:37:31.843009] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:06.876 [2024-07-11 02:37:31.843084] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:14:06.876 [2024-07-11 02:37:31.843394] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:14:06.876 [2024-07-11 02:37:31.843409] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:14:06.876 [2024-07-11 02:37:31.843562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.876 pt2 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.876 02:37:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.134 02:37:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.134 "name": "raid_bdev1", 00:14:07.134 "uuid": "5190f760-c550-40a5-91ec-6d83d2f18010", 00:14:07.134 "strip_size_kb": 64, 00:14:07.134 "state": "online", 00:14:07.134 "raid_level": "raid0", 00:14:07.134 "superblock": true, 00:14:07.134 "num_base_bdevs": 2, 00:14:07.134 "num_base_bdevs_discovered": 2, 00:14:07.134 "num_base_bdevs_operational": 2, 00:14:07.134 "base_bdevs_list": [ 00:14:07.134 { 00:14:07.134 "name": "pt1", 00:14:07.134 "uuid": "d54f4bd6-b055-5aa0-9ab8-6393bf69dabe", 00:14:07.134 "is_configured": true, 00:14:07.134 "data_offset": 2048, 00:14:07.134 "data_size": 63488 00:14:07.134 }, 00:14:07.134 { 00:14:07.134 "name": "pt2", 00:14:07.134 "uuid": "69a061b8-8412-56f0-bf06-239aaf4bb5b5", 00:14:07.134 "is_configured": true, 00:14:07.134 "data_offset": 2048, 00:14:07.134 "data_size": 63488 00:14:07.134 } 00:14:07.134 ] 00:14:07.134 }' 00:14:07.134 02:37:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.134 02:37:32 -- common/autotest_common.sh@10 -- # set +x 00:14:07.700 02:37:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:07.700 02:37:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:07.958 [2024-07-11 02:37:32.878468] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.958 02:37:32 -- bdev/bdev_raid.sh@430 -- # '[' 5190f760-c550-40a5-91ec-6d83d2f18010 '!=' 5190f760-c550-40a5-91ec-6d83d2f18010 ']' 00:14:07.958 02:37:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:07.958 02:37:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:07.958 02:37:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:07.958 02:37:32 -- bdev/bdev_raid.sh@511 -- # killprocess 125280 00:14:07.958 02:37:32 -- common/autotest_common.sh@926 -- # '[' -z 125280 ']' 00:14:07.958 02:37:32 -- common/autotest_common.sh@930 -- # kill -0 125280 00:14:07.958 02:37:32 -- common/autotest_common.sh@931 -- # uname 00:14:07.958 02:37:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:07.958 02:37:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125280 00:14:07.958 02:37:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:07.958 02:37:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:07.958 02:37:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125280' 00:14:07.958 killing process with pid 125280 00:14:07.958 02:37:32 -- common/autotest_common.sh@945 -- # kill 125280 00:14:07.958 02:37:32 -- common/autotest_common.sh@950 -- # wait 125280 00:14:07.958 [2024-07-11 02:37:32.917487] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.958 [2024-07-11 02:37:32.917774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.958 [2024-07-11 02:37:32.918061] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.958 [2024-07-11 02:37:32.918090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:14:07.958 [2024-07-11 02:37:32.940301] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.217 ************************************ 00:14:08.217 END TEST raid_superblock_test 00:14:08.217 
************************************ 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:08.217 00:14:08.217 real 0m6.960s 00:14:08.217 user 0m12.619s 00:14:08.217 sys 0m0.896s 00:14:08.217 02:37:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.217 02:37:33 -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:08.217 02:37:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:08.217 02:37:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:08.217 02:37:33 -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 ************************************ 00:14:08.217 START TEST raid_state_function_test 00:14:08.217 ************************************ 00:14:08.217 02:37:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=125511 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125511' 00:14:08.217 Process raid pid: 125511 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125511 /var/tmp/spdk-raid.sock 00:14:08.217 02:37:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:08.217 02:37:33 -- common/autotest_common.sh@819 -- # '[' -z 125511 ']' 00:14:08.217 02:37:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
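The raid_superblock_test run that finishes above drives the raid module purely over JSON-RPC: two malloc bdevs are wrapped in passthru bdevs, assembled into a raid0 array with an on-disk superblock (-s), verified online, torn down, and finally the -17 "File exists" negative path is asserted against the leftover superblocks. A minimal standalone sketch of that sequence, assuming a bdev_svc app is already serving /var/tmp/spdk-raid.sock (the rpc.py path is the one from this run; adjust it to your checkout):

#!/usr/bin/env bash
# Sketch of the raid_superblock_test RPC sequence traced above.
set -euo pipefail
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Two 32 MiB malloc bdevs (512-byte blocks), each hidden behind a passthru bdev
rpc bdev_malloc_create 32 512 -b malloc1
rpc bdev_malloc_create 32 512 -b malloc2
rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# raid0 across the passthru bdevs, 64 KiB strip, superblock enabled (-s)
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# The array should come up online with both base bdevs discovered
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

# Tear down: the array first, then the passthru bdevs
rpc bdev_raid_delete raid_bdev1
rpc bdev_passthru_delete pt1
rpc bdev_passthru_delete pt2

Because the superblocks written to malloc1 and malloc2 survive this teardown, a follow-up bdev_raid_create -b 'malloc1 malloc2' fails with -17 "File exists", which is exactly what the NOT wrapper in the trace expects.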
00:14:08.217 02:37:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:08.217 02:37:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:08.217 02:37:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:08.217 02:37:33 -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 [2024-07-11 02:37:33.292553] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:08.217 [2024-07-11 02:37:33.292799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.476 [2024-07-11 02:37:33.436947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.476 [2024-07-11 02:37:33.492794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.476 [2024-07-11 02:37:33.544170] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.407 02:37:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:09.407 02:37:34 -- common/autotest_common.sh@852 -- # return 0 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:09.407 [2024-07-11 02:37:34.469332] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.407 [2024-07-11 02:37:34.469448] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.407 [2024-07-11 02:37:34.469464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.407 [2024-07-11 02:37:34.469487] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.407 02:37:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.665 02:37:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.665 "name": "Existed_Raid", 00:14:09.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.665 "strip_size_kb": 64, 00:14:09.665 "state": "configuring", 00:14:09.665 "raid_level": "concat", 00:14:09.665 "superblock": false, 00:14:09.665 "num_base_bdevs": 2, 00:14:09.665 "num_base_bdevs_discovered": 0, 00:14:09.665 "num_base_bdevs_operational": 2, 00:14:09.665 "base_bdevs_list": [ 00:14:09.665 { 00:14:09.665 "name": "BaseBdev1", 00:14:09.665 
"uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.665 "is_configured": false, 00:14:09.665 "data_offset": 0, 00:14:09.665 "data_size": 0 00:14:09.665 }, 00:14:09.665 { 00:14:09.665 "name": "BaseBdev2", 00:14:09.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.665 "is_configured": false, 00:14:09.665 "data_offset": 0, 00:14:09.665 "data_size": 0 00:14:09.665 } 00:14:09.665 ] 00:14:09.665 }' 00:14:09.665 02:37:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.665 02:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:10.599 02:37:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.599 [2024-07-11 02:37:35.621403] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.599 [2024-07-11 02:37:35.621467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:10.599 02:37:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.858 [2024-07-11 02:37:35.805460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.858 [2024-07-11 02:37:35.805539] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.858 [2024-07-11 02:37:35.805569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.858 [2024-07-11 02:37:35.805596] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.858 02:37:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:11.116 [2024-07-11 02:37:36.020434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.116 BaseBdev1 00:14:11.116 02:37:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:11.116 02:37:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:11.116 02:37:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:11.116 02:37:36 -- common/autotest_common.sh@889 -- # local i 00:14:11.116 02:37:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:11.116 02:37:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:11.116 02:37:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.375 02:37:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.636 [ 00:14:11.636 { 00:14:11.636 "name": "BaseBdev1", 00:14:11.636 "aliases": [ 00:14:11.636 "9a7a1fd6-2296-43f1-a0a3-a453ae1ada74" 00:14:11.636 ], 00:14:11.636 "product_name": "Malloc disk", 00:14:11.636 "block_size": 512, 00:14:11.636 "num_blocks": 65536, 00:14:11.636 "uuid": "9a7a1fd6-2296-43f1-a0a3-a453ae1ada74", 00:14:11.636 "assigned_rate_limits": { 00:14:11.636 "rw_ios_per_sec": 0, 00:14:11.636 "rw_mbytes_per_sec": 0, 00:14:11.636 "r_mbytes_per_sec": 0, 00:14:11.636 "w_mbytes_per_sec": 0 00:14:11.636 }, 00:14:11.636 "claimed": true, 00:14:11.636 "claim_type": "exclusive_write", 00:14:11.636 "zoned": false, 00:14:11.636 "supported_io_types": { 00:14:11.636 "read": true, 00:14:11.636 "write": true, 00:14:11.636 "unmap": true, 00:14:11.636 
"write_zeroes": true, 00:14:11.636 "flush": true, 00:14:11.636 "reset": true, 00:14:11.636 "compare": false, 00:14:11.636 "compare_and_write": false, 00:14:11.636 "abort": true, 00:14:11.636 "nvme_admin": false, 00:14:11.636 "nvme_io": false 00:14:11.636 }, 00:14:11.636 "memory_domains": [ 00:14:11.636 { 00:14:11.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.636 "dma_device_type": 2 00:14:11.636 } 00:14:11.636 ], 00:14:11.636 "driver_specific": {} 00:14:11.636 } 00:14:11.636 ] 00:14:11.636 02:37:36 -- common/autotest_common.sh@895 -- # return 0 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.636 02:37:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.894 02:37:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.894 "name": "Existed_Raid", 00:14:11.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.894 "strip_size_kb": 64, 00:14:11.894 "state": "configuring", 00:14:11.894 "raid_level": "concat", 00:14:11.894 "superblock": false, 00:14:11.894 "num_base_bdevs": 2, 00:14:11.894 "num_base_bdevs_discovered": 1, 00:14:11.894 "num_base_bdevs_operational": 2, 00:14:11.894 "base_bdevs_list": [ 00:14:11.894 { 00:14:11.894 "name": "BaseBdev1", 00:14:11.894 "uuid": "9a7a1fd6-2296-43f1-a0a3-a453ae1ada74", 00:14:11.894 "is_configured": true, 00:14:11.894 "data_offset": 0, 00:14:11.894 "data_size": 65536 00:14:11.894 }, 00:14:11.894 { 00:14:11.894 "name": "BaseBdev2", 00:14:11.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.894 "is_configured": false, 00:14:11.894 "data_offset": 0, 00:14:11.894 "data_size": 0 00:14:11.894 } 00:14:11.894 ] 00:14:11.894 }' 00:14:11.894 02:37:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.894 02:37:36 -- common/autotest_common.sh@10 -- # set +x 00:14:12.461 02:37:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:12.720 [2024-07-11 02:37:37.632773] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.720 [2024-07-11 02:37:37.632861] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:14:12.720 02:37:37 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:12.720 02:37:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:12.978 [2024-07-11 02:37:37.872881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.978 [2024-07-11 
02:37:37.874751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.978 [2024-07-11 02:37:37.874833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.978 02:37:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.237 02:37:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.237 "name": "Existed_Raid", 00:14:13.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.237 "strip_size_kb": 64, 00:14:13.237 "state": "configuring", 00:14:13.237 "raid_level": "concat", 00:14:13.237 "superblock": false, 00:14:13.237 "num_base_bdevs": 2, 00:14:13.237 "num_base_bdevs_discovered": 1, 00:14:13.237 "num_base_bdevs_operational": 2, 00:14:13.237 "base_bdevs_list": [ 00:14:13.237 { 00:14:13.237 "name": "BaseBdev1", 00:14:13.237 "uuid": "9a7a1fd6-2296-43f1-a0a3-a453ae1ada74", 00:14:13.237 "is_configured": true, 00:14:13.237 "data_offset": 0, 00:14:13.237 "data_size": 65536 00:14:13.237 }, 00:14:13.237 { 00:14:13.237 "name": "BaseBdev2", 00:14:13.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.237 "is_configured": false, 00:14:13.237 "data_offset": 0, 00:14:13.237 "data_size": 0 00:14:13.237 } 00:14:13.237 ] 00:14:13.237 }' 00:14:13.237 02:37:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.237 02:37:38 -- common/autotest_common.sh@10 -- # set +x 00:14:13.804 02:37:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.063 [2024-07-11 02:37:38.953550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.063 [2024-07-11 02:37:38.953670] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:14:14.063 [2024-07-11 02:37:38.953689] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:14.063 [2024-07-11 02:37:38.953959] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:14:14.063 [2024-07-11 02:37:38.954572] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:14:14.063 [2024-07-11 02:37:38.954604] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:14:14.063 BaseBdev2 00:14:14.063 [2024-07-11 02:37:38.955008] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.063 02:37:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:14.063 02:37:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:14.063 02:37:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:14.063 02:37:38 -- common/autotest_common.sh@889 -- # local i 00:14:14.063 02:37:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:14.063 02:37:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:14.063 02:37:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.063 02:37:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.321 [ 00:14:14.321 { 00:14:14.321 "name": "BaseBdev2", 00:14:14.322 "aliases": [ 00:14:14.322 "aaea5a24-82a5-44c7-ae13-22f2835470f8" 00:14:14.322 ], 00:14:14.322 "product_name": "Malloc disk", 00:14:14.322 "block_size": 512, 00:14:14.322 "num_blocks": 65536, 00:14:14.322 "uuid": "aaea5a24-82a5-44c7-ae13-22f2835470f8", 00:14:14.322 "assigned_rate_limits": { 00:14:14.322 "rw_ios_per_sec": 0, 00:14:14.322 "rw_mbytes_per_sec": 0, 00:14:14.322 "r_mbytes_per_sec": 0, 00:14:14.322 "w_mbytes_per_sec": 0 00:14:14.322 }, 00:14:14.322 "claimed": true, 00:14:14.322 "claim_type": "exclusive_write", 00:14:14.322 "zoned": false, 00:14:14.322 "supported_io_types": { 00:14:14.322 "read": true, 00:14:14.322 "write": true, 00:14:14.322 "unmap": true, 00:14:14.322 "write_zeroes": true, 00:14:14.322 "flush": true, 00:14:14.322 "reset": true, 00:14:14.322 "compare": false, 00:14:14.322 "compare_and_write": false, 00:14:14.322 "abort": true, 00:14:14.322 "nvme_admin": false, 00:14:14.322 "nvme_io": false 00:14:14.322 }, 00:14:14.322 "memory_domains": [ 00:14:14.322 { 00:14:14.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.322 "dma_device_type": 2 00:14:14.322 } 00:14:14.322 ], 00:14:14.322 "driver_specific": {} 00:14:14.322 } 00:14:14.322 ] 00:14:14.322 02:37:39 -- common/autotest_common.sh@895 -- # return 0 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.322 02:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.581 02:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.581 "name": "Existed_Raid", 00:14:14.581 "uuid": "d62ce6e3-8c59-4cf9-b04f-84db7aeae692", 00:14:14.581 "strip_size_kb": 64, 
00:14:14.581 "state": "online", 00:14:14.581 "raid_level": "concat", 00:14:14.581 "superblock": false, 00:14:14.581 "num_base_bdevs": 2, 00:14:14.581 "num_base_bdevs_discovered": 2, 00:14:14.581 "num_base_bdevs_operational": 2, 00:14:14.581 "base_bdevs_list": [ 00:14:14.581 { 00:14:14.581 "name": "BaseBdev1", 00:14:14.581 "uuid": "9a7a1fd6-2296-43f1-a0a3-a453ae1ada74", 00:14:14.581 "is_configured": true, 00:14:14.581 "data_offset": 0, 00:14:14.581 "data_size": 65536 00:14:14.581 }, 00:14:14.581 { 00:14:14.581 "name": "BaseBdev2", 00:14:14.581 "uuid": "aaea5a24-82a5-44c7-ae13-22f2835470f8", 00:14:14.581 "is_configured": true, 00:14:14.581 "data_offset": 0, 00:14:14.581 "data_size": 65536 00:14:14.581 } 00:14:14.581 ] 00:14:14.581 }' 00:14:14.581 02:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.581 02:37:39 -- common/autotest_common.sh@10 -- # set +x 00:14:15.148 02:37:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.407 [2024-07-11 02:37:40.414035] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.407 [2024-07-11 02:37:40.414074] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.407 [2024-07-11 02:37:40.414190] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.407 02:37:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.666 02:37:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.666 "name": "Existed_Raid", 00:14:15.666 "uuid": "d62ce6e3-8c59-4cf9-b04f-84db7aeae692", 00:14:15.666 "strip_size_kb": 64, 00:14:15.666 "state": "offline", 00:14:15.666 "raid_level": "concat", 00:14:15.666 "superblock": false, 00:14:15.666 "num_base_bdevs": 2, 00:14:15.666 "num_base_bdevs_discovered": 1, 00:14:15.666 "num_base_bdevs_operational": 1, 00:14:15.666 "base_bdevs_list": [ 00:14:15.666 { 00:14:15.666 "name": null, 00:14:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.666 "is_configured": false, 00:14:15.666 "data_offset": 0, 00:14:15.666 "data_size": 65536 00:14:15.666 }, 00:14:15.666 { 00:14:15.666 "name": "BaseBdev2", 00:14:15.666 "uuid": 
"aaea5a24-82a5-44c7-ae13-22f2835470f8", 00:14:15.666 "is_configured": true, 00:14:15.666 "data_offset": 0, 00:14:15.666 "data_size": 65536 00:14:15.666 } 00:14:15.666 ] 00:14:15.666 }' 00:14:15.666 02:37:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.666 02:37:40 -- common/autotest_common.sh@10 -- # set +x 00:14:16.233 02:37:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:16.233 02:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:16.233 02:37:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.233 02:37:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:16.491 02:37:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:16.491 02:37:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.491 02:37:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:16.749 [2024-07-11 02:37:41.662109] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.749 [2024-07-11 02:37:41.662185] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:14:16.749 02:37:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:16.749 02:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:16.749 02:37:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:16.749 02:37:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.008 02:37:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:17.008 02:37:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:17.008 02:37:41 -- bdev/bdev_raid.sh@287 -- # killprocess 125511 00:14:17.008 02:37:41 -- common/autotest_common.sh@926 -- # '[' -z 125511 ']' 00:14:17.008 02:37:41 -- common/autotest_common.sh@930 -- # kill -0 125511 00:14:17.008 02:37:41 -- common/autotest_common.sh@931 -- # uname 00:14:17.008 02:37:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:17.008 02:37:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125511 00:14:17.008 killing process with pid 125511 00:14:17.008 02:37:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:17.008 02:37:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:17.008 02:37:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125511' 00:14:17.008 02:37:41 -- common/autotest_common.sh@945 -- # kill 125511 00:14:17.008 02:37:41 -- common/autotest_common.sh@950 -- # wait 125511 00:14:17.008 [2024-07-11 02:37:41.948784] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.008 [2024-07-11 02:37:41.948896] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.267 ************************************ 00:14:17.267 END TEST raid_state_function_test 00:14:17.267 ************************************ 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:17.267 00:14:17.267 real 0m8.926s 00:14:17.267 user 0m16.503s 00:14:17.267 sys 0m0.977s 00:14:17.267 02:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.267 02:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:17.267 02:37:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:17.267 
02:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:17.267 02:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:17.267 ************************************ 00:14:17.267 START TEST raid_state_function_test_sb 00:14:17.267 ************************************ 00:14:17.267 02:37:42 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:17.267 Process raid pid: 125833 00:14:17.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=125833 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125833' 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125833 /var/tmp/spdk-raid.sock 00:14:17.267 02:37:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:17.267 02:37:42 -- common/autotest_common.sh@819 -- # '[' -z 125833 ']' 00:14:17.267 02:37:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:17.268 02:37:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:17.268 02:37:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:17.268 02:37:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:17.268 02:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 [2024-07-11 02:37:42.269826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
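Both state-function tests rely on the verify_raid_bdev_state helper whose locals (raid_bdev_name, expected_state, raid_level, strip_size, num_base_bdevs_operational) recur throughout the trace: it dumps every raid bdev over RPC and selects the one under test with jq. Only the locals and the jq select are visible above, so the comparison step below is a hedged reconstruction of the helper's intent rather than a copy of bdev_raid.sh; the field names come from the JSON blobs printed in the trace:

# Hedged reconstruction of verify_raid_bdev_state (comparison logic assumed).
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

verify_raid_bdev_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
    local info
    # Fetch the named raid bdev's info, as the trace does at bdev_raid.sh@127
    info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [ "$(jq -r '.state' <<<"$info")" = "$expected_state" ] &&
        [ "$(jq -r '.raid_level' <<<"$info")" = "$raid_level" ] &&
        [ "$(jq -r '.strip_size_kb' <<<"$info")" -eq "$strip_size" ] &&
        [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" -eq "$operational" ]
}

# e.g. right after bdev_raid_create names base bdevs that do not exist yet:
verify_raid_bdev_state Existed_Raid configuring concat 64 2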
00:14:17.268 [2024-07-11 02:37:42.270080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.525 [2024-07-11 02:37:42.420672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.525 [2024-07-11 02:37:42.495316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.525 [2024-07-11 02:37:42.552495] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.459 02:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:18.459 02:37:43 -- common/autotest_common.sh@852 -- # return 0 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:18.459 [2024-07-11 02:37:43.443832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.459 [2024-07-11 02:37:43.443927] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.459 [2024-07-11 02:37:43.443943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.459 [2024-07-11 02:37:43.443963] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.459 02:37:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.727 02:37:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.727 "name": "Existed_Raid", 00:14:18.727 "uuid": "48a23c8d-be89-4e37-a7ce-e1b5781a7a88", 00:14:18.727 "strip_size_kb": 64, 00:14:18.727 "state": "configuring", 00:14:18.727 "raid_level": "concat", 00:14:18.727 "superblock": true, 00:14:18.727 "num_base_bdevs": 2, 00:14:18.727 "num_base_bdevs_discovered": 0, 00:14:18.727 "num_base_bdevs_operational": 2, 00:14:18.727 "base_bdevs_list": [ 00:14:18.727 { 00:14:18.727 "name": "BaseBdev1", 00:14:18.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.727 "is_configured": false, 00:14:18.727 "data_offset": 0, 00:14:18.727 "data_size": 0 00:14:18.727 }, 00:14:18.727 { 00:14:18.727 "name": "BaseBdev2", 00:14:18.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.727 "is_configured": false, 00:14:18.727 "data_offset": 0, 00:14:18.727 "data_size": 0 00:14:18.727 } 00:14:18.727 ] 00:14:18.727 }' 00:14:18.727 02:37:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.727 02:37:43 -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.299 02:37:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:19.558 [2024-07-11 02:37:44.607907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.558 [2024-07-11 02:37:44.607975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:19.558 02:37:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:19.816 [2024-07-11 02:37:44.855977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.816 [2024-07-11 02:37:44.856072] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.816 [2024-07-11 02:37:44.856088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.816 [2024-07-11 02:37:44.856115] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.816 02:37:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:20.075 [2024-07-11 02:37:45.094470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.075 BaseBdev1 00:14:20.075 02:37:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:20.075 02:37:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:20.075 02:37:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:20.075 02:37:45 -- common/autotest_common.sh@889 -- # local i 00:14:20.075 02:37:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:20.075 02:37:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:20.075 02:37:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.334 02:37:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.593 [ 00:14:20.593 { 00:14:20.593 "name": "BaseBdev1", 00:14:20.593 "aliases": [ 00:14:20.593 "bda9d7d3-bbd9-4478-ace5-fdc6a472d955" 00:14:20.593 ], 00:14:20.593 "product_name": "Malloc disk", 00:14:20.593 "block_size": 512, 00:14:20.593 "num_blocks": 65536, 00:14:20.593 "uuid": "bda9d7d3-bbd9-4478-ace5-fdc6a472d955", 00:14:20.593 "assigned_rate_limits": { 00:14:20.593 "rw_ios_per_sec": 0, 00:14:20.593 "rw_mbytes_per_sec": 0, 00:14:20.593 "r_mbytes_per_sec": 0, 00:14:20.593 "w_mbytes_per_sec": 0 00:14:20.593 }, 00:14:20.593 "claimed": true, 00:14:20.593 "claim_type": "exclusive_write", 00:14:20.593 "zoned": false, 00:14:20.593 "supported_io_types": { 00:14:20.593 "read": true, 00:14:20.593 "write": true, 00:14:20.593 "unmap": true, 00:14:20.593 "write_zeroes": true, 00:14:20.593 "flush": true, 00:14:20.593 "reset": true, 00:14:20.593 "compare": false, 00:14:20.593 "compare_and_write": false, 00:14:20.593 "abort": true, 00:14:20.593 "nvme_admin": false, 00:14:20.593 "nvme_io": false 00:14:20.593 }, 00:14:20.593 "memory_domains": [ 00:14:20.593 { 00:14:20.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.593 "dma_device_type": 2 00:14:20.593 } 00:14:20.593 ], 00:14:20.593 "driver_specific": {} 00:14:20.593 } 00:14:20.593 ] 00:14:20.593 
02:37:45 -- common/autotest_common.sh@895 -- # return 0 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.593 02:37:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.852 02:37:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.852 "name": "Existed_Raid", 00:14:20.852 "uuid": "7817d7bd-4415-48d7-a7a8-af6e83a06400", 00:14:20.852 "strip_size_kb": 64, 00:14:20.852 "state": "configuring", 00:14:20.852 "raid_level": "concat", 00:14:20.852 "superblock": true, 00:14:20.852 "num_base_bdevs": 2, 00:14:20.852 "num_base_bdevs_discovered": 1, 00:14:20.852 "num_base_bdevs_operational": 2, 00:14:20.852 "base_bdevs_list": [ 00:14:20.852 { 00:14:20.852 "name": "BaseBdev1", 00:14:20.852 "uuid": "bda9d7d3-bbd9-4478-ace5-fdc6a472d955", 00:14:20.852 "is_configured": true, 00:14:20.852 "data_offset": 2048, 00:14:20.852 "data_size": 63488 00:14:20.852 }, 00:14:20.852 { 00:14:20.852 "name": "BaseBdev2", 00:14:20.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.852 "is_configured": false, 00:14:20.852 "data_offset": 0, 00:14:20.852 "data_size": 0 00:14:20.852 } 00:14:20.852 ] 00:14:20.852 }' 00:14:20.852 02:37:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.852 02:37:45 -- common/autotest_common.sh@10 -- # set +x 00:14:21.419 02:37:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:21.678 [2024-07-11 02:37:46.654800] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.678 [2024-07-11 02:37:46.654880] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:14:21.678 02:37:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:21.678 02:37:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:21.937 02:37:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.195 BaseBdev1 00:14:22.195 02:37:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:22.195 02:37:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:22.195 02:37:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:22.195 02:37:47 -- common/autotest_common.sh@889 -- # local i 00:14:22.195 02:37:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:22.195 02:37:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:22.195 02:37:47 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.455 02:37:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.455 [ 00:14:22.455 { 00:14:22.455 "name": "BaseBdev1", 00:14:22.455 "aliases": [ 00:14:22.455 "75d6104f-0926-4f7a-a66c-eacccd23c8fa" 00:14:22.455 ], 00:14:22.455 "product_name": "Malloc disk", 00:14:22.455 "block_size": 512, 00:14:22.455 "num_blocks": 65536, 00:14:22.455 "uuid": "75d6104f-0926-4f7a-a66c-eacccd23c8fa", 00:14:22.455 "assigned_rate_limits": { 00:14:22.455 "rw_ios_per_sec": 0, 00:14:22.455 "rw_mbytes_per_sec": 0, 00:14:22.455 "r_mbytes_per_sec": 0, 00:14:22.455 "w_mbytes_per_sec": 0 00:14:22.455 }, 00:14:22.455 "claimed": false, 00:14:22.455 "zoned": false, 00:14:22.455 "supported_io_types": { 00:14:22.455 "read": true, 00:14:22.455 "write": true, 00:14:22.455 "unmap": true, 00:14:22.455 "write_zeroes": true, 00:14:22.455 "flush": true, 00:14:22.455 "reset": true, 00:14:22.455 "compare": false, 00:14:22.455 "compare_and_write": false, 00:14:22.455 "abort": true, 00:14:22.455 "nvme_admin": false, 00:14:22.455 "nvme_io": false 00:14:22.455 }, 00:14:22.455 "memory_domains": [ 00:14:22.455 { 00:14:22.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.455 "dma_device_type": 2 00:14:22.455 } 00:14:22.455 ], 00:14:22.455 "driver_specific": {} 00:14:22.455 } 00:14:22.455 ] 00:14:22.455 02:37:47 -- common/autotest_common.sh@895 -- # return 0 00:14:22.455 02:37:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:22.714 [2024-07-11 02:37:47.723646] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.714 [2024-07-11 02:37:47.725366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.714 [2024-07-11 02:37:47.725430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.714 02:37:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.973 02:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.973 "name": "Existed_Raid", 00:14:22.973 "uuid": "59251bbd-ad4a-4acf-a20a-ff1efa5912ad", 00:14:22.973 "strip_size_kb": 64, 00:14:22.973 "state": 
"configuring", 00:14:22.973 "raid_level": "concat", 00:14:22.973 "superblock": true, 00:14:22.973 "num_base_bdevs": 2, 00:14:22.973 "num_base_bdevs_discovered": 1, 00:14:22.973 "num_base_bdevs_operational": 2, 00:14:22.973 "base_bdevs_list": [ 00:14:22.973 { 00:14:22.973 "name": "BaseBdev1", 00:14:22.973 "uuid": "75d6104f-0926-4f7a-a66c-eacccd23c8fa", 00:14:22.973 "is_configured": true, 00:14:22.973 "data_offset": 2048, 00:14:22.973 "data_size": 63488 00:14:22.973 }, 00:14:22.973 { 00:14:22.973 "name": "BaseBdev2", 00:14:22.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.973 "is_configured": false, 00:14:22.973 "data_offset": 0, 00:14:22.973 "data_size": 0 00:14:22.973 } 00:14:22.973 ] 00:14:22.973 }' 00:14:22.973 02:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.973 02:37:47 -- common/autotest_common.sh@10 -- # set +x 00:14:23.540 02:37:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.798 [2024-07-11 02:37:48.838021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.798 [2024-07-11 02:37:48.838389] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:14:23.798 [2024-07-11 02:37:48.838428] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:23.798 [2024-07-11 02:37:48.838694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:23.798 BaseBdev2 00:14:23.799 [2024-07-11 02:37:48.839418] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:14:23.799 [2024-07-11 02:37:48.839464] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:14:23.799 [2024-07-11 02:37:48.839745] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.799 02:37:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:23.799 02:37:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:23.799 02:37:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:23.799 02:37:48 -- common/autotest_common.sh@889 -- # local i 00:14:23.799 02:37:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:23.799 02:37:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:23.799 02:37:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.056 02:37:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.314 [ 00:14:24.314 { 00:14:24.314 "name": "BaseBdev2", 00:14:24.314 "aliases": [ 00:14:24.314 "e2a09d82-55fe-46be-99d4-b6763d06a208" 00:14:24.314 ], 00:14:24.314 "product_name": "Malloc disk", 00:14:24.314 "block_size": 512, 00:14:24.314 "num_blocks": 65536, 00:14:24.314 "uuid": "e2a09d82-55fe-46be-99d4-b6763d06a208", 00:14:24.314 "assigned_rate_limits": { 00:14:24.314 "rw_ios_per_sec": 0, 00:14:24.314 "rw_mbytes_per_sec": 0, 00:14:24.314 "r_mbytes_per_sec": 0, 00:14:24.314 "w_mbytes_per_sec": 0 00:14:24.314 }, 00:14:24.314 "claimed": true, 00:14:24.314 "claim_type": "exclusive_write", 00:14:24.314 "zoned": false, 00:14:24.314 "supported_io_types": { 00:14:24.314 "read": true, 00:14:24.314 "write": true, 00:14:24.314 "unmap": true, 00:14:24.314 "write_zeroes": true, 00:14:24.314 "flush": true, 00:14:24.314 
"reset": true, 00:14:24.314 "compare": false, 00:14:24.314 "compare_and_write": false, 00:14:24.314 "abort": true, 00:14:24.314 "nvme_admin": false, 00:14:24.314 "nvme_io": false 00:14:24.314 }, 00:14:24.314 "memory_domains": [ 00:14:24.314 { 00:14:24.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.314 "dma_device_type": 2 00:14:24.314 } 00:14:24.314 ], 00:14:24.314 "driver_specific": {} 00:14:24.314 } 00:14:24.314 ] 00:14:24.314 02:37:49 -- common/autotest_common.sh@895 -- # return 0 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.314 02:37:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.572 02:37:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.572 "name": "Existed_Raid", 00:14:24.572 "uuid": "59251bbd-ad4a-4acf-a20a-ff1efa5912ad", 00:14:24.572 "strip_size_kb": 64, 00:14:24.572 "state": "online", 00:14:24.572 "raid_level": "concat", 00:14:24.572 "superblock": true, 00:14:24.572 "num_base_bdevs": 2, 00:14:24.572 "num_base_bdevs_discovered": 2, 00:14:24.572 "num_base_bdevs_operational": 2, 00:14:24.572 "base_bdevs_list": [ 00:14:24.572 { 00:14:24.572 "name": "BaseBdev1", 00:14:24.572 "uuid": "75d6104f-0926-4f7a-a66c-eacccd23c8fa", 00:14:24.572 "is_configured": true, 00:14:24.572 "data_offset": 2048, 00:14:24.572 "data_size": 63488 00:14:24.572 }, 00:14:24.572 { 00:14:24.572 "name": "BaseBdev2", 00:14:24.572 "uuid": "e2a09d82-55fe-46be-99d4-b6763d06a208", 00:14:24.572 "is_configured": true, 00:14:24.572 "data_offset": 2048, 00:14:24.572 "data_size": 63488 00:14:24.572 } 00:14:24.572 ] 00:14:24.572 }' 00:14:24.572 02:37:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.572 02:37:49 -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.507 [2024-07-11 02:37:50.482453] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.507 [2024-07-11 02:37:50.482490] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.507 [2024-07-11 02:37:50.482592] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:25.507 
02:37:50 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.507 02:37:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.765 02:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.765 "name": "Existed_Raid", 00:14:25.765 "uuid": "59251bbd-ad4a-4acf-a20a-ff1efa5912ad", 00:14:25.765 "strip_size_kb": 64, 00:14:25.765 "state": "offline", 00:14:25.765 "raid_level": "concat", 00:14:25.765 "superblock": true, 00:14:25.765 "num_base_bdevs": 2, 00:14:25.765 "num_base_bdevs_discovered": 1, 00:14:25.765 "num_base_bdevs_operational": 1, 00:14:25.765 "base_bdevs_list": [ 00:14:25.765 { 00:14:25.765 "name": null, 00:14:25.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.765 "is_configured": false, 00:14:25.765 "data_offset": 2048, 00:14:25.765 "data_size": 63488 00:14:25.765 }, 00:14:25.765 { 00:14:25.765 "name": "BaseBdev2", 00:14:25.765 "uuid": "e2a09d82-55fe-46be-99d4-b6763d06a208", 00:14:25.765 "is_configured": true, 00:14:25.765 "data_offset": 2048, 00:14:25.765 "data_size": 63488 00:14:25.765 } 00:14:25.765 ] 00:14:25.765 }' 00:14:25.765 02:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.765 02:37:50 -- common/autotest_common.sh@10 -- # set +x 00:14:26.353 02:37:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:26.353 02:37:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:26.353 02:37:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.353 02:37:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:26.611 02:37:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:26.611 02:37:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.611 02:37:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:26.868 [2024-07-11 02:37:51.822087] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.868 [2024-07-11 02:37:51.822235] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:14:26.868 02:37:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:26.868 02:37:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:26.868 02:37:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.868 02:37:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.126 02:37:52 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:27.126 02:37:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:27.126 02:37:52 -- bdev/bdev_raid.sh@287 -- # killprocess 125833 00:14:27.126 02:37:52 -- common/autotest_common.sh@926 -- # '[' -z 125833 ']' 00:14:27.126 02:37:52 -- common/autotest_common.sh@930 -- # kill -0 125833 00:14:27.126 02:37:52 -- common/autotest_common.sh@931 -- # uname 00:14:27.126 02:37:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:27.126 02:37:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125833 00:14:27.126 killing process with pid 125833 00:14:27.126 02:37:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:27.126 02:37:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:27.126 02:37:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125833' 00:14:27.126 02:37:52 -- common/autotest_common.sh@945 -- # kill 125833 00:14:27.126 02:37:52 -- common/autotest_common.sh@950 -- # wait 125833 00:14:27.126 [2024-07-11 02:37:52.094325] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.126 [2024-07-11 02:37:52.094428] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.384 ************************************ 00:14:27.384 END TEST raid_state_function_test_sb 00:14:27.384 ************************************ 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:27.384 00:14:27.384 real 0m10.109s 00:14:27.384 user 0m18.600s 00:14:27.384 sys 0m1.190s 00:14:27.384 02:37:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.384 02:37:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:27.384 02:37:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:27.384 02:37:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.384 02:37:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.384 ************************************ 00:14:27.384 START TEST raid_superblock_test 00:14:27.384 ************************************ 00:14:27.384 02:37:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=126172 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126172 
/var/tmp/spdk-raid.sock 00:14:27.384 02:37:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:27.384 02:37:52 -- common/autotest_common.sh@819 -- # '[' -z 126172 ']' 00:14:27.384 02:37:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:27.384 02:37:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.384 02:37:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:27.384 02:37:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.384 02:37:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.384 [2024-07-11 02:37:52.433421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:27.384 [2024-07-11 02:37:52.433704] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126172 ] 00:14:27.643 [2024-07-11 02:37:52.572790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.643 [2024-07-11 02:37:52.633812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.643 [2024-07-11 02:37:52.685595] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.577 02:37:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.577 02:37:53 -- common/autotest_common.sh@852 -- # return 0 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:28.577 malloc1 00:14:28.577 02:37:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:28.835 [2024-07-11 02:37:53.756208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:28.835 [2024-07-11 02:37:53.756310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.835 [2024-07-11 02:37:53.756350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:14:28.835 [2024-07-11 02:37:53.756396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.835 [2024-07-11 02:37:53.758515] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.835 [2024-07-11 02:37:53.758579] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.835 pt1 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
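Note on the stacking above: raid_superblock_test layers a passthru bdev (pt1) over the malloc so that the raid superblock written through the passthru lands on the underlying malloc1 and survives the passthru being deleted later in the run. A minimal sketch of the per-member setup, with the rpc.py path, socket, and UUID taken verbatim from this log (the RPC shorthand is introduced here for brevity and is not a variable the test script defines):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # 32 MB malloc backing store with 512-byte blocks (65536 blocks), then a passthru on top
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001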
00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:28.835 02:37:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:29.093 malloc2 00:14:29.093 02:37:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.093 [2024-07-11 02:37:54.130213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.093 [2024-07-11 02:37:54.130284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.093 [2024-07-11 02:37:54.130321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:29.093 [2024-07-11 02:37:54.130362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.093 [2024-07-11 02:37:54.132368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.093 [2024-07-11 02:37:54.132414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.093 pt2 00:14:29.093 02:37:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:29.093 02:37:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:29.093 02:37:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:29.351 [2024-07-11 02:37:54.318295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.351 [2024-07-11 02:37:54.320077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.351 [2024-07-11 02:37:54.320279] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:29.351 [2024-07-11 02:37:54.320295] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:29.351 [2024-07-11 02:37:54.320491] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:29.351 [2024-07-11 02:37:54.320920] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:29.351 [2024-07-11 02:37:54.320945] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006f80 00:14:29.351 [2024-07-11 02:37:54.321123] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
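The -s flag on bdev_raid_create is the point of this test: it asks the raid module to persist a superblock on every member, which is what later lets the array reassemble from bdev examine alone. A sketch of the create-and-verify pair as issued above (same assumed RPC shorthand):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # -z 64: 64 KiB strip size; -r concat: raid level; -s: write a superblock to pt1 and pt2
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect: online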
00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.351 02:37:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.609 02:37:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.609 "name": "raid_bdev1", 00:14:29.609 "uuid": "f5dba95a-6989-42f2-b60a-df4185556bce", 00:14:29.609 "strip_size_kb": 64, 00:14:29.609 "state": "online", 00:14:29.609 "raid_level": "concat", 00:14:29.609 "superblock": true, 00:14:29.609 "num_base_bdevs": 2, 00:14:29.609 "num_base_bdevs_discovered": 2, 00:14:29.609 "num_base_bdevs_operational": 2, 00:14:29.609 "base_bdevs_list": [ 00:14:29.609 { 00:14:29.609 "name": "pt1", 00:14:29.609 "uuid": "4bb55123-b9d5-5bfe-8195-653c76357bd0", 00:14:29.609 "is_configured": true, 00:14:29.609 "data_offset": 2048, 00:14:29.609 "data_size": 63488 00:14:29.609 }, 00:14:29.609 { 00:14:29.609 "name": "pt2", 00:14:29.609 "uuid": "5f8c2d59-2123-575a-ac5d-e5c4acbc3726", 00:14:29.609 "is_configured": true, 00:14:29.609 "data_offset": 2048, 00:14:29.609 "data_size": 63488 00:14:29.609 } 00:14:29.609 ] 00:14:29.609 }' 00:14:29.609 02:37:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.609 02:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.175 02:37:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.175 02:37:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:30.432 [2024-07-11 02:37:55.410680] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.432 02:37:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f5dba95a-6989-42f2-b60a-df4185556bce 00:14:30.433 02:37:55 -- bdev/bdev_raid.sh@380 -- # '[' -z f5dba95a-6989-42f2-b60a-df4185556bce ']' 00:14:30.433 02:37:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:30.690 [2024-07-11 02:37:55.650490] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.690 [2024-07-11 02:37:55.650538] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.690 [2024-07-11 02:37:55.650669] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.690 [2024-07-11 02:37:55.650775] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.690 [2024-07-11 02:37:55.650791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid_bdev1, state offline 00:14:30.690 02:37:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.690 02:37:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:30.948 02:37:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:30.948 02:37:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:30.948 02:37:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.948 02:37:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
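The teardown that follows (bdev_raid_delete, then both bdev_passthru_delete calls) removes the array object and the passthrus but leaves the superblocks sitting on malloc1/malloc2, which drives the two checks in the next stretch of the log: creating a new array directly on the mallocs is refused with -17 "File exists" because stale superblocks are found, while re-creating pt1/pt2 re-runs examine, finds the superblocks, and reassembles raid_bdev1 under its original uuid with no explicit bdev_raid_create. A sketch of that sequence (commands as they appear below; NOT is the autotest_common.sh helper that asserts the wrapped command fails):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $RPC bdev_raid_delete raid_bdev1
  $RPC bdev_passthru_delete pt1
  $RPC bdev_passthru_delete pt2
  # refused: both mallocs still carry the old raid superblock
  NOT $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
  # examine on the fresh passthrus finds the superblocks and rebuilds the array
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'   # same uuid as before the teardown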
00:14:31.206 02:37:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.206 02:37:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:31.466 02:37:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:31.466 02:37:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:31.466 02:37:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:31.466 02:37:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:31.466 02:37:56 -- common/autotest_common.sh@640 -- # local es=0 00:14:31.466 02:37:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:31.466 02:37:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.466 02:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.466 02:37:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.466 02:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.466 02:37:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.466 02:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.466 02:37:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.466 02:37:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:31.466 02:37:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:31.725 [2024-07-11 02:37:56.739396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:31.725 [2024-07-11 02:37:56.741290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:31.725 [2024-07-11 02:37:56.741385] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:31.725 [2024-07-11 02:37:56.741466] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:31.725 [2024-07-11 02:37:56.741524] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.725 [2024-07-11 02:37:56.741569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid_bdev1, state configuring 00:14:31.725 request: 00:14:31.725 { 00:14:31.725 "name": "raid_bdev1", 00:14:31.725 "raid_level": "concat", 00:14:31.725 "base_bdevs": [ 00:14:31.725 "malloc1", 00:14:31.725 "malloc2" 00:14:31.725 ], 00:14:31.725 "superblock": false, 00:14:31.725 "strip_size_kb": 64, 00:14:31.725 "method": "bdev_raid_create", 00:14:31.725 "req_id": 1 00:14:31.725 } 00:14:31.725 Got JSON-RPC error response 00:14:31.725 response: 00:14:31.725 { 00:14:31.725 "code": -17, 00:14:31.725 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:31.725 } 00:14:31.725 02:37:56 -- common/autotest_common.sh@643 -- # es=1 00:14:31.725 02:37:56 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:31.725 02:37:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:31.725 02:37:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:31.725 02:37:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.725 02:37:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:31.984 02:37:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:31.984 02:37:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:31.984 02:37:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:32.243 [2024-07-11 02:37:57.199395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:32.243 [2024-07-11 02:37:57.199497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.243 [2024-07-11 02:37:57.199539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:32.243 [2024-07-11 02:37:57.199567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.243 [2024-07-11 02:37:57.201785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.243 [2024-07-11 02:37:57.201848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:32.243 [2024-07-11 02:37:57.201934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:32.243 [2024-07-11 02:37:57.202055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:32.243 pt1 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.243 02:37:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.501 02:37:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.501 "name": "raid_bdev1", 00:14:32.501 "uuid": "f5dba95a-6989-42f2-b60a-df4185556bce", 00:14:32.501 "strip_size_kb": 64, 00:14:32.501 "state": "configuring", 00:14:32.501 "raid_level": "concat", 00:14:32.501 "superblock": true, 00:14:32.501 "num_base_bdevs": 2, 00:14:32.501 "num_base_bdevs_discovered": 1, 00:14:32.501 "num_base_bdevs_operational": 2, 00:14:32.501 "base_bdevs_list": [ 00:14:32.501 { 00:14:32.501 "name": "pt1", 00:14:32.501 "uuid": "4bb55123-b9d5-5bfe-8195-653c76357bd0", 00:14:32.501 "is_configured": true, 00:14:32.501 "data_offset": 2048, 00:14:32.501 "data_size": 63488 00:14:32.501 }, 00:14:32.501 { 00:14:32.501 "name": null, 00:14:32.501 "uuid": 
"5f8c2d59-2123-575a-ac5d-e5c4acbc3726", 00:14:32.501 "is_configured": false, 00:14:32.501 "data_offset": 2048, 00:14:32.501 "data_size": 63488 00:14:32.501 } 00:14:32.501 ] 00:14:32.501 }' 00:14:32.501 02:37:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.501 02:37:57 -- common/autotest_common.sh@10 -- # set +x 00:14:33.068 02:37:58 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:33.068 02:37:58 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:33.069 02:37:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:33.069 02:37:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.327 [2024-07-11 02:37:58.219678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.327 [2024-07-11 02:37:58.219802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.327 [2024-07-11 02:37:58.219842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:33.327 [2024-07-11 02:37:58.219870] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.327 [2024-07-11 02:37:58.220380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.327 [2024-07-11 02:37:58.220484] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.327 [2024-07-11 02:37:58.220587] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:33.327 [2024-07-11 02:37:58.220641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.327 [2024-07-11 02:37:58.220818] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:14:33.327 [2024-07-11 02:37:58.220838] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.327 [2024-07-11 02:37:58.220914] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:14:33.327 [2024-07-11 02:37:58.221239] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:14:33.327 [2024-07-11 02:37:58.221265] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:14:33.327 [2024-07-11 02:37:58.221378] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.328 pt2 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.328 02:37:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.587 02:37:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:33.587 "name": "raid_bdev1", 00:14:33.587 "uuid": "f5dba95a-6989-42f2-b60a-df4185556bce", 00:14:33.587 "strip_size_kb": 64, 00:14:33.587 "state": "online", 00:14:33.587 "raid_level": "concat", 00:14:33.587 "superblock": true, 00:14:33.587 "num_base_bdevs": 2, 00:14:33.587 "num_base_bdevs_discovered": 2, 00:14:33.587 "num_base_bdevs_operational": 2, 00:14:33.587 "base_bdevs_list": [ 00:14:33.587 { 00:14:33.587 "name": "pt1", 00:14:33.587 "uuid": "4bb55123-b9d5-5bfe-8195-653c76357bd0", 00:14:33.587 "is_configured": true, 00:14:33.587 "data_offset": 2048, 00:14:33.587 "data_size": 63488 00:14:33.587 }, 00:14:33.587 { 00:14:33.587 "name": "pt2", 00:14:33.587 "uuid": "5f8c2d59-2123-575a-ac5d-e5c4acbc3726", 00:14:33.587 "is_configured": true, 00:14:33.587 "data_offset": 2048, 00:14:33.587 "data_size": 63488 00:14:33.587 } 00:14:33.587 ] 00:14:33.587 }' 00:14:33.587 02:37:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:33.587 02:37:58 -- common/autotest_common.sh@10 -- # set +x 00:14:34.154 02:37:59 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:34.154 02:37:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:34.413 [2024-07-11 02:37:59.316024] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.413 02:37:59 -- bdev/bdev_raid.sh@430 -- # '[' f5dba95a-6989-42f2-b60a-df4185556bce '!=' f5dba95a-6989-42f2-b60a-df4185556bce ']' 00:14:34.413 02:37:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:34.413 02:37:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:34.413 02:37:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:34.413 02:37:59 -- bdev/bdev_raid.sh@511 -- # killprocess 126172 00:14:34.413 02:37:59 -- common/autotest_common.sh@926 -- # '[' -z 126172 ']' 00:14:34.413 02:37:59 -- common/autotest_common.sh@930 -- # kill -0 126172 00:14:34.413 02:37:59 -- common/autotest_common.sh@931 -- # uname 00:14:34.413 02:37:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.413 02:37:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126172 00:14:34.413 killing process with pid 126172 00:14:34.413 02:37:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:34.413 02:37:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:34.413 02:37:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126172' 00:14:34.413 02:37:59 -- common/autotest_common.sh@945 -- # kill 126172 00:14:34.413 02:37:59 -- common/autotest_common.sh@950 -- # wait 126172 00:14:34.413 [2024-07-11 02:37:59.347676] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.413 [2024-07-11 02:37:59.347788] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.413 [2024-07-11 02:37:59.347869] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.413 [2024-07-11 02:37:59.347891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:14:34.413 [2024-07-11 02:37:59.367463] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.672 ************************************ 00:14:34.672 END TEST raid_superblock_test 00:14:34.672 
************************************ 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:34.672 00:14:34.672 real 0m7.197s 00:14:34.672 user 0m13.147s 00:14:34.672 sys 0m0.835s 00:14:34.672 02:37:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.672 02:37:59 -- common/autotest_common.sh@10 -- # set +x 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:34.672 02:37:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:34.672 02:37:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.672 02:37:59 -- common/autotest_common.sh@10 -- # set +x 00:14:34.672 ************************************ 00:14:34.672 START TEST raid_state_function_test 00:14:34.672 ************************************ 00:14:34.672 02:37:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=126430 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126430' 00:14:34.672 Process raid pid: 126430 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126430 /var/tmp/spdk-raid.sock 00:14:34.672 02:37:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:34.672 02:37:59 -- common/autotest_common.sh@819 -- # '[' -z 126430 ']' 00:14:34.672 02:37:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
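raid_state_function_test is the same harness parameterized by (raid level, member count, superblock); this pass runs raid_state_function_test raid1 2 false, so no superblock is written and the raid1 redundancy path gets exercised. The member list is generated from the count, as the expansion above shows; a sketch of that construction:

  # expands to: base_bdevs=(BaseBdev1 BaseBdev2)
  num_base_bdevs=2
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))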
00:14:34.672 02:37:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:34.672 02:37:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.672 02:37:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:34.672 02:37:59 -- common/autotest_common.sh@10 -- # set +x 00:14:34.672 [2024-07-11 02:37:59.679636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:34.672 [2024-07-11 02:37:59.679866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.931 [2024-07-11 02:37:59.825179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.931 [2024-07-11 02:37:59.888647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.931 [2024-07-11 02:37:59.940442] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.498 02:38:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:35.498 02:38:00 -- common/autotest_common.sh@852 -- # return 0 00:14:35.498 02:38:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:35.757 [2024-07-11 02:38:00.728750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.757 [2024-07-11 02:38:00.728846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.757 [2024-07-11 02:38:00.728864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.757 [2024-07-11 02:38:00.728884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.757 02:38:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.016 02:38:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.016 "name": "Existed_Raid", 00:14:36.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.016 "strip_size_kb": 0, 00:14:36.016 "state": "configuring", 00:14:36.016 "raid_level": "raid1", 00:14:36.016 "superblock": false, 00:14:36.016 "num_base_bdevs": 2, 00:14:36.016 "num_base_bdevs_discovered": 0, 00:14:36.016 "num_base_bdevs_operational": 2, 00:14:36.016 "base_bdevs_list": [ 00:14:36.016 { 00:14:36.016 "name": "BaseBdev1", 00:14:36.016 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:36.016 "is_configured": false, 00:14:36.016 "data_offset": 0, 00:14:36.016 "data_size": 0 00:14:36.016 }, 00:14:36.016 { 00:14:36.016 "name": "BaseBdev2", 00:14:36.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.016 "is_configured": false, 00:14:36.016 "data_offset": 0, 00:14:36.016 "data_size": 0 00:14:36.016 } 00:14:36.016 ] 00:14:36.016 }' 00:14:36.016 02:38:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.016 02:38:00 -- common/autotest_common.sh@10 -- # set +x 00:14:36.583 02:38:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.841 [2024-07-11 02:38:01.760847] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.841 [2024-07-11 02:38:01.760989] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:36.841 02:38:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:37.100 [2024-07-11 02:38:01.948894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.100 [2024-07-11 02:38:01.949085] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.100 [2024-07-11 02:38:01.949182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.100 [2024-07-11 02:38:01.949239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.100 02:38:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.100 [2024-07-11 02:38:02.143425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.100 BaseBdev1 00:14:37.100 02:38:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:37.100 02:38:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:37.100 02:38:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:37.100 02:38:02 -- common/autotest_common.sh@889 -- # local i 00:14:37.100 02:38:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:37.100 02:38:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:37.100 02:38:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.359 02:38:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.617 [ 00:14:37.617 { 00:14:37.617 "name": "BaseBdev1", 00:14:37.617 "aliases": [ 00:14:37.617 "10308dd0-29ab-4415-b8f0-ff69a22fcf27" 00:14:37.617 ], 00:14:37.617 "product_name": "Malloc disk", 00:14:37.617 "block_size": 512, 00:14:37.617 "num_blocks": 65536, 00:14:37.617 "uuid": "10308dd0-29ab-4415-b8f0-ff69a22fcf27", 00:14:37.617 "assigned_rate_limits": { 00:14:37.617 "rw_ios_per_sec": 0, 00:14:37.617 "rw_mbytes_per_sec": 0, 00:14:37.617 "r_mbytes_per_sec": 0, 00:14:37.617 "w_mbytes_per_sec": 0 00:14:37.617 }, 00:14:37.617 "claimed": true, 00:14:37.617 "claim_type": "exclusive_write", 00:14:37.617 "zoned": false, 00:14:37.617 "supported_io_types": { 00:14:37.617 "read": true, 00:14:37.617 "write": true, 00:14:37.617 "unmap": true, 00:14:37.617 "write_zeroes": true, 
00:14:37.617 "flush": true, 00:14:37.617 "reset": true, 00:14:37.617 "compare": false, 00:14:37.617 "compare_and_write": false, 00:14:37.617 "abort": true, 00:14:37.617 "nvme_admin": false, 00:14:37.617 "nvme_io": false 00:14:37.617 }, 00:14:37.617 "memory_domains": [ 00:14:37.617 { 00:14:37.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.617 "dma_device_type": 2 00:14:37.617 } 00:14:37.617 ], 00:14:37.618 "driver_specific": {} 00:14:37.618 } 00:14:37.618 ] 00:14:37.618 02:38:02 -- common/autotest_common.sh@895 -- # return 0 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.618 02:38:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.876 02:38:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.876 "name": "Existed_Raid", 00:14:37.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.876 "strip_size_kb": 0, 00:14:37.876 "state": "configuring", 00:14:37.876 "raid_level": "raid1", 00:14:37.876 "superblock": false, 00:14:37.876 "num_base_bdevs": 2, 00:14:37.876 "num_base_bdevs_discovered": 1, 00:14:37.876 "num_base_bdevs_operational": 2, 00:14:37.876 "base_bdevs_list": [ 00:14:37.876 { 00:14:37.876 "name": "BaseBdev1", 00:14:37.876 "uuid": "10308dd0-29ab-4415-b8f0-ff69a22fcf27", 00:14:37.876 "is_configured": true, 00:14:37.876 "data_offset": 0, 00:14:37.876 "data_size": 65536 00:14:37.876 }, 00:14:37.876 { 00:14:37.876 "name": "BaseBdev2", 00:14:37.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.876 "is_configured": false, 00:14:37.876 "data_offset": 0, 00:14:37.876 "data_size": 0 00:14:37.876 } 00:14:37.876 ] 00:14:37.876 }' 00:14:37.876 02:38:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.876 02:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:38.443 02:38:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:38.708 [2024-07-11 02:38:03.551828] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.708 [2024-07-11 02:38:03.552013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:14:38.708 02:38:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:38.708 02:38:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:38.983 [2024-07-11 02:38:03.803950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.983 [2024-07-11 02:38:03.805788] bdev.c:8019:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.983 [2024-07-11 02:38:03.805976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.983 02:38:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.983 02:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.983 "name": "Existed_Raid", 00:14:38.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.983 "strip_size_kb": 0, 00:14:38.983 "state": "configuring", 00:14:38.983 "raid_level": "raid1", 00:14:38.983 "superblock": false, 00:14:38.983 "num_base_bdevs": 2, 00:14:38.983 "num_base_bdevs_discovered": 1, 00:14:38.983 "num_base_bdevs_operational": 2, 00:14:38.983 "base_bdevs_list": [ 00:14:38.983 { 00:14:38.983 "name": "BaseBdev1", 00:14:38.983 "uuid": "10308dd0-29ab-4415-b8f0-ff69a22fcf27", 00:14:38.983 "is_configured": true, 00:14:38.983 "data_offset": 0, 00:14:38.983 "data_size": 65536 00:14:38.983 }, 00:14:38.983 { 00:14:38.983 "name": "BaseBdev2", 00:14:38.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.983 "is_configured": false, 00:14:38.983 "data_offset": 0, 00:14:38.983 "data_size": 0 00:14:38.983 } 00:14:38.983 ] 00:14:38.983 }' 00:14:38.983 02:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.983 02:38:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.586 02:38:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.845 [2024-07-11 02:38:04.897230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.845 [2024-07-11 02:38:04.897291] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:14:39.845 [2024-07-11 02:38:04.897302] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:39.845 [2024-07-11 02:38:04.897451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:14:39.845 [2024-07-11 02:38:04.897907] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:14:39.845 [2024-07-11 02:38:04.897931] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:14:39.845 [2024-07-11 02:38:04.898224] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.845 BaseBdev2 
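With BaseBdev2 registered, the raid1 array has both members and goes straight to online, as the verify block below confirms. The contrast with the earlier concat pass comes next: has_redundancy returns 0 for raid1, so after the test deletes BaseBdev1 the expected state stays online with num_base_bdevs_operational dropping to 1, where the same removal under concat took the array offline. A sketch of that check (same assumed RPC shorthand):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $RPC bdev_malloc_delete BaseBdev1   # drop one mirror leg
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_operational)"'   # online 1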
00:14:39.845 02:38:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:39.845 02:38:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:39.845 02:38:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.845 02:38:04 -- common/autotest_common.sh@889 -- # local i 00:14:39.845 02:38:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.845 02:38:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.845 02:38:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.104 02:38:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.362 [ 00:14:40.362 { 00:14:40.362 "name": "BaseBdev2", 00:14:40.362 "aliases": [ 00:14:40.362 "e36abfa2-13b9-4855-a73f-ce8fb99c594a" 00:14:40.362 ], 00:14:40.362 "product_name": "Malloc disk", 00:14:40.362 "block_size": 512, 00:14:40.362 "num_blocks": 65536, 00:14:40.362 "uuid": "e36abfa2-13b9-4855-a73f-ce8fb99c594a", 00:14:40.362 "assigned_rate_limits": { 00:14:40.362 "rw_ios_per_sec": 0, 00:14:40.362 "rw_mbytes_per_sec": 0, 00:14:40.362 "r_mbytes_per_sec": 0, 00:14:40.362 "w_mbytes_per_sec": 0 00:14:40.362 }, 00:14:40.362 "claimed": true, 00:14:40.362 "claim_type": "exclusive_write", 00:14:40.362 "zoned": false, 00:14:40.362 "supported_io_types": { 00:14:40.362 "read": true, 00:14:40.363 "write": true, 00:14:40.363 "unmap": true, 00:14:40.363 "write_zeroes": true, 00:14:40.363 "flush": true, 00:14:40.363 "reset": true, 00:14:40.363 "compare": false, 00:14:40.363 "compare_and_write": false, 00:14:40.363 "abort": true, 00:14:40.363 "nvme_admin": false, 00:14:40.363 "nvme_io": false 00:14:40.363 }, 00:14:40.363 "memory_domains": [ 00:14:40.363 { 00:14:40.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.363 "dma_device_type": 2 00:14:40.363 } 00:14:40.363 ], 00:14:40.363 "driver_specific": {} 00:14:40.363 } 00:14:40.363 ] 00:14:40.363 02:38:05 -- common/autotest_common.sh@895 -- # return 0 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.363 02:38:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.621 02:38:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.621 "name": "Existed_Raid", 00:14:40.621 "uuid": "bd5ce2b7-e457-4420-9cc7-6ff0e8d26b59", 00:14:40.621 "strip_size_kb": 0, 00:14:40.621 "state": "online", 00:14:40.621 "raid_level": "raid1", 
00:14:40.621 "superblock": false, 00:14:40.621 "num_base_bdevs": 2, 00:14:40.621 "num_base_bdevs_discovered": 2, 00:14:40.621 "num_base_bdevs_operational": 2, 00:14:40.621 "base_bdevs_list": [ 00:14:40.621 { 00:14:40.621 "name": "BaseBdev1", 00:14:40.621 "uuid": "10308dd0-29ab-4415-b8f0-ff69a22fcf27", 00:14:40.621 "is_configured": true, 00:14:40.621 "data_offset": 0, 00:14:40.621 "data_size": 65536 00:14:40.621 }, 00:14:40.621 { 00:14:40.621 "name": "BaseBdev2", 00:14:40.621 "uuid": "e36abfa2-13b9-4855-a73f-ce8fb99c594a", 00:14:40.621 "is_configured": true, 00:14:40.622 "data_offset": 0, 00:14:40.622 "data_size": 65536 00:14:40.622 } 00:14:40.622 ] 00:14:40.622 }' 00:14:40.622 02:38:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.622 02:38:05 -- common/autotest_common.sh@10 -- # set +x 00:14:41.189 02:38:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.448 [2024-07-11 02:38:06.401673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.448 02:38:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.706 02:38:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.707 "name": "Existed_Raid", 00:14:41.707 "uuid": "bd5ce2b7-e457-4420-9cc7-6ff0e8d26b59", 00:14:41.707 "strip_size_kb": 0, 00:14:41.707 "state": "online", 00:14:41.707 "raid_level": "raid1", 00:14:41.707 "superblock": false, 00:14:41.707 "num_base_bdevs": 2, 00:14:41.707 "num_base_bdevs_discovered": 1, 00:14:41.707 "num_base_bdevs_operational": 1, 00:14:41.707 "base_bdevs_list": [ 00:14:41.707 { 00:14:41.707 "name": null, 00:14:41.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.707 "is_configured": false, 00:14:41.707 "data_offset": 0, 00:14:41.707 "data_size": 65536 00:14:41.707 }, 00:14:41.707 { 00:14:41.707 "name": "BaseBdev2", 00:14:41.707 "uuid": "e36abfa2-13b9-4855-a73f-ce8fb99c594a", 00:14:41.707 "is_configured": true, 00:14:41.707 "data_offset": 0, 00:14:41.707 "data_size": 65536 00:14:41.707 } 00:14:41.707 ] 00:14:41.707 }' 00:14:41.707 02:38:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.707 02:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:42.274 02:38:07 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:42.274 02:38:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:42.274 02:38:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.274 02:38:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:42.532 02:38:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:42.532 02:38:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.532 02:38:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:42.791 [2024-07-11 02:38:07.787431] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.791 [2024-07-11 02:38:07.787464] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.791 [2024-07-11 02:38:07.787542] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.791 [2024-07-11 02:38:07.796899] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.791 [2024-07-11 02:38:07.796930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:14:42.791 02:38:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:42.791 02:38:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:42.791 02:38:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.791 02:38:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.050 02:38:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:43.050 02:38:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:43.050 02:38:08 -- bdev/bdev_raid.sh@287 -- # killprocess 126430 00:14:43.050 02:38:08 -- common/autotest_common.sh@926 -- # '[' -z 126430 ']' 00:14:43.050 02:38:08 -- common/autotest_common.sh@930 -- # kill -0 126430 00:14:43.050 02:38:08 -- common/autotest_common.sh@931 -- # uname 00:14:43.050 02:38:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.050 02:38:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126430 00:14:43.050 killing process with pid 126430 00:14:43.050 02:38:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:43.050 02:38:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:43.050 02:38:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126430' 00:14:43.050 02:38:08 -- common/autotest_common.sh@945 -- # kill 126430 00:14:43.050 02:38:08 -- common/autotest_common.sh@950 -- # wait 126430 00:14:43.050 [2024-07-11 02:38:08.112870] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.050 [2024-07-11 02:38:08.112978] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.308 ************************************ 00:14:43.308 END TEST raid_state_function_test 00:14:43.308 ************************************ 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:43.308 00:14:43.308 real 0m8.698s 00:14:43.308 user 0m16.122s 00:14:43.308 sys 0m0.916s 00:14:43.308 02:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.308 02:38:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:43.308 02:38:08 -- 
common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:43.308 02:38:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.308 02:38:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.308 ************************************ 00:14:43.308 START TEST raid_state_function_test_sb 00:14:43.308 ************************************ 00:14:43.308 02:38:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=126753 00:14:43.308 Process raid pid: 126753 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126753' 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126753 /var/tmp/spdk-raid.sock 00:14:43.308 02:38:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.308 02:38:08 -- common/autotest_common.sh@819 -- # '[' -z 126753 ']' 00:14:43.308 02:38:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.308 02:38:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:43.308 02:38:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.308 02:38:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:43.308 02:38:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.566 [2024-07-11 02:38:08.433515] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
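Each run_test above boots a fresh SPDK service before any RPCs are issued: bdev_svc starts with bdev_raid debug logging on a dedicated socket, and the harness blocks until that socket answers. A rough equivalent of the startup traced here, using only paths from the log; the readiness poll is a stand-in for the harness's waitforlisten, with rpc_get_methods as a stock SPDK RPC that any live target answers:

svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# launch the minimal bdev service with raid debug traces on a private socket
"$svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!

# poll until the RPC socket accepts requests
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done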
00:14:43.566 [2024-07-11 02:38:08.433767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.566 [2024-07-11 02:38:08.580687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.566 [2024-07-11 02:38:08.645213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.824 [2024-07-11 02:38:08.696403] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.391 02:38:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.391 02:38:09 -- common/autotest_common.sh@852 -- # return 0 00:14:44.391 02:38:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.391 [2024-07-11 02:38:09.481966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.391 [2024-07-11 02:38:09.482538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.391 [2024-07-11 02:38:09.482684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.391 [2024-07-11 02:38:09.482868] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.650 "name": "Existed_Raid", 00:14:44.650 "uuid": "ab889af5-28b4-4b1c-bb06-2e8c9121b64b", 00:14:44.650 "strip_size_kb": 0, 00:14:44.650 "state": "configuring", 00:14:44.650 "raid_level": "raid1", 00:14:44.650 "superblock": true, 00:14:44.650 "num_base_bdevs": 2, 00:14:44.650 "num_base_bdevs_discovered": 0, 00:14:44.650 "num_base_bdevs_operational": 2, 00:14:44.650 "base_bdevs_list": [ 00:14:44.650 { 00:14:44.650 "name": "BaseBdev1", 00:14:44.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.650 "is_configured": false, 00:14:44.650 "data_offset": 0, 00:14:44.650 "data_size": 0 00:14:44.650 }, 00:14:44.650 { 00:14:44.650 "name": "BaseBdev2", 00:14:44.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.650 "is_configured": false, 00:14:44.650 "data_offset": 0, 00:14:44.650 "data_size": 0 00:14:44.650 } 00:14:44.650 ] 00:14:44.650 }' 00:14:44.650 02:38:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.650 02:38:09 -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.217 02:38:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.476 [2024-07-11 02:38:10.494046] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.476 [2024-07-11 02:38:10.494212] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:45.476 02:38:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.733 [2024-07-11 02:38:10.681014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.733 [2024-07-11 02:38:10.681602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.733 [2024-07-11 02:38:10.681812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.733 [2024-07-11 02:38:10.681988] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.733 02:38:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.990 [2024-07-11 02:38:10.879950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.990 BaseBdev1 00:14:45.990 02:38:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:45.990 02:38:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:45.990 02:38:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:45.990 02:38:10 -- common/autotest_common.sh@889 -- # local i 00:14:45.990 02:38:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:45.990 02:38:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:45.990 02:38:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.990 02:38:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.249 [ 00:14:46.249 { 00:14:46.249 "name": "BaseBdev1", 00:14:46.249 "aliases": [ 00:14:46.249 "fcc080ac-9826-49ff-bf87-225e2a0e9cd7" 00:14:46.249 ], 00:14:46.249 "product_name": "Malloc disk", 00:14:46.249 "block_size": 512, 00:14:46.249 "num_blocks": 65536, 00:14:46.249 "uuid": "fcc080ac-9826-49ff-bf87-225e2a0e9cd7", 00:14:46.249 "assigned_rate_limits": { 00:14:46.249 "rw_ios_per_sec": 0, 00:14:46.249 "rw_mbytes_per_sec": 0, 00:14:46.249 "r_mbytes_per_sec": 0, 00:14:46.249 "w_mbytes_per_sec": 0 00:14:46.249 }, 00:14:46.249 "claimed": true, 00:14:46.249 "claim_type": "exclusive_write", 00:14:46.249 "zoned": false, 00:14:46.249 "supported_io_types": { 00:14:46.249 "read": true, 00:14:46.249 "write": true, 00:14:46.249 "unmap": true, 00:14:46.249 "write_zeroes": true, 00:14:46.249 "flush": true, 00:14:46.249 "reset": true, 00:14:46.249 "compare": false, 00:14:46.249 "compare_and_write": false, 00:14:46.249 "abort": true, 00:14:46.249 "nvme_admin": false, 00:14:46.249 "nvme_io": false 00:14:46.249 }, 00:14:46.249 "memory_domains": [ 00:14:46.249 { 00:14:46.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.249 "dma_device_type": 2 00:14:46.249 } 00:14:46.249 ], 00:14:46.249 "driver_specific": {} 00:14:46.249 } 00:14:46.249 ] 00:14:46.249 02:38:11 -- 
common/autotest_common.sh@895 -- # return 0 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.249 02:38:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.507 02:38:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.507 "name": "Existed_Raid", 00:14:46.507 "uuid": "ca970a45-a19e-4570-a9b6-26ccad4fcb05", 00:14:46.507 "strip_size_kb": 0, 00:14:46.507 "state": "configuring", 00:14:46.507 "raid_level": "raid1", 00:14:46.507 "superblock": true, 00:14:46.507 "num_base_bdevs": 2, 00:14:46.507 "num_base_bdevs_discovered": 1, 00:14:46.507 "num_base_bdevs_operational": 2, 00:14:46.507 "base_bdevs_list": [ 00:14:46.507 { 00:14:46.507 "name": "BaseBdev1", 00:14:46.507 "uuid": "fcc080ac-9826-49ff-bf87-225e2a0e9cd7", 00:14:46.507 "is_configured": true, 00:14:46.507 "data_offset": 2048, 00:14:46.507 "data_size": 63488 00:14:46.507 }, 00:14:46.507 { 00:14:46.508 "name": "BaseBdev2", 00:14:46.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.508 "is_configured": false, 00:14:46.508 "data_offset": 0, 00:14:46.508 "data_size": 0 00:14:46.508 } 00:14:46.508 ] 00:14:46.508 }' 00:14:46.508 02:38:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.508 02:38:11 -- common/autotest_common.sh@10 -- # set +x 00:14:47.074 02:38:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.332 [2024-07-11 02:38:12.276253] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.332 [2024-07-11 02:38:12.276440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:14:47.332 02:38:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:47.332 02:38:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:47.590 02:38:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.590 BaseBdev1 00:14:47.849 02:38:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:47.849 02:38:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:47.849 02:38:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:47.849 02:38:12 -- common/autotest_common.sh@889 -- # local i 00:14:47.849 02:38:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:47.849 02:38:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:47.849 02:38:12 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.849 02:38:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.107 [ 00:14:48.107 { 00:14:48.107 "name": "BaseBdev1", 00:14:48.107 "aliases": [ 00:14:48.107 "186978f2-9fd7-41cd-84f2-c68e1a1280ac" 00:14:48.107 ], 00:14:48.107 "product_name": "Malloc disk", 00:14:48.107 "block_size": 512, 00:14:48.107 "num_blocks": 65536, 00:14:48.107 "uuid": "186978f2-9fd7-41cd-84f2-c68e1a1280ac", 00:14:48.107 "assigned_rate_limits": { 00:14:48.107 "rw_ios_per_sec": 0, 00:14:48.107 "rw_mbytes_per_sec": 0, 00:14:48.107 "r_mbytes_per_sec": 0, 00:14:48.107 "w_mbytes_per_sec": 0 00:14:48.107 }, 00:14:48.107 "claimed": false, 00:14:48.107 "zoned": false, 00:14:48.107 "supported_io_types": { 00:14:48.107 "read": true, 00:14:48.107 "write": true, 00:14:48.107 "unmap": true, 00:14:48.107 "write_zeroes": true, 00:14:48.107 "flush": true, 00:14:48.107 "reset": true, 00:14:48.107 "compare": false, 00:14:48.107 "compare_and_write": false, 00:14:48.107 "abort": true, 00:14:48.107 "nvme_admin": false, 00:14:48.107 "nvme_io": false 00:14:48.107 }, 00:14:48.107 "memory_domains": [ 00:14:48.107 { 00:14:48.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.107 "dma_device_type": 2 00:14:48.107 } 00:14:48.107 ], 00:14:48.107 "driver_specific": {} 00:14:48.107 } 00:14:48.107 ] 00:14:48.107 02:38:13 -- common/autotest_common.sh@895 -- # return 0 00:14:48.107 02:38:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:48.366 [2024-07-11 02:38:13.215807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.366 [2024-07-11 02:38:13.217748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.366 [2024-07-11 02:38:13.218350] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.366 "name": "Existed_Raid", 00:14:48.366 "uuid": "fa11a5a4-f564-4bc5-bd16-c0aceb8a729c", 00:14:48.366 "strip_size_kb": 0, 00:14:48.366 "state": "configuring", 
00:14:48.366 "raid_level": "raid1", 00:14:48.366 "superblock": true, 00:14:48.366 "num_base_bdevs": 2, 00:14:48.366 "num_base_bdevs_discovered": 1, 00:14:48.366 "num_base_bdevs_operational": 2, 00:14:48.366 "base_bdevs_list": [ 00:14:48.366 { 00:14:48.366 "name": "BaseBdev1", 00:14:48.366 "uuid": "186978f2-9fd7-41cd-84f2-c68e1a1280ac", 00:14:48.366 "is_configured": true, 00:14:48.366 "data_offset": 2048, 00:14:48.366 "data_size": 63488 00:14:48.366 }, 00:14:48.366 { 00:14:48.366 "name": "BaseBdev2", 00:14:48.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.366 "is_configured": false, 00:14:48.366 "data_offset": 0, 00:14:48.366 "data_size": 0 00:14:48.366 } 00:14:48.366 ] 00:14:48.366 }' 00:14:48.366 02:38:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.366 02:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:48.932 02:38:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.190 [2024-07-11 02:38:14.177387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.190 [2024-07-11 02:38:14.177831] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:14:49.190 [2024-07-11 02:38:14.177956] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.190 BaseBdev2 00:14:49.190 [2024-07-11 02:38:14.178267] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:49.190 [2024-07-11 02:38:14.178883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:14:49.190 [2024-07-11 02:38:14.179016] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:14:49.190 [2024-07-11 02:38:14.179289] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.190 02:38:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:49.190 02:38:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:49.190 02:38:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:49.190 02:38:14 -- common/autotest_common.sh@889 -- # local i 00:14:49.190 02:38:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:49.190 02:38:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:49.190 02:38:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.448 02:38:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.706 [ 00:14:49.707 { 00:14:49.707 "name": "BaseBdev2", 00:14:49.707 "aliases": [ 00:14:49.707 "177e2a59-f242-4d32-8fed-b5a583cc4b91" 00:14:49.707 ], 00:14:49.707 "product_name": "Malloc disk", 00:14:49.707 "block_size": 512, 00:14:49.707 "num_blocks": 65536, 00:14:49.707 "uuid": "177e2a59-f242-4d32-8fed-b5a583cc4b91", 00:14:49.707 "assigned_rate_limits": { 00:14:49.707 "rw_ios_per_sec": 0, 00:14:49.707 "rw_mbytes_per_sec": 0, 00:14:49.707 "r_mbytes_per_sec": 0, 00:14:49.707 "w_mbytes_per_sec": 0 00:14:49.707 }, 00:14:49.707 "claimed": true, 00:14:49.707 "claim_type": "exclusive_write", 00:14:49.707 "zoned": false, 00:14:49.707 "supported_io_types": { 00:14:49.707 "read": true, 00:14:49.707 "write": true, 00:14:49.707 "unmap": true, 00:14:49.707 "write_zeroes": true, 00:14:49.707 "flush": true, 00:14:49.707 "reset": true, 
00:14:49.707 "compare": false, 00:14:49.707 "compare_and_write": false, 00:14:49.707 "abort": true, 00:14:49.707 "nvme_admin": false, 00:14:49.707 "nvme_io": false 00:14:49.707 }, 00:14:49.707 "memory_domains": [ 00:14:49.707 { 00:14:49.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.707 "dma_device_type": 2 00:14:49.707 } 00:14:49.707 ], 00:14:49.707 "driver_specific": {} 00:14:49.707 } 00:14:49.707 ] 00:14:49.707 02:38:14 -- common/autotest_common.sh@895 -- # return 0 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.707 "name": "Existed_Raid", 00:14:49.707 "uuid": "fa11a5a4-f564-4bc5-bd16-c0aceb8a729c", 00:14:49.707 "strip_size_kb": 0, 00:14:49.707 "state": "online", 00:14:49.707 "raid_level": "raid1", 00:14:49.707 "superblock": true, 00:14:49.707 "num_base_bdevs": 2, 00:14:49.707 "num_base_bdevs_discovered": 2, 00:14:49.707 "num_base_bdevs_operational": 2, 00:14:49.707 "base_bdevs_list": [ 00:14:49.707 { 00:14:49.707 "name": "BaseBdev1", 00:14:49.707 "uuid": "186978f2-9fd7-41cd-84f2-c68e1a1280ac", 00:14:49.707 "is_configured": true, 00:14:49.707 "data_offset": 2048, 00:14:49.707 "data_size": 63488 00:14:49.707 }, 00:14:49.707 { 00:14:49.707 "name": "BaseBdev2", 00:14:49.707 "uuid": "177e2a59-f242-4d32-8fed-b5a583cc4b91", 00:14:49.707 "is_configured": true, 00:14:49.707 "data_offset": 2048, 00:14:49.707 "data_size": 63488 00:14:49.707 } 00:14:49.707 ] 00:14:49.707 }' 00:14:49.707 02:38:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.707 02:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.641 [2024-07-11 02:38:15.617755] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.641 
02:38:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.641 02:38:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.642 02:38:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.642 02:38:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.900 02:38:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.900 "name": "Existed_Raid", 00:14:50.900 "uuid": "fa11a5a4-f564-4bc5-bd16-c0aceb8a729c", 00:14:50.900 "strip_size_kb": 0, 00:14:50.900 "state": "online", 00:14:50.900 "raid_level": "raid1", 00:14:50.900 "superblock": true, 00:14:50.900 "num_base_bdevs": 2, 00:14:50.900 "num_base_bdevs_discovered": 1, 00:14:50.900 "num_base_bdevs_operational": 1, 00:14:50.900 "base_bdevs_list": [ 00:14:50.900 { 00:14:50.900 "name": null, 00:14:50.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.900 "is_configured": false, 00:14:50.900 "data_offset": 2048, 00:14:50.900 "data_size": 63488 00:14:50.900 }, 00:14:50.900 { 00:14:50.900 "name": "BaseBdev2", 00:14:50.900 "uuid": "177e2a59-f242-4d32-8fed-b5a583cc4b91", 00:14:50.900 "is_configured": true, 00:14:50.900 "data_offset": 2048, 00:14:50.900 "data_size": 63488 00:14:50.900 } 00:14:50.900 ] 00:14:50.900 }' 00:14:50.900 02:38:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.900 02:38:15 -- common/autotest_common.sh@10 -- # set +x 00:14:51.466 02:38:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:51.466 02:38:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:51.466 02:38:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.466 02:38:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:51.724 02:38:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:51.724 02:38:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.724 02:38:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:51.982 [2024-07-11 02:38:16.863128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.982 [2024-07-11 02:38:16.863165] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.982 [2024-07-11 02:38:16.863243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.982 [2024-07-11 02:38:16.873061] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.982 [2024-07-11 02:38:16.873090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:14:51.982 02:38:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:51.982 02:38:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:51.982 02:38:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
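At this point the harness re-checks array state: BaseBdev1 has just been deleted, raid1's redundancy keeps Existed_Raid online, and verify_raid_bdev_state (whose locals and query are traced around this note) dumps all raid bdevs, narrows to the one under test with jq, and asserts on its fields. A condensed sketch of that core, with rpc and sock as defined in the earlier sketch; the two assertions shown match the online, one-member-discovered state being verified here:

# fetch the record for the raid bdev under test
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')

# assert on the fields the helper checks, e.g. state and member counts
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]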
00:14:51.982 02:38:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.260 02:38:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:52.260 02:38:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:52.260 02:38:17 -- bdev/bdev_raid.sh@287 -- # killprocess 126753 00:14:52.260 02:38:17 -- common/autotest_common.sh@926 -- # '[' -z 126753 ']' 00:14:52.260 02:38:17 -- common/autotest_common.sh@930 -- # kill -0 126753 00:14:52.260 02:38:17 -- common/autotest_common.sh@931 -- # uname 00:14:52.260 02:38:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.260 02:38:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126753 00:14:52.260 killing process with pid 126753 00:14:52.260 02:38:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:52.260 02:38:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:52.260 02:38:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126753' 00:14:52.260 02:38:17 -- common/autotest_common.sh@945 -- # kill 126753 00:14:52.260 02:38:17 -- common/autotest_common.sh@950 -- # wait 126753 00:14:52.260 [2024-07-11 02:38:17.131184] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.260 [2024-07-11 02:38:17.131320] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.522 ************************************ 00:14:52.522 END TEST raid_state_function_test_sb 00:14:52.522 ************************************ 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:52.522 00:14:52.522 real 0m8.979s 00:14:52.522 user 0m16.367s 00:14:52.522 sys 0m1.097s 00:14:52.522 02:38:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.522 02:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:52.522 02:38:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:52.522 02:38:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.522 02:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:52.522 ************************************ 00:14:52.522 START TEST raid_superblock_test 00:14:52.522 ************************************ 00:14:52.522 02:38:17 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=127085 00:14:52.522 02:38:17 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 127085 /var/tmp/spdk-raid.sock 00:14:52.522 02:38:17 -- common/autotest_common.sh@819 -- # '[' -z 127085 ']' 00:14:52.522 02:38:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.522 02:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.522 02:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.522 02:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.522 02:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:52.522 02:38:17 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:52.522 [2024-07-11 02:38:17.464377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:52.522 [2024-07-11 02:38:17.464830] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127085 ] 00:14:52.522 [2024-07-11 02:38:17.612693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.780 [2024-07-11 02:38:17.682546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.780 [2024-07-11 02:38:17.738472] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.347 02:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.347 02:38:18 -- common/autotest_common.sh@852 -- # return 0 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.347 02:38:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:53.605 malloc1 00:14:53.605 02:38:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.864 [2024-07-11 02:38:18.698913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.864 [2024-07-11 02:38:18.699167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.864 [2024-07-11 02:38:18.699237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:14:53.864 [2024-07-11 02:38:18.699507] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.864 [2024-07-11 02:38:18.701623] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.864 [2024-07-11 02:38:18.701840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.864 pt1 00:14:53.864 
02:38:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:53.864 malloc2 00:14:53.864 02:38:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.122 [2024-07-11 02:38:19.141402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.122 [2024-07-11 02:38:19.141616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.122 [2024-07-11 02:38:19.141878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:54.122 [2024-07-11 02:38:19.141998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.122 [2024-07-11 02:38:19.147119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.122 [2024-07-11 02:38:19.147225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.122 pt2 00:14:54.122 02:38:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:54.122 02:38:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:54.122 02:38:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:54.381 [2024-07-11 02:38:19.335546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.381 [2024-07-11 02:38:19.337222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.381 [2024-07-11 02:38:19.337441] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:54.381 [2024-07-11 02:38:19.337455] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.381 [2024-07-11 02:38:19.337616] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:54.381 [2024-07-11 02:38:19.338060] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:54.381 [2024-07-11 02:38:19.338082] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006f80 00:14:54.381 [2024-07-11 02:38:19.338251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.381 02:38:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.638 02:38:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.638 "name": "raid_bdev1", 00:14:54.638 "uuid": "cfc749aa-8a01-4afd-9f8e-43b85cd177db", 00:14:54.638 "strip_size_kb": 0, 00:14:54.638 "state": "online", 00:14:54.638 "raid_level": "raid1", 00:14:54.638 "superblock": true, 00:14:54.638 "num_base_bdevs": 2, 00:14:54.638 "num_base_bdevs_discovered": 2, 00:14:54.638 "num_base_bdevs_operational": 2, 00:14:54.638 "base_bdevs_list": [ 00:14:54.638 { 00:14:54.638 "name": "pt1", 00:14:54.638 "uuid": "6cbe87c9-c5b5-504b-82e3-faa7791cfbb0", 00:14:54.638 "is_configured": true, 00:14:54.639 "data_offset": 2048, 00:14:54.639 "data_size": 63488 00:14:54.639 }, 00:14:54.639 { 00:14:54.639 "name": "pt2", 00:14:54.639 "uuid": "8abaafab-a070-5306-b626-63acdad7d79e", 00:14:54.639 "is_configured": true, 00:14:54.639 "data_offset": 2048, 00:14:54.639 "data_size": 63488 00:14:54.639 } 00:14:54.639 ] 00:14:54.639 }' 00:14:54.639 02:38:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.639 02:38:19 -- common/autotest_common.sh@10 -- # set +x 00:14:55.204 02:38:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:55.204 02:38:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:55.204 [2024-07-11 02:38:20.283851] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.462 02:38:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cfc749aa-8a01-4afd-9f8e-43b85cd177db 00:14:55.462 02:38:20 -- bdev/bdev_raid.sh@380 -- # '[' -z cfc749aa-8a01-4afd-9f8e-43b85cd177db ']' 00:14:55.462 02:38:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.462 [2024-07-11 02:38:20.467682] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.462 [2024-07-11 02:38:20.467708] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.462 [2024-07-11 02:38:20.467840] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.462 [2024-07-11 02:38:20.467909] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.462 [2024-07-11 02:38:20.467922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid_bdev1, state offline 00:14:55.462 02:38:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.462 02:38:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:55.721 02:38:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:55.721 02:38:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:55.721 02:38:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.721 02:38:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:55.979 02:38:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.979 02:38:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:55.979 02:38:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:55.979 02:38:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:56.238 02:38:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:56.238 02:38:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.238 02:38:21 -- common/autotest_common.sh@640 -- # local es=0 00:14:56.238 02:38:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.238 02:38:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.238 02:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.238 02:38:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.238 02:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.238 02:38:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.238 02:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.238 02:38:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.238 02:38:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:56.238 02:38:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.496 [2024-07-11 02:38:21.492370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:56.496 [2024-07-11 02:38:21.494429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:56.496 [2024-07-11 02:38:21.494513] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:56.496 [2024-07-11 02:38:21.494602] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:56.496 [2024-07-11 02:38:21.494638] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.496 [2024-07-11 02:38:21.494648] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid_bdev1, state configuring 00:14:56.496 request: 00:14:56.496 { 00:14:56.496 "name": "raid_bdev1", 00:14:56.496 "raid_level": "raid1", 00:14:56.496 "base_bdevs": [ 00:14:56.496 "malloc1", 00:14:56.496 "malloc2" 00:14:56.496 ], 00:14:56.496 "superblock": false, 00:14:56.496 "method": "bdev_raid_create", 00:14:56.496 "req_id": 1 00:14:56.496 } 00:14:56.496 Got JSON-RPC error response 00:14:56.496 response: 00:14:56.496 { 00:14:56.496 "code": -17, 00:14:56.496 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:56.496 } 00:14:56.496 02:38:21 -- common/autotest_common.sh@643 -- # es=1 00:14:56.496 02:38:21 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:14:56.496 02:38:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:56.496 02:38:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:56.496 02:38:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.496 02:38:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:56.755 02:38:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:56.755 02:38:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:56.755 02:38:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.013 [2024-07-11 02:38:21.860367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.013 [2024-07-11 02:38:21.860477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.013 [2024-07-11 02:38:21.860524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:57.013 [2024-07-11 02:38:21.860551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.013 [2024-07-11 02:38:21.862617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.013 [2024-07-11 02:38:21.862681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.013 [2024-07-11 02:38:21.862773] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:57.013 [2024-07-11 02:38:21.862840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.013 pt1 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.013 02:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.273 02:38:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.273 "name": "raid_bdev1", 00:14:57.273 "uuid": "cfc749aa-8a01-4afd-9f8e-43b85cd177db", 00:14:57.273 "strip_size_kb": 0, 00:14:57.273 "state": "configuring", 00:14:57.273 "raid_level": "raid1", 00:14:57.273 "superblock": true, 00:14:57.273 "num_base_bdevs": 2, 00:14:57.273 "num_base_bdevs_discovered": 1, 00:14:57.273 "num_base_bdevs_operational": 2, 00:14:57.273 "base_bdevs_list": [ 00:14:57.273 { 00:14:57.273 "name": "pt1", 00:14:57.273 "uuid": "6cbe87c9-c5b5-504b-82e3-faa7791cfbb0", 00:14:57.273 "is_configured": true, 00:14:57.273 "data_offset": 2048, 00:14:57.273 "data_size": 63488 00:14:57.273 }, 00:14:57.273 { 00:14:57.273 "name": null, 00:14:57.273 "uuid": "8abaafab-a070-5306-b626-63acdad7d79e", 00:14:57.273 
"is_configured": false, 00:14:57.273 "data_offset": 2048, 00:14:57.273 "data_size": 63488 00:14:57.273 } 00:14:57.273 ] 00:14:57.273 }' 00:14:57.273 02:38:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.273 02:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.840 [2024-07-11 02:38:22.852615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.840 [2024-07-11 02:38:22.852749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.840 [2024-07-11 02:38:22.852784] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:57.840 [2024-07-11 02:38:22.852808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.840 [2024-07-11 02:38:22.853287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.840 [2024-07-11 02:38:22.853333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.840 [2024-07-11 02:38:22.853430] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:57.840 [2024-07-11 02:38:22.853481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.840 [2024-07-11 02:38:22.853614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:14:57.840 [2024-07-11 02:38:22.853652] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:57.840 [2024-07-11 02:38:22.853749] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:14:57.840 [2024-07-11 02:38:22.854075] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:14:57.840 [2024-07-11 02:38:22.854099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:14:57.840 [2024-07-11 02:38:22.854221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.840 pt2 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.840 02:38:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.840 02:38:22 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.098 02:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.098 "name": "raid_bdev1", 00:14:58.098 "uuid": "cfc749aa-8a01-4afd-9f8e-43b85cd177db", 00:14:58.098 "strip_size_kb": 0, 00:14:58.098 "state": "online", 00:14:58.098 "raid_level": "raid1", 00:14:58.098 "superblock": true, 00:14:58.098 "num_base_bdevs": 2, 00:14:58.098 "num_base_bdevs_discovered": 2, 00:14:58.098 "num_base_bdevs_operational": 2, 00:14:58.098 "base_bdevs_list": [ 00:14:58.098 { 00:14:58.098 "name": "pt1", 00:14:58.098 "uuid": "6cbe87c9-c5b5-504b-82e3-faa7791cfbb0", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 }, 00:14:58.098 { 00:14:58.098 "name": "pt2", 00:14:58.098 "uuid": "8abaafab-a070-5306-b626-63acdad7d79e", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 } 00:14:58.098 ] 00:14:58.098 }' 00:14:58.098 02:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.098 02:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:58.663 02:38:23 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:58.663 02:38:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:58.921 [2024-07-11 02:38:23.889016] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.921 02:38:23 -- bdev/bdev_raid.sh@430 -- # '[' cfc749aa-8a01-4afd-9f8e-43b85cd177db '!=' cfc749aa-8a01-4afd-9f8e-43b85cd177db ']' 00:14:58.921 02:38:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:58.921 02:38:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:58.921 02:38:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:58.921 02:38:23 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:59.179 [2024-07-11 02:38:24.076952] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.179 "name": "raid_bdev1", 00:14:59.179 "uuid": "cfc749aa-8a01-4afd-9f8e-43b85cd177db", 00:14:59.179 "strip_size_kb": 0, 00:14:59.179 "state": "online", 00:14:59.179 "raid_level": "raid1", 00:14:59.179 "superblock": true, 00:14:59.179 "num_base_bdevs": 2, 00:14:59.179 "num_base_bdevs_discovered": 1, 00:14:59.179 "num_base_bdevs_operational": 1, 00:14:59.179 
"base_bdevs_list": [ 00:14:59.179 { 00:14:59.179 "name": null, 00:14:59.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.179 "is_configured": false, 00:14:59.179 "data_offset": 2048, 00:14:59.179 "data_size": 63488 00:14:59.179 }, 00:14:59.179 { 00:14:59.179 "name": "pt2", 00:14:59.179 "uuid": "8abaafab-a070-5306-b626-63acdad7d79e", 00:14:59.179 "is_configured": true, 00:14:59.179 "data_offset": 2048, 00:14:59.179 "data_size": 63488 00:14:59.179 } 00:14:59.179 ] 00:14:59.179 }' 00:14:59.179 02:38:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.179 02:38:24 -- common/autotest_common.sh@10 -- # set +x 00:14:59.746 02:38:24 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:00.003 [2024-07-11 02:38:25.065081] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.003 [2024-07-11 02:38:25.065114] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.003 [2024-07-11 02:38:25.065204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.003 [2024-07-11 02:38:25.065258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.003 [2024-07-11 02:38:25.065269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:15:00.003 02:38:25 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.003 02:38:25 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:00.261 02:38:25 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:00.261 02:38:25 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:00.261 02:38:25 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:00.261 02:38:25 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:00.261 02:38:25 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:00.520 02:38:25 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.520 [2024-07-11 02:38:25.605115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.520 [2024-07-11 02:38:25.605204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.520 [2024-07-11 02:38:25.605254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:00.520 [2024-07-11 02:38:25.605290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.520 [2024-07-11 02:38:25.607533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.520 [2024-07-11 02:38:25.607586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.520 [2024-07-11 02:38:25.607685] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:00.520 [2024-07-11 02:38:25.607739] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.520 [2024-07-11 02:38:25.607901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:00.520 [2024-07-11 02:38:25.607915] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:00.520 [2024-07-11 02:38:25.607986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:00.520 [2024-07-11 02:38:25.608278] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:00.520 [2024-07-11 02:38:25.608302] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:00.520 [2024-07-11 02:38:25.608401] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.520 pt2 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.779 "name": "raid_bdev1", 00:15:00.779 "uuid": "cfc749aa-8a01-4afd-9f8e-43b85cd177db", 00:15:00.779 "strip_size_kb": 0, 00:15:00.779 "state": "online", 00:15:00.779 "raid_level": "raid1", 00:15:00.779 "superblock": true, 00:15:00.779 "num_base_bdevs": 2, 00:15:00.779 "num_base_bdevs_discovered": 1, 00:15:00.779 "num_base_bdevs_operational": 1, 00:15:00.779 "base_bdevs_list": [ 00:15:00.779 { 00:15:00.779 "name": null, 00:15:00.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.779 "is_configured": false, 00:15:00.779 "data_offset": 2048, 00:15:00.779 "data_size": 63488 00:15:00.779 }, 00:15:00.779 { 00:15:00.779 "name": "pt2", 00:15:00.779 "uuid": "8abaafab-a070-5306-b626-63acdad7d79e", 00:15:00.779 "is_configured": true, 00:15:00.779 "data_offset": 2048, 00:15:00.779 "data_size": 63488 00:15:00.779 } 00:15:00.779 ] 00:15:00.779 }' 00:15:00.779 02:38:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.779 02:38:25 -- common/autotest_common.sh@10 -- # set +x 00:15:01.714 02:38:26 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:01.714 02:38:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:01.714 02:38:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:01.714 [2024-07-11 02:38:26.689475] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.714 02:38:26 -- bdev/bdev_raid.sh@506 -- # '[' cfc749aa-8a01-4afd-9f8e-43b85cd177db '!=' cfc749aa-8a01-4afd-9f8e-43b85cd177db ']' 00:15:01.714 02:38:26 -- 
bdev/bdev_raid.sh@511 -- # killprocess 127085 00:15:01.714 02:38:26 -- common/autotest_common.sh@926 -- # '[' -z 127085 ']' 00:15:01.714 02:38:26 -- common/autotest_common.sh@930 -- # kill -0 127085 00:15:01.714 02:38:26 -- common/autotest_common.sh@931 -- # uname 00:15:01.714 02:38:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:01.714 02:38:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127085 00:15:01.714 killing process with pid 127085 00:15:01.714 02:38:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:01.714 02:38:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:01.714 02:38:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127085' 00:15:01.714 02:38:26 -- common/autotest_common.sh@945 -- # kill 127085 00:15:01.714 02:38:26 -- common/autotest_common.sh@950 -- # wait 127085 00:15:01.714 [2024-07-11 02:38:26.723486] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.714 [2024-07-11 02:38:26.723567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.714 [2024-07-11 02:38:26.723633] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.714 [2024-07-11 02:38:26.723653] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:01.714 [2024-07-11 02:38:26.742502] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.972 ************************************ 00:15:01.972 END TEST raid_superblock_test 00:15:01.972 ************************************ 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:01.972 00:15:01.972 real 0m9.539s 00:15:01.972 user 0m17.814s 00:15:01.972 sys 0m1.117s 00:15:01.972 02:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.972 02:38:26 -- common/autotest_common.sh@10 -- # set +x 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:01.972 02:38:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:01.972 02:38:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:01.972 02:38:26 -- common/autotest_common.sh@10 -- # set +x 00:15:01.972 ************************************ 00:15:01.972 START TEST raid_state_function_test 00:15:01.972 ************************************ 00:15:01.972 02:38:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:01.972 02:38:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 
00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.972 Process raid pid: 127424 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=127424 00:15:01.972 02:38:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127424' 00:15:01.973 02:38:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127424 /var/tmp/spdk-raid.sock 00:15:01.973 02:38:27 -- common/autotest_common.sh@819 -- # '[' -z 127424 ']' 00:15:01.973 02:38:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:01.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:01.973 02:38:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:01.973 02:38:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:01.973 02:38:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:01.973 02:38:27 -- common/autotest_common.sh@10 -- # set +x 00:15:01.973 02:38:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:01.973 [2024-07-11 02:38:27.052556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
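An aside on the harness pattern visible in this trace: each test starts a bare bdev_svc application on a private RPC socket and then drives it entirely through scripts/rpc.py against that socket. A condensed sketch of that setup, reusing the paths and helper names that appear in the xtrace (waitforlisten and the base_bdevs loop are lifted from the trace above; everything else is simplified and error handling is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" "$sock"   # returns once the app accepts RPCs on $sock
  # base bdev names are generated exactly as in the (( i++ )) / echo loop traced here:
  num_base_bdevs=3
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
  # -> base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3)
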
00:15:01.973 [2024-07-11 02:38:27.052787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.231 [2024-07-11 02:38:27.195256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.231 [2024-07-11 02:38:27.256337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.231 [2024-07-11 02:38:27.307524] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.167 02:38:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.167 02:38:28 -- common/autotest_common.sh@852 -- # return 0 00:15:03.167 02:38:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:03.425 [2024-07-11 02:38:28.263139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.425 [2024-07-11 02:38:28.263220] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.425 [2024-07-11 02:38:28.263233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.425 [2024-07-11 02:38:28.263251] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.425 [2024-07-11 02:38:28.263258] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.425 [2024-07-11 02:38:28.263290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.425 02:38:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.685 02:38:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.685 "name": "Existed_Raid", 00:15:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.685 "strip_size_kb": 64, 00:15:03.685 "state": "configuring", 00:15:03.685 "raid_level": "raid0", 00:15:03.685 "superblock": false, 00:15:03.685 "num_base_bdevs": 3, 00:15:03.685 "num_base_bdevs_discovered": 0, 00:15:03.685 "num_base_bdevs_operational": 3, 00:15:03.685 "base_bdevs_list": [ 00:15:03.685 { 00:15:03.685 "name": "BaseBdev1", 00:15:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.685 "is_configured": false, 00:15:03.685 "data_offset": 0, 00:15:03.685 "data_size": 0 00:15:03.685 }, 00:15:03.685 { 00:15:03.685 "name": "BaseBdev2", 00:15:03.685 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:03.685 "is_configured": false, 00:15:03.685 "data_offset": 0, 00:15:03.685 "data_size": 0 00:15:03.685 }, 00:15:03.685 { 00:15:03.685 "name": "BaseBdev3", 00:15:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.685 "is_configured": false, 00:15:03.685 "data_offset": 0, 00:15:03.685 "data_size": 0 00:15:03.685 } 00:15:03.685 ] 00:15:03.685 }' 00:15:03.685 02:38:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.685 02:38:28 -- common/autotest_common.sh@10 -- # set +x 00:15:04.252 02:38:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.511 [2024-07-11 02:38:29.367216] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.511 [2024-07-11 02:38:29.367264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:04.511 02:38:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:04.511 [2024-07-11 02:38:29.551248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.511 [2024-07-11 02:38:29.551300] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.511 [2024-07-11 02:38:29.551311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.511 [2024-07-11 02:38:29.551329] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.511 [2024-07-11 02:38:29.551336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.511 [2024-07-11 02:38:29.551356] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.511 02:38:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.769 [2024-07-11 02:38:29.741483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.769 BaseBdev1 00:15:04.769 02:38:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:04.769 02:38:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:04.769 02:38:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:04.769 02:38:29 -- common/autotest_common.sh@889 -- # local i 00:15:04.769 02:38:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:04.769 02:38:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:04.769 02:38:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.028 02:38:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.028 [ 00:15:05.028 { 00:15:05.028 "name": "BaseBdev1", 00:15:05.028 "aliases": [ 00:15:05.028 "9d457831-623e-4914-b265-e51fb48063da" 00:15:05.028 ], 00:15:05.028 "product_name": "Malloc disk", 00:15:05.028 "block_size": 512, 00:15:05.028 "num_blocks": 65536, 00:15:05.028 "uuid": "9d457831-623e-4914-b265-e51fb48063da", 00:15:05.028 "assigned_rate_limits": { 00:15:05.028 "rw_ios_per_sec": 0, 00:15:05.028 "rw_mbytes_per_sec": 0, 00:15:05.028 "r_mbytes_per_sec": 0, 00:15:05.028 "w_mbytes_per_sec": 0 
00:15:05.028 }, 00:15:05.028 "claimed": true, 00:15:05.028 "claim_type": "exclusive_write", 00:15:05.028 "zoned": false, 00:15:05.028 "supported_io_types": { 00:15:05.028 "read": true, 00:15:05.028 "write": true, 00:15:05.028 "unmap": true, 00:15:05.028 "write_zeroes": true, 00:15:05.028 "flush": true, 00:15:05.028 "reset": true, 00:15:05.028 "compare": false, 00:15:05.028 "compare_and_write": false, 00:15:05.028 "abort": true, 00:15:05.028 "nvme_admin": false, 00:15:05.028 "nvme_io": false 00:15:05.028 }, 00:15:05.028 "memory_domains": [ 00:15:05.028 { 00:15:05.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.028 "dma_device_type": 2 00:15:05.028 } 00:15:05.028 ], 00:15:05.028 "driver_specific": {} 00:15:05.028 } 00:15:05.028 ] 00:15:05.028 02:38:30 -- common/autotest_common.sh@895 -- # return 0 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.028 02:38:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.358 02:38:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.358 02:38:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.358 "name": "Existed_Raid", 00:15:05.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.358 "strip_size_kb": 64, 00:15:05.358 "state": "configuring", 00:15:05.358 "raid_level": "raid0", 00:15:05.358 "superblock": false, 00:15:05.358 "num_base_bdevs": 3, 00:15:05.358 "num_base_bdevs_discovered": 1, 00:15:05.358 "num_base_bdevs_operational": 3, 00:15:05.358 "base_bdevs_list": [ 00:15:05.358 { 00:15:05.358 "name": "BaseBdev1", 00:15:05.358 "uuid": "9d457831-623e-4914-b265-e51fb48063da", 00:15:05.358 "is_configured": true, 00:15:05.358 "data_offset": 0, 00:15:05.358 "data_size": 65536 00:15:05.358 }, 00:15:05.358 { 00:15:05.358 "name": "BaseBdev2", 00:15:05.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.358 "is_configured": false, 00:15:05.358 "data_offset": 0, 00:15:05.358 "data_size": 0 00:15:05.358 }, 00:15:05.358 { 00:15:05.358 "name": "BaseBdev3", 00:15:05.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.358 "is_configured": false, 00:15:05.358 "data_offset": 0, 00:15:05.358 "data_size": 0 00:15:05.358 } 00:15:05.358 ] 00:15:05.358 }' 00:15:05.358 02:38:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.358 02:38:30 -- common/autotest_common.sh@10 -- # set +x 00:15:06.293 02:38:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.293 [2024-07-11 02:38:31.257835] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.293 [2024-07-11 02:38:31.257912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005a80 name Existed_Raid, state configuring 00:15:06.293 02:38:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:06.293 02:38:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:06.551 [2024-07-11 02:38:31.429943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.551 [2024-07-11 02:38:31.431851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.551 [2024-07-11 02:38:31.431915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.551 [2024-07-11 02:38:31.431926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.551 [2024-07-11 02:38:31.431948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.551 02:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.552 02:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.552 02:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.552 02:38:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.810 02:38:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.810 "name": "Existed_Raid", 00:15:06.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.810 "strip_size_kb": 64, 00:15:06.810 "state": "configuring", 00:15:06.810 "raid_level": "raid0", 00:15:06.810 "superblock": false, 00:15:06.810 "num_base_bdevs": 3, 00:15:06.810 "num_base_bdevs_discovered": 1, 00:15:06.810 "num_base_bdevs_operational": 3, 00:15:06.810 "base_bdevs_list": [ 00:15:06.810 { 00:15:06.810 "name": "BaseBdev1", 00:15:06.810 "uuid": "9d457831-623e-4914-b265-e51fb48063da", 00:15:06.810 "is_configured": true, 00:15:06.810 "data_offset": 0, 00:15:06.810 "data_size": 65536 00:15:06.810 }, 00:15:06.810 { 00:15:06.810 "name": "BaseBdev2", 00:15:06.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.810 "is_configured": false, 00:15:06.810 "data_offset": 0, 00:15:06.810 "data_size": 0 00:15:06.810 }, 00:15:06.810 { 00:15:06.810 "name": "BaseBdev3", 00:15:06.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.810 "is_configured": false, 00:15:06.810 "data_offset": 0, 00:15:06.810 "data_size": 0 00:15:06.810 } 00:15:06.810 ] 00:15:06.810 }' 00:15:06.810 02:38:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.810 02:38:31 -- common/autotest_common.sh@10 -- # set +x 00:15:07.374 02:38:32 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.632 [2024-07-11 02:38:32.561616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.632 BaseBdev2 00:15:07.632 02:38:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:07.632 02:38:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:07.632 02:38:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:07.632 02:38:32 -- common/autotest_common.sh@889 -- # local i 00:15:07.632 02:38:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:07.632 02:38:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:07.632 02:38:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.891 02:38:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.891 [ 00:15:07.891 { 00:15:07.891 "name": "BaseBdev2", 00:15:07.891 "aliases": [ 00:15:07.891 "2f28f708-8134-41e4-876e-796ccd734d12" 00:15:07.891 ], 00:15:07.891 "product_name": "Malloc disk", 00:15:07.891 "block_size": 512, 00:15:07.891 "num_blocks": 65536, 00:15:07.891 "uuid": "2f28f708-8134-41e4-876e-796ccd734d12", 00:15:07.891 "assigned_rate_limits": { 00:15:07.891 "rw_ios_per_sec": 0, 00:15:07.891 "rw_mbytes_per_sec": 0, 00:15:07.891 "r_mbytes_per_sec": 0, 00:15:07.891 "w_mbytes_per_sec": 0 00:15:07.891 }, 00:15:07.891 "claimed": true, 00:15:07.891 "claim_type": "exclusive_write", 00:15:07.891 "zoned": false, 00:15:07.891 "supported_io_types": { 00:15:07.891 "read": true, 00:15:07.891 "write": true, 00:15:07.891 "unmap": true, 00:15:07.891 "write_zeroes": true, 00:15:07.891 "flush": true, 00:15:07.891 "reset": true, 00:15:07.891 "compare": false, 00:15:07.891 "compare_and_write": false, 00:15:07.891 "abort": true, 00:15:07.891 "nvme_admin": false, 00:15:07.891 "nvme_io": false 00:15:07.891 }, 00:15:07.891 "memory_domains": [ 00:15:07.891 { 00:15:07.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.891 "dma_device_type": 2 00:15:07.891 } 00:15:07.891 ], 00:15:07.891 "driver_specific": {} 00:15:07.891 } 00:15:07.891 ] 00:15:07.891 02:38:32 -- common/autotest_common.sh@895 -- # return 0 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.891 02:38:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
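The verify_raid_bdev_state checks used throughout this trace work by reading the raid bdev back as JSON and asserting on individual fields; the raid_bdev_info='{...}' assignment that follows is the captured output of exactly this query. A minimal sketch of the jq plumbing, reusing the $rpc/$sock shorthands from the sketch above (the real helper in the traced bdev_raid.sh also checks raid_level, strip_size_kb and the base_bdevs_list):

  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  # with BaseBdev1 and BaseBdev2 present but BaseBdev3 still missing, a
  # 3-way raid0 must report "configuring" with 2 of 3 bases discovered:
  [[ "$state" == configuring && "$discovered" -eq 2 ]]
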
00:15:08.150 02:38:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.150 "name": "Existed_Raid", 00:15:08.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.150 "strip_size_kb": 64, 00:15:08.150 "state": "configuring", 00:15:08.150 "raid_level": "raid0", 00:15:08.150 "superblock": false, 00:15:08.150 "num_base_bdevs": 3, 00:15:08.150 "num_base_bdevs_discovered": 2, 00:15:08.150 "num_base_bdevs_operational": 3, 00:15:08.150 "base_bdevs_list": [ 00:15:08.150 { 00:15:08.150 "name": "BaseBdev1", 00:15:08.150 "uuid": "9d457831-623e-4914-b265-e51fb48063da", 00:15:08.150 "is_configured": true, 00:15:08.150 "data_offset": 0, 00:15:08.150 "data_size": 65536 00:15:08.150 }, 00:15:08.150 { 00:15:08.150 "name": "BaseBdev2", 00:15:08.150 "uuid": "2f28f708-8134-41e4-876e-796ccd734d12", 00:15:08.150 "is_configured": true, 00:15:08.150 "data_offset": 0, 00:15:08.150 "data_size": 65536 00:15:08.150 }, 00:15:08.150 { 00:15:08.150 "name": "BaseBdev3", 00:15:08.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.150 "is_configured": false, 00:15:08.150 "data_offset": 0, 00:15:08.150 "data_size": 0 00:15:08.150 } 00:15:08.150 ] 00:15:08.150 }' 00:15:08.150 02:38:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.150 02:38:33 -- common/autotest_common.sh@10 -- # set +x 00:15:09.085 02:38:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.085 [2024-07-11 02:38:34.006366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.085 [2024-07-11 02:38:34.006410] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:15:09.085 [2024-07-11 02:38:34.006420] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:09.085 [2024-07-11 02:38:34.006590] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:09.085 [2024-07-11 02:38:34.007019] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:15:09.085 [2024-07-11 02:38:34.007068] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:15:09.085 [2024-07-11 02:38:34.007361] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.085 BaseBdev3 00:15:09.085 02:38:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:09.085 02:38:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:09.085 02:38:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.085 02:38:34 -- common/autotest_common.sh@889 -- # local i 00:15:09.085 02:38:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.085 02:38:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.085 02:38:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.343 02:38:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.602 [ 00:15:09.602 { 00:15:09.602 "name": "BaseBdev3", 00:15:09.602 "aliases": [ 00:15:09.602 "c6edff34-227b-44ef-a9dd-8eb969384bef" 00:15:09.602 ], 00:15:09.602 "product_name": "Malloc disk", 00:15:09.602 "block_size": 512, 00:15:09.602 "num_blocks": 65536, 00:15:09.602 "uuid": "c6edff34-227b-44ef-a9dd-8eb969384bef", 00:15:09.602 "assigned_rate_limits": { 00:15:09.602 
"rw_ios_per_sec": 0, 00:15:09.602 "rw_mbytes_per_sec": 0, 00:15:09.602 "r_mbytes_per_sec": 0, 00:15:09.602 "w_mbytes_per_sec": 0 00:15:09.602 }, 00:15:09.602 "claimed": true, 00:15:09.602 "claim_type": "exclusive_write", 00:15:09.602 "zoned": false, 00:15:09.602 "supported_io_types": { 00:15:09.602 "read": true, 00:15:09.602 "write": true, 00:15:09.602 "unmap": true, 00:15:09.602 "write_zeroes": true, 00:15:09.602 "flush": true, 00:15:09.602 "reset": true, 00:15:09.602 "compare": false, 00:15:09.602 "compare_and_write": false, 00:15:09.602 "abort": true, 00:15:09.602 "nvme_admin": false, 00:15:09.602 "nvme_io": false 00:15:09.602 }, 00:15:09.602 "memory_domains": [ 00:15:09.602 { 00:15:09.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.602 "dma_device_type": 2 00:15:09.602 } 00:15:09.602 ], 00:15:09.602 "driver_specific": {} 00:15:09.602 } 00:15:09.602 ] 00:15:09.602 02:38:34 -- common/autotest_common.sh@895 -- # return 0 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.602 "name": "Existed_Raid", 00:15:09.602 "uuid": "4650e5ac-c64f-4e94-bc0c-977d008beb8a", 00:15:09.602 "strip_size_kb": 64, 00:15:09.602 "state": "online", 00:15:09.602 "raid_level": "raid0", 00:15:09.602 "superblock": false, 00:15:09.602 "num_base_bdevs": 3, 00:15:09.602 "num_base_bdevs_discovered": 3, 00:15:09.602 "num_base_bdevs_operational": 3, 00:15:09.602 "base_bdevs_list": [ 00:15:09.602 { 00:15:09.602 "name": "BaseBdev1", 00:15:09.602 "uuid": "9d457831-623e-4914-b265-e51fb48063da", 00:15:09.602 "is_configured": true, 00:15:09.602 "data_offset": 0, 00:15:09.602 "data_size": 65536 00:15:09.602 }, 00:15:09.602 { 00:15:09.602 "name": "BaseBdev2", 00:15:09.602 "uuid": "2f28f708-8134-41e4-876e-796ccd734d12", 00:15:09.602 "is_configured": true, 00:15:09.602 "data_offset": 0, 00:15:09.602 "data_size": 65536 00:15:09.602 }, 00:15:09.602 { 00:15:09.602 "name": "BaseBdev3", 00:15:09.602 "uuid": "c6edff34-227b-44ef-a9dd-8eb969384bef", 00:15:09.602 "is_configured": true, 00:15:09.602 "data_offset": 0, 00:15:09.602 "data_size": 65536 00:15:09.602 } 00:15:09.602 ] 00:15:09.602 }' 00:15:09.602 02:38:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.602 02:38:34 -- common/autotest_common.sh@10 -- # set +x 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:10.537 [2024-07-11 02:38:35.472898] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.537 [2024-07-11 02:38:35.472932] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.537 [2024-07-11 02:38:35.473039] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.537 02:38:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.795 02:38:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.796 "name": "Existed_Raid", 00:15:10.796 "uuid": "4650e5ac-c64f-4e94-bc0c-977d008beb8a", 00:15:10.796 "strip_size_kb": 64, 00:15:10.796 "state": "offline", 00:15:10.796 "raid_level": "raid0", 00:15:10.796 "superblock": false, 00:15:10.796 "num_base_bdevs": 3, 00:15:10.796 "num_base_bdevs_discovered": 2, 00:15:10.796 "num_base_bdevs_operational": 2, 00:15:10.796 "base_bdevs_list": [ 00:15:10.796 { 00:15:10.796 "name": null, 00:15:10.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.796 "is_configured": false, 00:15:10.796 "data_offset": 0, 00:15:10.796 "data_size": 65536 00:15:10.796 }, 00:15:10.796 { 00:15:10.796 "name": "BaseBdev2", 00:15:10.796 "uuid": "2f28f708-8134-41e4-876e-796ccd734d12", 00:15:10.796 "is_configured": true, 00:15:10.796 "data_offset": 0, 00:15:10.796 "data_size": 65536 00:15:10.796 }, 00:15:10.796 { 00:15:10.796 "name": "BaseBdev3", 00:15:10.796 "uuid": "c6edff34-227b-44ef-a9dd-8eb969384bef", 00:15:10.796 "is_configured": true, 00:15:10.796 "data_offset": 0, 00:15:10.796 "data_size": 65536 00:15:10.796 } 00:15:10.796 ] 00:15:10.796 }' 00:15:10.796 02:38:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.796 02:38:35 -- common/autotest_common.sh@10 -- # set +x 00:15:11.362 02:38:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:11.362 02:38:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:11.362 02:38:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.362 02:38:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:11.621 02:38:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:11.621 02:38:36 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.621 02:38:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:11.879 [2024-07-11 02:38:36.738806] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.879 02:38:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:12.137 [2024-07-11 02:38:37.212663] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:12.137 [2024-07-11 02:38:37.212741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:12.395 02:38:37 -- bdev/bdev_raid.sh@287 -- # killprocess 127424 00:15:12.395 02:38:37 -- common/autotest_common.sh@926 -- # '[' -z 127424 ']' 00:15:12.395 02:38:37 -- common/autotest_common.sh@930 -- # kill -0 127424 00:15:12.395 02:38:37 -- common/autotest_common.sh@931 -- # uname 00:15:12.395 02:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.395 02:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127424 00:15:12.395 killing process with pid 127424 00:15:12.395 02:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.395 02:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.395 02:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127424' 00:15:12.395 02:38:37 -- common/autotest_common.sh@945 -- # kill 127424 00:15:12.395 02:38:37 -- common/autotest_common.sh@950 -- # wait 127424 00:15:12.395 [2024-07-11 02:38:37.454185] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.395 [2024-07-11 02:38:37.454292] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.654 ************************************ 00:15:12.654 END TEST raid_state_function_test 00:15:12.654 ************************************ 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:12.654 00:15:12.654 real 0m10.672s 00:15:12.654 user 0m19.899s 00:15:12.654 sys 0m1.160s 00:15:12.654 02:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.654 02:38:37 -- common/autotest_common.sh@10 -- # set +x 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:12.654 02:38:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:12.654 02:38:37 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.654 02:38:37 -- common/autotest_common.sh@10 -- # set +x 00:15:12.654 ************************************ 00:15:12.654 START TEST raid_state_function_test_sb 00:15:12.654 ************************************ 00:15:12.654 02:38:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=127815 00:15:12.654 Process raid pid: 127815 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127815' 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127815 /var/tmp/spdk-raid.sock 00:15:12.654 02:38:37 -- common/autotest_common.sh@819 -- # '[' -z 127815 ']' 00:15:12.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.654 02:38:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.654 02:38:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:12.654 02:38:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.654 02:38:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.654 02:38:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.654 02:38:37 -- common/autotest_common.sh@10 -- # set +x 00:15:12.913 [2024-07-11 02:38:37.773485] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
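The _sb variant starting here repeats the same state-machine checks with superblock=true. In the xtrace this only flips superblock_create_arg from empty to -s, so the bdev_raid_create call traced further down gains one flag (sketch only; $rpc/$sock as above):

  # superblock run (this test):
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
          -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # plain run (previous test) omitted -s:
  #   bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The flag is also why the superblock runs in this log report data_offset 2048 and data_size 63488 per base bdev while the plain run reports 0 and 65536, consistent with the superblock reserving the first 2048 blocks of each 65536-block malloc disk.
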
00:15:12.913 [2024-07-11 02:38:37.773756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.913 [2024-07-11 02:38:37.920275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.913 [2024-07-11 02:38:37.974717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.171 [2024-07-11 02:38:38.024895] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.737 02:38:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.737 02:38:38 -- common/autotest_common.sh@852 -- # return 0 00:15:13.737 02:38:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:13.995 [2024-07-11 02:38:38.883789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.995 [2024-07-11 02:38:38.883888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.995 [2024-07-11 02:38:38.883902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.995 [2024-07-11 02:38:38.883919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.995 [2024-07-11 02:38:38.883926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.995 [2024-07-11 02:38:38.883959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.995 02:38:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.253 02:38:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.253 "name": "Existed_Raid", 00:15:14.253 "uuid": "aff34312-b82c-4e76-9da2-ed4e42ffe52a", 00:15:14.253 "strip_size_kb": 64, 00:15:14.253 "state": "configuring", 00:15:14.253 "raid_level": "raid0", 00:15:14.253 "superblock": true, 00:15:14.253 "num_base_bdevs": 3, 00:15:14.253 "num_base_bdevs_discovered": 0, 00:15:14.253 "num_base_bdevs_operational": 3, 00:15:14.253 "base_bdevs_list": [ 00:15:14.253 { 00:15:14.253 "name": "BaseBdev1", 00:15:14.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.253 "is_configured": false, 00:15:14.253 "data_offset": 0, 00:15:14.253 "data_size": 0 00:15:14.253 }, 00:15:14.253 { 00:15:14.253 "name": "BaseBdev2", 00:15:14.253 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:14.253 "is_configured": false, 00:15:14.253 "data_offset": 0, 00:15:14.253 "data_size": 0 00:15:14.253 }, 00:15:14.253 { 00:15:14.253 "name": "BaseBdev3", 00:15:14.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.253 "is_configured": false, 00:15:14.253 "data_offset": 0, 00:15:14.253 "data_size": 0 00:15:14.253 } 00:15:14.253 ] 00:15:14.253 }' 00:15:14.253 02:38:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.253 02:38:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.818 02:38:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:15.075 [2024-07-11 02:38:39.923840] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.075 [2024-07-11 02:38:39.923890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:15.075 02:38:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:15.075 [2024-07-11 02:38:40.159899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.075 [2024-07-11 02:38:40.159956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.075 [2024-07-11 02:38:40.159983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.075 [2024-07-11 02:38:40.160001] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.075 [2024-07-11 02:38:40.160008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:15.075 [2024-07-11 02:38:40.160029] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.332 02:38:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.332 [2024-07-11 02:38:40.358018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.332 BaseBdev1 00:15:15.332 02:38:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:15.332 02:38:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:15.332 02:38:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:15.332 02:38:40 -- common/autotest_common.sh@889 -- # local i 00:15:15.332 02:38:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:15.332 02:38:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:15.332 02:38:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.589 02:38:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.846 [ 00:15:15.846 { 00:15:15.846 "name": "BaseBdev1", 00:15:15.846 "aliases": [ 00:15:15.846 "56029349-87c4-4229-8c60-205037b01fe8" 00:15:15.846 ], 00:15:15.846 "product_name": "Malloc disk", 00:15:15.846 "block_size": 512, 00:15:15.846 "num_blocks": 65536, 00:15:15.846 "uuid": "56029349-87c4-4229-8c60-205037b01fe8", 00:15:15.846 "assigned_rate_limits": { 00:15:15.846 "rw_ios_per_sec": 0, 00:15:15.846 "rw_mbytes_per_sec": 0, 00:15:15.846 "r_mbytes_per_sec": 0, 00:15:15.846 
"w_mbytes_per_sec": 0 00:15:15.846 }, 00:15:15.846 "claimed": true, 00:15:15.846 "claim_type": "exclusive_write", 00:15:15.846 "zoned": false, 00:15:15.846 "supported_io_types": { 00:15:15.846 "read": true, 00:15:15.846 "write": true, 00:15:15.846 "unmap": true, 00:15:15.846 "write_zeroes": true, 00:15:15.846 "flush": true, 00:15:15.846 "reset": true, 00:15:15.846 "compare": false, 00:15:15.846 "compare_and_write": false, 00:15:15.846 "abort": true, 00:15:15.846 "nvme_admin": false, 00:15:15.846 "nvme_io": false 00:15:15.846 }, 00:15:15.846 "memory_domains": [ 00:15:15.846 { 00:15:15.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.846 "dma_device_type": 2 00:15:15.846 } 00:15:15.846 ], 00:15:15.846 "driver_specific": {} 00:15:15.846 } 00:15:15.846 ] 00:15:15.846 02:38:40 -- common/autotest_common.sh@895 -- # return 0 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.846 02:38:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.103 02:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.103 "name": "Existed_Raid", 00:15:16.103 "uuid": "026d4302-4a4f-4bbb-9ac3-9d54348a1822", 00:15:16.103 "strip_size_kb": 64, 00:15:16.103 "state": "configuring", 00:15:16.103 "raid_level": "raid0", 00:15:16.103 "superblock": true, 00:15:16.103 "num_base_bdevs": 3, 00:15:16.103 "num_base_bdevs_discovered": 1, 00:15:16.103 "num_base_bdevs_operational": 3, 00:15:16.103 "base_bdevs_list": [ 00:15:16.103 { 00:15:16.103 "name": "BaseBdev1", 00:15:16.103 "uuid": "56029349-87c4-4229-8c60-205037b01fe8", 00:15:16.103 "is_configured": true, 00:15:16.103 "data_offset": 2048, 00:15:16.103 "data_size": 63488 00:15:16.103 }, 00:15:16.103 { 00:15:16.103 "name": "BaseBdev2", 00:15:16.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.103 "is_configured": false, 00:15:16.103 "data_offset": 0, 00:15:16.103 "data_size": 0 00:15:16.103 }, 00:15:16.103 { 00:15:16.103 "name": "BaseBdev3", 00:15:16.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.103 "is_configured": false, 00:15:16.103 "data_offset": 0, 00:15:16.103 "data_size": 0 00:15:16.103 } 00:15:16.103 ] 00:15:16.103 }' 00:15:16.103 02:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.103 02:38:40 -- common/autotest_common.sh@10 -- # set +x 00:15:16.668 02:38:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:16.926 [2024-07-11 02:38:41.804243] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.926 [2024-07-11 02:38:41.804311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:15:16.926 02:38:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:16.926 02:38:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:16.926 02:38:41 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.183 BaseBdev1 00:15:17.183 02:38:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:17.183 02:38:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:17.183 02:38:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:17.183 02:38:42 -- common/autotest_common.sh@889 -- # local i 00:15:17.183 02:38:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:17.183 02:38:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:17.183 02:38:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.441 02:38:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.700 [ 00:15:17.700 { 00:15:17.700 "name": "BaseBdev1", 00:15:17.700 "aliases": [ 00:15:17.700 "d0380121-ae31-479b-8a4e-b91877fda538" 00:15:17.700 ], 00:15:17.700 "product_name": "Malloc disk", 00:15:17.700 "block_size": 512, 00:15:17.700 "num_blocks": 65536, 00:15:17.700 "uuid": "d0380121-ae31-479b-8a4e-b91877fda538", 00:15:17.700 "assigned_rate_limits": { 00:15:17.700 "rw_ios_per_sec": 0, 00:15:17.700 "rw_mbytes_per_sec": 0, 00:15:17.700 "r_mbytes_per_sec": 0, 00:15:17.700 "w_mbytes_per_sec": 0 00:15:17.700 }, 00:15:17.700 "claimed": false, 00:15:17.700 "zoned": false, 00:15:17.700 "supported_io_types": { 00:15:17.700 "read": true, 00:15:17.700 "write": true, 00:15:17.700 "unmap": true, 00:15:17.700 "write_zeroes": true, 00:15:17.700 "flush": true, 00:15:17.700 "reset": true, 00:15:17.700 "compare": false, 00:15:17.700 "compare_and_write": false, 00:15:17.700 "abort": true, 00:15:17.700 "nvme_admin": false, 00:15:17.700 "nvme_io": false 00:15:17.700 }, 00:15:17.700 "memory_domains": [ 00:15:17.700 { 00:15:17.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.700 "dma_device_type": 2 00:15:17.700 } 00:15:17.700 ], 00:15:17.700 "driver_specific": {} 00:15:17.700 } 00:15:17.700 ] 00:15:17.700 02:38:42 -- common/autotest_common.sh@895 -- # return 0 00:15:17.700 02:38:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:17.959 [2024-07-11 02:38:42.798904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.959 [2024-07-11 02:38:42.800737] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.959 [2024-07-11 02:38:42.800814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.959 [2024-07-11 02:38:42.800841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:17.959 [2024-07-11 02:38:42.800876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:17.959 
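The trace above shows the configuring-state path: bdev_raid_create is re-issued while only BaseBdev1 exists (BaseBdev2 and BaseBdev3 are still missing), so Existed_Raid stays in the "configuring" state until all three base bdevs are claimed. A minimal sketch of that RPC sequence against a standalone SPDK target, using only commands visible in the trace (socket path, bdev names, and the 32 MiB x 512-byte malloc geometry mirror the test; illustrative, not the test script itself):

    # Create one of the three base devices, then the raid0 array on top of all three.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # With two base bdevs absent, the array reports "configuring", not "online".
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'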
02:38:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.959 "name": "Existed_Raid", 00:15:17.959 "uuid": "f9aa7cb0-81a9-443f-bc34-ded5baa5d8c4", 00:15:17.959 "strip_size_kb": 64, 00:15:17.959 "state": "configuring", 00:15:17.959 "raid_level": "raid0", 00:15:17.959 "superblock": true, 00:15:17.959 "num_base_bdevs": 3, 00:15:17.959 "num_base_bdevs_discovered": 1, 00:15:17.959 "num_base_bdevs_operational": 3, 00:15:17.959 "base_bdevs_list": [ 00:15:17.959 { 00:15:17.959 "name": "BaseBdev1", 00:15:17.959 "uuid": "d0380121-ae31-479b-8a4e-b91877fda538", 00:15:17.959 "is_configured": true, 00:15:17.959 "data_offset": 2048, 00:15:17.959 "data_size": 63488 00:15:17.959 }, 00:15:17.959 { 00:15:17.959 "name": "BaseBdev2", 00:15:17.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.959 "is_configured": false, 00:15:17.959 "data_offset": 0, 00:15:17.959 "data_size": 0 00:15:17.959 }, 00:15:17.959 { 00:15:17.959 "name": "BaseBdev3", 00:15:17.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.959 "is_configured": false, 00:15:17.959 "data_offset": 0, 00:15:17.959 "data_size": 0 00:15:17.959 } 00:15:17.959 ] 00:15:17.959 }' 00:15:17.959 02:38:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.959 02:38:42 -- common/autotest_common.sh@10 -- # set +x 00:15:18.900 02:38:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.900 [2024-07-11 02:38:43.881932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.900 BaseBdev2 00:15:18.900 02:38:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:18.900 02:38:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:18.900 02:38:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:18.900 02:38:43 -- common/autotest_common.sh@889 -- # local i 00:15:18.900 02:38:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:18.900 02:38:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:18.900 02:38:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.170 02:38:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:19.170 [ 00:15:19.170 { 00:15:19.170 "name": "BaseBdev2", 00:15:19.170 "aliases": [ 00:15:19.170 
"6251f576-b73f-48fc-9a1d-1a94ad5bd3d2" 00:15:19.170 ], 00:15:19.170 "product_name": "Malloc disk", 00:15:19.170 "block_size": 512, 00:15:19.170 "num_blocks": 65536, 00:15:19.170 "uuid": "6251f576-b73f-48fc-9a1d-1a94ad5bd3d2", 00:15:19.170 "assigned_rate_limits": { 00:15:19.170 "rw_ios_per_sec": 0, 00:15:19.170 "rw_mbytes_per_sec": 0, 00:15:19.170 "r_mbytes_per_sec": 0, 00:15:19.170 "w_mbytes_per_sec": 0 00:15:19.170 }, 00:15:19.170 "claimed": true, 00:15:19.170 "claim_type": "exclusive_write", 00:15:19.170 "zoned": false, 00:15:19.170 "supported_io_types": { 00:15:19.170 "read": true, 00:15:19.170 "write": true, 00:15:19.170 "unmap": true, 00:15:19.170 "write_zeroes": true, 00:15:19.170 "flush": true, 00:15:19.170 "reset": true, 00:15:19.170 "compare": false, 00:15:19.170 "compare_and_write": false, 00:15:19.170 "abort": true, 00:15:19.170 "nvme_admin": false, 00:15:19.170 "nvme_io": false 00:15:19.170 }, 00:15:19.170 "memory_domains": [ 00:15:19.170 { 00:15:19.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.170 "dma_device_type": 2 00:15:19.170 } 00:15:19.170 ], 00:15:19.170 "driver_specific": {} 00:15:19.170 } 00:15:19.170 ] 00:15:19.428 02:38:44 -- common/autotest_common.sh@895 -- # return 0 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.428 "name": "Existed_Raid", 00:15:19.428 "uuid": "f9aa7cb0-81a9-443f-bc34-ded5baa5d8c4", 00:15:19.428 "strip_size_kb": 64, 00:15:19.428 "state": "configuring", 00:15:19.428 "raid_level": "raid0", 00:15:19.428 "superblock": true, 00:15:19.428 "num_base_bdevs": 3, 00:15:19.428 "num_base_bdevs_discovered": 2, 00:15:19.428 "num_base_bdevs_operational": 3, 00:15:19.428 "base_bdevs_list": [ 00:15:19.428 { 00:15:19.428 "name": "BaseBdev1", 00:15:19.428 "uuid": "d0380121-ae31-479b-8a4e-b91877fda538", 00:15:19.428 "is_configured": true, 00:15:19.428 "data_offset": 2048, 00:15:19.428 "data_size": 63488 00:15:19.428 }, 00:15:19.428 { 00:15:19.428 "name": "BaseBdev2", 00:15:19.428 "uuid": "6251f576-b73f-48fc-9a1d-1a94ad5bd3d2", 00:15:19.428 "is_configured": true, 00:15:19.428 "data_offset": 2048, 00:15:19.428 "data_size": 63488 00:15:19.428 }, 00:15:19.428 { 00:15:19.428 "name": "BaseBdev3", 00:15:19.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.428 "is_configured": false, 00:15:19.428 "data_offset": 0, 00:15:19.428 "data_size": 0 00:15:19.428 
} 00:15:19.428 ] 00:15:19.428 }' 00:15:19.428 02:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.428 02:38:44 -- common/autotest_common.sh@10 -- # set +x 00:15:20.364 02:38:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:20.364 [2024-07-11 02:38:45.287378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.364 [2024-07-11 02:38:45.287675] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:15:20.364 [2024-07-11 02:38:45.287707] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:20.364 [2024-07-11 02:38:45.287848] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:20.364 BaseBdev3 00:15:20.364 [2024-07-11 02:38:45.288244] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:15:20.364 [2024-07-11 02:38:45.288266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:15:20.364 [2024-07-11 02:38:45.288428] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.364 02:38:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:20.364 02:38:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:20.364 02:38:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:20.364 02:38:45 -- common/autotest_common.sh@889 -- # local i 00:15:20.364 02:38:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:20.364 02:38:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:20.364 02:38:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.623 02:38:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:20.623 [ 00:15:20.623 { 00:15:20.623 "name": "BaseBdev3", 00:15:20.623 "aliases": [ 00:15:20.623 "631f8cd7-81a3-4ad8-8526-482513f6453a" 00:15:20.623 ], 00:15:20.623 "product_name": "Malloc disk", 00:15:20.623 "block_size": 512, 00:15:20.623 "num_blocks": 65536, 00:15:20.623 "uuid": "631f8cd7-81a3-4ad8-8526-482513f6453a", 00:15:20.623 "assigned_rate_limits": { 00:15:20.623 "rw_ios_per_sec": 0, 00:15:20.623 "rw_mbytes_per_sec": 0, 00:15:20.623 "r_mbytes_per_sec": 0, 00:15:20.623 "w_mbytes_per_sec": 0 00:15:20.623 }, 00:15:20.623 "claimed": true, 00:15:20.623 "claim_type": "exclusive_write", 00:15:20.623 "zoned": false, 00:15:20.623 "supported_io_types": { 00:15:20.623 "read": true, 00:15:20.623 "write": true, 00:15:20.623 "unmap": true, 00:15:20.623 "write_zeroes": true, 00:15:20.623 "flush": true, 00:15:20.623 "reset": true, 00:15:20.623 "compare": false, 00:15:20.623 "compare_and_write": false, 00:15:20.623 "abort": true, 00:15:20.623 "nvme_admin": false, 00:15:20.623 "nvme_io": false 00:15:20.623 }, 00:15:20.623 "memory_domains": [ 00:15:20.623 { 00:15:20.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.623 "dma_device_type": 2 00:15:20.623 } 00:15:20.623 ], 00:15:20.623 "driver_specific": {} 00:15:20.623 } 00:15:20.623 ] 00:15:20.623 02:38:45 -- common/autotest_common.sh@895 -- # return 0 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.623 02:38:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.883 02:38:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.883 "name": "Existed_Raid", 00:15:20.883 "uuid": "f9aa7cb0-81a9-443f-bc34-ded5baa5d8c4", 00:15:20.883 "strip_size_kb": 64, 00:15:20.883 "state": "online", 00:15:20.883 "raid_level": "raid0", 00:15:20.883 "superblock": true, 00:15:20.883 "num_base_bdevs": 3, 00:15:20.883 "num_base_bdevs_discovered": 3, 00:15:20.883 "num_base_bdevs_operational": 3, 00:15:20.883 "base_bdevs_list": [ 00:15:20.883 { 00:15:20.883 "name": "BaseBdev1", 00:15:20.883 "uuid": "d0380121-ae31-479b-8a4e-b91877fda538", 00:15:20.883 "is_configured": true, 00:15:20.883 "data_offset": 2048, 00:15:20.883 "data_size": 63488 00:15:20.883 }, 00:15:20.883 { 00:15:20.883 "name": "BaseBdev2", 00:15:20.883 "uuid": "6251f576-b73f-48fc-9a1d-1a94ad5bd3d2", 00:15:20.883 "is_configured": true, 00:15:20.883 "data_offset": 2048, 00:15:20.883 "data_size": 63488 00:15:20.883 }, 00:15:20.883 { 00:15:20.883 "name": "BaseBdev3", 00:15:20.883 "uuid": "631f8cd7-81a3-4ad8-8526-482513f6453a", 00:15:20.883 "is_configured": true, 00:15:20.883 "data_offset": 2048, 00:15:20.883 "data_size": 63488 00:15:20.883 } 00:15:20.883 ] 00:15:20.883 }' 00:15:20.883 02:38:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.883 02:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 02:38:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.709 [2024-07-11 02:38:46.647730] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.709 [2024-07-11 02:38:46.647760] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.709 [2024-07-11 02:38:46.647829] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.709 02:38:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.968 02:38:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.968 "name": "Existed_Raid", 00:15:21.968 "uuid": "f9aa7cb0-81a9-443f-bc34-ded5baa5d8c4", 00:15:21.968 "strip_size_kb": 64, 00:15:21.968 "state": "offline", 00:15:21.968 "raid_level": "raid0", 00:15:21.968 "superblock": true, 00:15:21.968 "num_base_bdevs": 3, 00:15:21.968 "num_base_bdevs_discovered": 2, 00:15:21.968 "num_base_bdevs_operational": 2, 00:15:21.968 "base_bdevs_list": [ 00:15:21.968 { 00:15:21.968 "name": null, 00:15:21.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.968 "is_configured": false, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 }, 00:15:21.968 { 00:15:21.968 "name": "BaseBdev2", 00:15:21.968 "uuid": "6251f576-b73f-48fc-9a1d-1a94ad5bd3d2", 00:15:21.968 "is_configured": true, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 }, 00:15:21.968 { 00:15:21.968 "name": "BaseBdev3", 00:15:21.968 "uuid": "631f8cd7-81a3-4ad8-8526-482513f6453a", 00:15:21.968 "is_configured": true, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 } 00:15:21.968 ] 00:15:21.968 }' 00:15:21.968 02:38:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.968 02:38:46 -- common/autotest_common.sh@10 -- # set +x 00:15:22.533 02:38:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:22.533 02:38:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:22.533 02:38:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.533 02:38:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:22.791 02:38:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:22.791 02:38:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.791 02:38:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:23.049 [2024-07-11 02:38:47.992937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.049 02:38:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:23.049 02:38:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:23.049 02:38:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.049 02:38:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:23.307 02:38:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:23.307 02:38:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:23.307 02:38:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:23.564 [2024-07-11 02:38:48.469760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:23.564 [2024-07-11 
02:38:48.469820] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:15:23.564 02:38:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:23.564 02:38:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:23.564 02:38:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:23.565 02:38:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.823 02:38:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:23.823 02:38:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:23.823 02:38:48 -- bdev/bdev_raid.sh@287 -- # killprocess 127815 00:15:23.823 02:38:48 -- common/autotest_common.sh@926 -- # '[' -z 127815 ']' 00:15:23.823 02:38:48 -- common/autotest_common.sh@930 -- # kill -0 127815 00:15:23.823 02:38:48 -- common/autotest_common.sh@931 -- # uname 00:15:23.823 02:38:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.823 02:38:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127815 00:15:23.823 killing process with pid 127815 00:15:23.823 02:38:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:23.823 02:38:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:23.823 02:38:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127815' 00:15:23.823 02:38:48 -- common/autotest_common.sh@945 -- # kill 127815 00:15:23.823 02:38:48 -- common/autotest_common.sh@950 -- # wait 127815 00:15:23.823 [2024-07-11 02:38:48.780347] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.823 [2024-07-11 02:38:48.780439] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.082 ************************************ 00:15:24.082 END TEST raid_state_function_test_sb 00:15:24.082 ************************************ 00:15:24.082 02:38:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:24.082 00:15:24.082 real 0m11.273s 00:15:24.082 user 0m20.935s 00:15:24.082 sys 0m1.259s 00:15:24.082 02:38:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.082 02:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:24.082 02:38:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:24.082 02:38:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.082 02:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:24.082 ************************************ 00:15:24.082 START TEST raid_superblock_test 00:15:24.082 ************************************ 00:15:24.082 02:38:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:24.082 02:38:49 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=128203 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128203 /var/tmp/spdk-raid.sock 00:15:24.082 02:38:49 -- common/autotest_common.sh@819 -- # '[' -z 128203 ']' 00:15:24.082 02:38:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:24.082 02:38:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.082 02:38:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:24.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.082 02:38:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.082 02:38:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:24.082 02:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:24.082 [2024-07-11 02:38:49.101756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:24.082 [2024-07-11 02:38:49.102541] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128203 ] 00:15:24.341 [2024-07-11 02:38:49.250342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.341 [2024-07-11 02:38:49.317151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.341 [2024-07-11 02:38:49.373213] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.276 02:38:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.276 02:38:50 -- common/autotest_common.sh@852 -- # return 0 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:25.276 malloc1 00:15:25.276 02:38:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.535 [2024-07-11 02:38:50.465340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.535 [2024-07-11 02:38:50.465439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.535 
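Each malloc bdev in this test is wrapped in a passthru bdev (pt1, pt2, pt3) with a fixed UUID; the test later deletes only the passthru layer, leaving the raid superblock on the malloc bdevs intact so that re-creation attempts can be checked against it. A sketch of the wrapping step, reusing the exact RPCs from the trace (same /var/tmp/spdk-raid.sock target and UUID assumed):

    # Back the passthru bdev with a 32 MiB, 512-byte-block malloc disk.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    # pt1 passes I/O straight through to malloc1 and carries a fixed UUID.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001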
[2024-07-11 02:38:50.465474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:25.535 [2024-07-11 02:38:50.465544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.535 [2024-07-11 02:38:50.467671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.535 [2024-07-11 02:38:50.467722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.535 pt1 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.535 02:38:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:25.792 malloc2 00:15:25.792 02:38:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.051 [2024-07-11 02:38:50.911310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.051 [2024-07-11 02:38:50.911392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.051 [2024-07-11 02:38:50.911426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:26.051 [2024-07-11 02:38:50.911459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.051 [2024-07-11 02:38:50.913357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.051 [2024-07-11 02:38:50.913401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.051 pt2 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.051 02:38:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:26.051 malloc3 00:15:26.310 02:38:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:26.310 [2024-07-11 02:38:51.328812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:26.310 [2024-07-11 02:38:51.328910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.310 
[2024-07-11 02:38:51.328948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:26.310 [2024-07-11 02:38:51.328985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.310 [2024-07-11 02:38:51.330986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.310 [2024-07-11 02:38:51.331034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:26.310 pt3 00:15:26.310 02:38:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:26.310 02:38:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:26.310 02:38:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:26.569 [2024-07-11 02:38:51.512901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.569 [2024-07-11 02:38:51.514578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.569 [2024-07-11 02:38:51.514645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:26.569 [2024-07-11 02:38:51.514872] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:26.569 [2024-07-11 02:38:51.514896] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:26.569 [2024-07-11 02:38:51.515034] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:15:26.569 [2024-07-11 02:38:51.515433] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:26.569 [2024-07-11 02:38:51.515455] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:26.569 [2024-07-11 02:38:51.515670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.569 02:38:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.827 02:38:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.827 "name": "raid_bdev1", 00:15:26.827 "uuid": "b02a6c02-44e5-47b1-9a01-7ff722d3008d", 00:15:26.827 "strip_size_kb": 64, 00:15:26.828 "state": "online", 00:15:26.828 "raid_level": "raid0", 00:15:26.828 "superblock": true, 00:15:26.828 "num_base_bdevs": 3, 00:15:26.828 "num_base_bdevs_discovered": 3, 00:15:26.828 "num_base_bdevs_operational": 3, 00:15:26.828 "base_bdevs_list": [ 00:15:26.828 { 00:15:26.828 "name": "pt1", 00:15:26.828 "uuid": 
"aef6f287-5a1b-55b6-86a7-74074896c06e", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 2048, 00:15:26.828 "data_size": 63488 00:15:26.828 }, 00:15:26.828 { 00:15:26.828 "name": "pt2", 00:15:26.828 "uuid": "974bbbca-1a5d-578d-aea1-fe5d03234312", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 2048, 00:15:26.828 "data_size": 63488 00:15:26.828 }, 00:15:26.828 { 00:15:26.828 "name": "pt3", 00:15:26.828 "uuid": "14511b0a-28c6-5b76-918a-7d8e941bf825", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 2048, 00:15:26.828 "data_size": 63488 00:15:26.828 } 00:15:26.828 ] 00:15:26.828 }' 00:15:26.828 02:38:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.828 02:38:51 -- common/autotest_common.sh@10 -- # set +x 00:15:27.394 02:38:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:27.394 02:38:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:27.653 [2024-07-11 02:38:52.585275] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.653 02:38:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b02a6c02-44e5-47b1-9a01-7ff722d3008d 00:15:27.653 02:38:52 -- bdev/bdev_raid.sh@380 -- # '[' -z b02a6c02-44e5-47b1-9a01-7ff722d3008d ']' 00:15:27.653 02:38:52 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:27.921 [2024-07-11 02:38:52.825064] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.921 [2024-07-11 02:38:52.825088] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.921 [2024-07-11 02:38:52.825183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.921 [2024-07-11 02:38:52.825291] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.921 [2024-07-11 02:38:52.825321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:27.921 02:38:52 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.921 02:38:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:28.180 02:38:53 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:28.180 02:38:53 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:28.180 02:38:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.180 02:38:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:28.438 02:38:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.439 02:38:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:28.696 02:38:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.696 02:38:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:28.696 02:38:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:28.696 02:38:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:28.953 02:38:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:28.953 02:38:53 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:28.953 02:38:53 -- common/autotest_common.sh@640 -- # local es=0 00:15:28.953 02:38:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:28.953 02:38:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.953 02:38:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:28.953 02:38:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.953 02:38:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:28.953 02:38:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.953 02:38:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:28.953 02:38:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.953 02:38:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:28.953 02:38:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:29.210 [2024-07-11 02:38:54.141301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:29.210 [2024-07-11 02:38:54.143213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:29.210 [2024-07-11 02:38:54.143283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:29.210 [2024-07-11 02:38:54.143337] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:29.210 [2024-07-11 02:38:54.143463] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:29.210 [2024-07-11 02:38:54.143534] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:29.210 [2024-07-11 02:38:54.143581] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.210 [2024-07-11 02:38:54.143594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:29.210 request: 00:15:29.210 { 00:15:29.210 "name": "raid_bdev1", 00:15:29.210 "raid_level": "raid0", 00:15:29.210 "base_bdevs": [ 00:15:29.210 "malloc1", 00:15:29.210 "malloc2", 00:15:29.210 "malloc3" 00:15:29.210 ], 00:15:29.210 "superblock": false, 00:15:29.210 "strip_size_kb": 64, 00:15:29.210 "method": "bdev_raid_create", 00:15:29.210 "req_id": 1 00:15:29.210 } 00:15:29.210 Got JSON-RPC error response 00:15:29.210 response: 00:15:29.210 { 00:15:29.210 "code": -17, 00:15:29.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:29.210 } 00:15:29.210 02:38:54 -- common/autotest_common.sh@643 -- # es=1 00:15:29.210 02:38:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:29.210 02:38:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:29.210 02:38:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:29.210 02:38:54 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:29.210 02:38:54 -- bdev/bdev_raid.sh@403 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.469 02:38:54 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:29.469 02:38:54 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:29.469 02:38:54 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.727 [2024-07-11 02:38:54.581297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.727 [2024-07-11 02:38:54.581372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.727 [2024-07-11 02:38:54.581408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:29.727 [2024-07-11 02:38:54.581429] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.727 [2024-07-11 02:38:54.583482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.727 [2024-07-11 02:38:54.583530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.727 [2024-07-11 02:38:54.583636] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:29.727 [2024-07-11 02:38:54.583698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.727 pt1 00:15:29.727 02:38:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:29.727 02:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:29.727 02:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.727 02:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.728 "name": "raid_bdev1", 00:15:29.728 "uuid": "b02a6c02-44e5-47b1-9a01-7ff722d3008d", 00:15:29.728 "strip_size_kb": 64, 00:15:29.728 "state": "configuring", 00:15:29.728 "raid_level": "raid0", 00:15:29.728 "superblock": true, 00:15:29.728 "num_base_bdevs": 3, 00:15:29.728 "num_base_bdevs_discovered": 1, 00:15:29.728 "num_base_bdevs_operational": 3, 00:15:29.728 "base_bdevs_list": [ 00:15:29.728 { 00:15:29.728 "name": "pt1", 00:15:29.728 "uuid": "aef6f287-5a1b-55b6-86a7-74074896c06e", 00:15:29.728 "is_configured": true, 00:15:29.728 "data_offset": 2048, 00:15:29.728 "data_size": 63488 00:15:29.728 }, 00:15:29.728 { 00:15:29.728 "name": null, 00:15:29.728 "uuid": "974bbbca-1a5d-578d-aea1-fe5d03234312", 00:15:29.728 "is_configured": false, 00:15:29.728 "data_offset": 2048, 00:15:29.728 "data_size": 63488 00:15:29.728 }, 00:15:29.728 { 00:15:29.728 "name": null, 00:15:29.728 "uuid": "14511b0a-28c6-5b76-918a-7d8e941bf825", 00:15:29.728 "is_configured": false, 00:15:29.728 "data_offset": 2048, 
00:15:29.728 "data_size": 63488 00:15:29.728 } 00:15:29.728 ] 00:15:29.728 }' 00:15:29.728 02:38:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.728 02:38:54 -- common/autotest_common.sh@10 -- # set +x 00:15:30.294 02:38:55 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:30.294 02:38:55 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:30.554 [2024-07-11 02:38:55.541476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:30.554 [2024-07-11 02:38:55.541564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.554 [2024-07-11 02:38:55.541603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:30.554 [2024-07-11 02:38:55.541672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.554 [2024-07-11 02:38:55.542091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.554 [2024-07-11 02:38:55.542120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:30.554 [2024-07-11 02:38:55.542203] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:30.554 [2024-07-11 02:38:55.542229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.554 pt2 00:15:30.554 02:38:55 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:30.812 [2024-07-11 02:38:55.725529] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.812 02:38:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.813 02:38:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.070 02:38:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.070 "name": "raid_bdev1", 00:15:31.070 "uuid": "b02a6c02-44e5-47b1-9a01-7ff722d3008d", 00:15:31.070 "strip_size_kb": 64, 00:15:31.070 "state": "configuring", 00:15:31.070 "raid_level": "raid0", 00:15:31.070 "superblock": true, 00:15:31.070 "num_base_bdevs": 3, 00:15:31.070 "num_base_bdevs_discovered": 1, 00:15:31.070 "num_base_bdevs_operational": 3, 00:15:31.070 "base_bdevs_list": [ 00:15:31.070 { 00:15:31.070 "name": "pt1", 00:15:31.070 "uuid": "aef6f287-5a1b-55b6-86a7-74074896c06e", 00:15:31.070 "is_configured": true, 00:15:31.070 "data_offset": 2048, 00:15:31.070 "data_size": 63488 00:15:31.070 }, 00:15:31.070 { 00:15:31.070 "name": null, 00:15:31.070 "uuid": 
"974bbbca-1a5d-578d-aea1-fe5d03234312", 00:15:31.070 "is_configured": false, 00:15:31.070 "data_offset": 2048, 00:15:31.070 "data_size": 63488 00:15:31.070 }, 00:15:31.070 { 00:15:31.070 "name": null, 00:15:31.070 "uuid": "14511b0a-28c6-5b76-918a-7d8e941bf825", 00:15:31.070 "is_configured": false, 00:15:31.070 "data_offset": 2048, 00:15:31.070 "data_size": 63488 00:15:31.070 } 00:15:31.070 ] 00:15:31.070 }' 00:15:31.070 02:38:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.070 02:38:55 -- common/autotest_common.sh@10 -- # set +x 00:15:31.636 02:38:56 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:31.636 02:38:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:31.636 02:38:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.899 [2024-07-11 02:38:56.753728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.899 [2024-07-11 02:38:56.753843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.899 [2024-07-11 02:38:56.753877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:31.899 [2024-07-11 02:38:56.753907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.899 [2024-07-11 02:38:56.754405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.899 [2024-07-11 02:38:56.754465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.899 [2024-07-11 02:38:56.754578] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:31.899 [2024-07-11 02:38:56.754605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.899 pt2 00:15:31.899 02:38:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:31.899 02:38:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:31.899 02:38:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:32.208 [2024-07-11 02:38:56.997777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:32.208 [2024-07-11 02:38:56.997849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.208 [2024-07-11 02:38:56.997876] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.208 [2024-07-11 02:38:56.997898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.208 [2024-07-11 02:38:56.998287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.208 [2024-07-11 02:38:56.998331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:32.208 [2024-07-11 02:38:56.998440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:32.208 [2024-07-11 02:38:56.998472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:32.208 [2024-07-11 02:38:56.998613] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:32.208 [2024-07-11 02:38:56.998629] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:32.208 [2024-07-11 02:38:56.998710] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002600 00:15:32.208 [2024-07-11 02:38:56.999022] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:32.208 [2024-07-11 02:38:56.999052] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:32.208 [2024-07-11 02:38:56.999199] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.208 pt3 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.208 "name": "raid_bdev1", 00:15:32.208 "uuid": "b02a6c02-44e5-47b1-9a01-7ff722d3008d", 00:15:32.208 "strip_size_kb": 64, 00:15:32.208 "state": "online", 00:15:32.208 "raid_level": "raid0", 00:15:32.208 "superblock": true, 00:15:32.208 "num_base_bdevs": 3, 00:15:32.208 "num_base_bdevs_discovered": 3, 00:15:32.208 "num_base_bdevs_operational": 3, 00:15:32.208 "base_bdevs_list": [ 00:15:32.208 { 00:15:32.208 "name": "pt1", 00:15:32.208 "uuid": "aef6f287-5a1b-55b6-86a7-74074896c06e", 00:15:32.208 "is_configured": true, 00:15:32.208 "data_offset": 2048, 00:15:32.208 "data_size": 63488 00:15:32.208 }, 00:15:32.208 { 00:15:32.208 "name": "pt2", 00:15:32.208 "uuid": "974bbbca-1a5d-578d-aea1-fe5d03234312", 00:15:32.208 "is_configured": true, 00:15:32.208 "data_offset": 2048, 00:15:32.208 "data_size": 63488 00:15:32.208 }, 00:15:32.208 { 00:15:32.208 "name": "pt3", 00:15:32.208 "uuid": "14511b0a-28c6-5b76-918a-7d8e941bf825", 00:15:32.208 "is_configured": true, 00:15:32.208 "data_offset": 2048, 00:15:32.208 "data_size": 63488 00:15:32.208 } 00:15:32.208 ] 00:15:32.208 }' 00:15:32.208 02:38:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.209 02:38:57 -- common/autotest_common.sh@10 -- # set +x 00:15:32.791 02:38:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:32.791 02:38:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:33.049 [2024-07-11 02:38:58.018238] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.049 02:38:58 -- bdev/bdev_raid.sh@430 -- # '[' b02a6c02-44e5-47b1-9a01-7ff722d3008d '!=' b02a6c02-44e5-47b1-9a01-7ff722d3008d ']' 00:15:33.049 02:38:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:33.049 02:38:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:33.049 
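The assertion traced just above re-reads raid_bdev1 through the generic bdev layer and compares its UUID with the one captured at creation; raid0 carries no redundancy, so has_redundancy then falls through to return 1 and teardown begins. Condensed from the RPCs in the trace, the check amounts to the following ($raid_bdev_uuid being the value the test saved when raid_bdev1 was first created):

    got=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[] | .uuid')
    # The UUID must survive the offline/online cycle unchanged.
    [ "$got" = "$raid_bdev_uuid" ] || exit 1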
02:38:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:33.049 02:38:58 -- bdev/bdev_raid.sh@511 -- # killprocess 128203 00:15:33.049 02:38:58 -- common/autotest_common.sh@926 -- # '[' -z 128203 ']' 00:15:33.049 02:38:58 -- common/autotest_common.sh@930 -- # kill -0 128203 00:15:33.049 02:38:58 -- common/autotest_common.sh@931 -- # uname 00:15:33.049 02:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:33.049 02:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128203 00:15:33.049 killing process with pid 128203 00:15:33.049 02:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:33.049 02:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:33.049 02:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128203' 00:15:33.049 02:38:58 -- common/autotest_common.sh@945 -- # kill 128203 00:15:33.049 02:38:58 -- common/autotest_common.sh@950 -- # wait 128203 00:15:33.049 [2024-07-11 02:38:58.051040] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.049 [2024-07-11 02:38:58.051154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.049 [2024-07-11 02:38:58.051231] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.049 [2024-07-11 02:38:58.051250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:33.049 [2024-07-11 02:38:58.079433] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.307 ************************************ 00:15:33.307 END TEST raid_superblock_test 00:15:33.307 ************************************ 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:33.307 00:15:33.307 real 0m9.243s 00:15:33.307 user 0m17.052s 00:15:33.307 sys 0m1.015s 00:15:33.307 02:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.307 02:38:58 -- common/autotest_common.sh@10 -- # set +x 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:33.307 02:38:58 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:33.307 02:38:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.307 02:38:58 -- common/autotest_common.sh@10 -- # set +x 00:15:33.307 ************************************ 00:15:33.307 START TEST raid_state_function_test 00:15:33.307 ************************************ 00:15:33.307 02:38:58 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=128510 00:15:33.307 Process raid pid: 128510 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128510' 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128510 /var/tmp/spdk-raid.sock 00:15:33.307 02:38:58 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:33.307 02:38:58 -- common/autotest_common.sh@819 -- # '[' -z 128510 ']' 00:15:33.307 02:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:33.307 02:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:33.307 02:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:33.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:33.307 02:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:33.307 02:38:58 -- common/autotest_common.sh@10 -- # set +x 00:15:33.307 [2024-07-11 02:38:58.396574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
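A minimal sketch of the harness pattern the trace above exercises, assuming the repo checkout at /home/vagrant/spdk_repo/spdk and the waitforlisten helper from the repo's autotest_common.sh:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    # Start a bare bdev service with bdev_raid debug logging on a private RPC socket.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the app is up and the UNIX domain socket accepts RPCs.
    waitforlisten "$raid_pid" "$rpc_sock"

Every subsequent rpc.py call in the trace is issued against this socket via -s.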
00:15:33.307 [2024-07-11 02:38:58.396832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.565 [2024-07-11 02:38:58.541357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.565 [2024-07-11 02:38:58.609196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.823 [2024-07-11 02:38:58.659898] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.401 02:38:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:34.401 02:38:59 -- common/autotest_common.sh@852 -- # return 0 00:15:34.401 02:38:59 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:34.401 [2024-07-11 02:38:59.475613] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.401 [2024-07-11 02:38:59.475674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.401 [2024-07-11 02:38:59.475687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.401 [2024-07-11 02:38:59.475702] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.401 [2024-07-11 02:38:59.475709] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.401 [2024-07-11 02:38:59.475740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.662 "name": "Existed_Raid", 00:15:34.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.662 "strip_size_kb": 64, 00:15:34.662 "state": "configuring", 00:15:34.662 "raid_level": "concat", 00:15:34.662 "superblock": false, 00:15:34.662 "num_base_bdevs": 3, 00:15:34.662 "num_base_bdevs_discovered": 0, 00:15:34.662 "num_base_bdevs_operational": 3, 00:15:34.662 "base_bdevs_list": [ 00:15:34.662 { 00:15:34.662 "name": "BaseBdev1", 00:15:34.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.662 "is_configured": false, 00:15:34.662 "data_offset": 0, 00:15:34.662 "data_size": 0 00:15:34.662 }, 00:15:34.662 { 00:15:34.662 "name": "BaseBdev2", 00:15:34.662 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.662 "is_configured": false, 00:15:34.662 "data_offset": 0, 00:15:34.662 "data_size": 0 00:15:34.662 }, 00:15:34.662 { 00:15:34.662 "name": "BaseBdev3", 00:15:34.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.662 "is_configured": false, 00:15:34.662 "data_offset": 0, 00:15:34.662 "data_size": 0 00:15:34.662 } 00:15:34.662 ] 00:15:34.662 }' 00:15:34.662 02:38:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.662 02:38:59 -- common/autotest_common.sh@10 -- # set +x 00:15:35.596 02:39:00 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.596 [2024-07-11 02:39:00.531669] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.596 [2024-07-11 02:39:00.531708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:35.596 02:39:00 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:35.854 [2024-07-11 02:39:00.779733] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.854 [2024-07-11 02:39:00.779809] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.854 [2024-07-11 02:39:00.779819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.854 [2024-07-11 02:39:00.779837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.854 [2024-07-11 02:39:00.779844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.854 [2024-07-11 02:39:00.779866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.854 02:39:00 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.112 [2024-07-11 02:39:01.038003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.112 BaseBdev1 00:15:36.112 02:39:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:36.112 02:39:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:36.112 02:39:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:36.112 02:39:01 -- common/autotest_common.sh@889 -- # local i 00:15:36.112 02:39:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:36.112 02:39:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:36.112 02:39:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.370 02:39:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.370 [ 00:15:36.370 { 00:15:36.370 "name": "BaseBdev1", 00:15:36.370 "aliases": [ 00:15:36.370 "12306b45-c2f2-46e8-aa76-5412e8947402" 00:15:36.370 ], 00:15:36.370 "product_name": "Malloc disk", 00:15:36.370 "block_size": 512, 00:15:36.370 "num_blocks": 65536, 00:15:36.370 "uuid": "12306b45-c2f2-46e8-aa76-5412e8947402", 00:15:36.371 "assigned_rate_limits": { 00:15:36.371 "rw_ios_per_sec": 0, 00:15:36.371 "rw_mbytes_per_sec": 0, 00:15:36.371 "r_mbytes_per_sec": 0, 00:15:36.371 "w_mbytes_per_sec": 
0 00:15:36.371 }, 00:15:36.371 "claimed": true, 00:15:36.371 "claim_type": "exclusive_write", 00:15:36.371 "zoned": false, 00:15:36.371 "supported_io_types": { 00:15:36.371 "read": true, 00:15:36.371 "write": true, 00:15:36.371 "unmap": true, 00:15:36.371 "write_zeroes": true, 00:15:36.371 "flush": true, 00:15:36.371 "reset": true, 00:15:36.371 "compare": false, 00:15:36.371 "compare_and_write": false, 00:15:36.371 "abort": true, 00:15:36.371 "nvme_admin": false, 00:15:36.371 "nvme_io": false 00:15:36.371 }, 00:15:36.371 "memory_domains": [ 00:15:36.371 { 00:15:36.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.371 "dma_device_type": 2 00:15:36.371 } 00:15:36.371 ], 00:15:36.371 "driver_specific": {} 00:15:36.371 } 00:15:36.371 ] 00:15:36.371 02:39:01 -- common/autotest_common.sh@895 -- # return 0 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.371 02:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.629 02:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.629 "name": "Existed_Raid", 00:15:36.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.629 "strip_size_kb": 64, 00:15:36.629 "state": "configuring", 00:15:36.629 "raid_level": "concat", 00:15:36.629 "superblock": false, 00:15:36.629 "num_base_bdevs": 3, 00:15:36.629 "num_base_bdevs_discovered": 1, 00:15:36.629 "num_base_bdevs_operational": 3, 00:15:36.629 "base_bdevs_list": [ 00:15:36.629 { 00:15:36.629 "name": "BaseBdev1", 00:15:36.629 "uuid": "12306b45-c2f2-46e8-aa76-5412e8947402", 00:15:36.629 "is_configured": true, 00:15:36.629 "data_offset": 0, 00:15:36.629 "data_size": 65536 00:15:36.629 }, 00:15:36.629 { 00:15:36.629 "name": "BaseBdev2", 00:15:36.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.629 "is_configured": false, 00:15:36.629 "data_offset": 0, 00:15:36.629 "data_size": 0 00:15:36.629 }, 00:15:36.629 { 00:15:36.629 "name": "BaseBdev3", 00:15:36.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.629 "is_configured": false, 00:15:36.629 "data_offset": 0, 00:15:36.629 "data_size": 0 00:15:36.629 } 00:15:36.629 ] 00:15:36.629 }' 00:15:36.629 02:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.629 02:39:01 -- common/autotest_common.sh@10 -- # set +x 00:15:37.196 02:39:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.454 [2024-07-11 02:39:02.494348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.454 [2024-07-11 02:39:02.494410] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005a80 name Existed_Raid, state configuring 00:15:37.454 02:39:02 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:37.454 02:39:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:37.713 [2024-07-11 02:39:02.690434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.713 [2024-07-11 02:39:02.692279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.713 [2024-07-11 02:39:02.692329] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.713 [2024-07-11 02:39:02.692339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.713 [2024-07-11 02:39:02.692362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.713 02:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.972 02:39:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.972 "name": "Existed_Raid", 00:15:37.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.972 "strip_size_kb": 64, 00:15:37.972 "state": "configuring", 00:15:37.972 "raid_level": "concat", 00:15:37.972 "superblock": false, 00:15:37.972 "num_base_bdevs": 3, 00:15:37.972 "num_base_bdevs_discovered": 1, 00:15:37.972 "num_base_bdevs_operational": 3, 00:15:37.972 "base_bdevs_list": [ 00:15:37.972 { 00:15:37.972 "name": "BaseBdev1", 00:15:37.972 "uuid": "12306b45-c2f2-46e8-aa76-5412e8947402", 00:15:37.972 "is_configured": true, 00:15:37.972 "data_offset": 0, 00:15:37.972 "data_size": 65536 00:15:37.972 }, 00:15:37.972 { 00:15:37.972 "name": "BaseBdev2", 00:15:37.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.972 "is_configured": false, 00:15:37.972 "data_offset": 0, 00:15:37.972 "data_size": 0 00:15:37.972 }, 00:15:37.972 { 00:15:37.972 "name": "BaseBdev3", 00:15:37.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.972 "is_configured": false, 00:15:37.972 "data_offset": 0, 00:15:37.972 "data_size": 0 00:15:37.972 } 00:15:37.972 ] 00:15:37.972 }' 00:15:37.972 02:39:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.972 02:39:02 -- common/autotest_common.sh@10 -- # set +x 00:15:38.539 02:39:03 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.797 [2024-07-11 02:39:03.833510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.797 BaseBdev2 00:15:38.797 02:39:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:38.797 02:39:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:38.797 02:39:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:38.797 02:39:03 -- common/autotest_common.sh@889 -- # local i 00:15:38.797 02:39:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:38.797 02:39:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:38.797 02:39:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.055 02:39:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.313 [ 00:15:39.313 { 00:15:39.313 "name": "BaseBdev2", 00:15:39.313 "aliases": [ 00:15:39.313 "c9e75cfe-2f70-49f6-bcd7-361b9c772261" 00:15:39.313 ], 00:15:39.313 "product_name": "Malloc disk", 00:15:39.313 "block_size": 512, 00:15:39.313 "num_blocks": 65536, 00:15:39.313 "uuid": "c9e75cfe-2f70-49f6-bcd7-361b9c772261", 00:15:39.313 "assigned_rate_limits": { 00:15:39.313 "rw_ios_per_sec": 0, 00:15:39.313 "rw_mbytes_per_sec": 0, 00:15:39.313 "r_mbytes_per_sec": 0, 00:15:39.313 "w_mbytes_per_sec": 0 00:15:39.313 }, 00:15:39.313 "claimed": true, 00:15:39.313 "claim_type": "exclusive_write", 00:15:39.313 "zoned": false, 00:15:39.313 "supported_io_types": { 00:15:39.313 "read": true, 00:15:39.313 "write": true, 00:15:39.313 "unmap": true, 00:15:39.313 "write_zeroes": true, 00:15:39.313 "flush": true, 00:15:39.313 "reset": true, 00:15:39.313 "compare": false, 00:15:39.313 "compare_and_write": false, 00:15:39.313 "abort": true, 00:15:39.313 "nvme_admin": false, 00:15:39.313 "nvme_io": false 00:15:39.313 }, 00:15:39.313 "memory_domains": [ 00:15:39.313 { 00:15:39.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.313 "dma_device_type": 2 00:15:39.313 } 00:15:39.313 ], 00:15:39.313 "driver_specific": {} 00:15:39.313 } 00:15:39.313 ] 00:15:39.313 02:39:04 -- common/autotest_common.sh@895 -- # return 0 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.313 02:39:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
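The state checks in this trace all follow one pattern: dump every RAID bdev over RPC, pick the array under test with jq, then compare individual fields. A condensed sketch of that check (not the helper's exact code), assuming the same socket and bdev names:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Fetch all raid bdevs and keep only the entry named Existed_Raid.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # With BaseBdev1 and BaseBdev2 claimed but BaseBdev3 still missing, the
    # array must report state "configuring" with 2 of 3 base bdevs discovered.
    [ "$(jq -r '.state' <<< "$info")" = configuring ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ]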
00:15:39.572 02:39:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.572 "name": "Existed_Raid", 00:15:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.572 "strip_size_kb": 64, 00:15:39.572 "state": "configuring", 00:15:39.572 "raid_level": "concat", 00:15:39.572 "superblock": false, 00:15:39.572 "num_base_bdevs": 3, 00:15:39.572 "num_base_bdevs_discovered": 2, 00:15:39.572 "num_base_bdevs_operational": 3, 00:15:39.572 "base_bdevs_list": [ 00:15:39.572 { 00:15:39.572 "name": "BaseBdev1", 00:15:39.572 "uuid": "12306b45-c2f2-46e8-aa76-5412e8947402", 00:15:39.572 "is_configured": true, 00:15:39.572 "data_offset": 0, 00:15:39.572 "data_size": 65536 00:15:39.572 }, 00:15:39.572 { 00:15:39.572 "name": "BaseBdev2", 00:15:39.572 "uuid": "c9e75cfe-2f70-49f6-bcd7-361b9c772261", 00:15:39.572 "is_configured": true, 00:15:39.572 "data_offset": 0, 00:15:39.572 "data_size": 65536 00:15:39.572 }, 00:15:39.572 { 00:15:39.572 "name": "BaseBdev3", 00:15:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.572 "is_configured": false, 00:15:39.572 "data_offset": 0, 00:15:39.572 "data_size": 0 00:15:39.572 } 00:15:39.572 ] 00:15:39.572 }' 00:15:39.572 02:39:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.572 02:39:04 -- common/autotest_common.sh@10 -- # set +x 00:15:40.139 02:39:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.397 [2024-07-11 02:39:05.346830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.397 [2024-07-11 02:39:05.346897] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:15:40.397 [2024-07-11 02:39:05.346908] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:40.397 [2024-07-11 02:39:05.347076] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:40.397 [2024-07-11 02:39:05.347541] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:15:40.397 [2024-07-11 02:39:05.347564] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:15:40.397 [2024-07-11 02:39:05.347859] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.397 BaseBdev3 00:15:40.397 02:39:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:40.397 02:39:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:40.397 02:39:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:40.397 02:39:05 -- common/autotest_common.sh@889 -- # local i 00:15:40.397 02:39:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:40.397 02:39:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:40.397 02:39:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.656 02:39:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.656 [ 00:15:40.656 { 00:15:40.656 "name": "BaseBdev3", 00:15:40.656 "aliases": [ 00:15:40.656 "75d32b99-8b9c-4e97-b06a-12fba2c07059" 00:15:40.656 ], 00:15:40.656 "product_name": "Malloc disk", 00:15:40.656 "block_size": 512, 00:15:40.656 "num_blocks": 65536, 00:15:40.656 "uuid": "75d32b99-8b9c-4e97-b06a-12fba2c07059", 00:15:40.656 "assigned_rate_limits": { 00:15:40.656 
"rw_ios_per_sec": 0, 00:15:40.656 "rw_mbytes_per_sec": 0, 00:15:40.656 "r_mbytes_per_sec": 0, 00:15:40.656 "w_mbytes_per_sec": 0 00:15:40.656 }, 00:15:40.656 "claimed": true, 00:15:40.656 "claim_type": "exclusive_write", 00:15:40.656 "zoned": false, 00:15:40.656 "supported_io_types": { 00:15:40.656 "read": true, 00:15:40.656 "write": true, 00:15:40.656 "unmap": true, 00:15:40.656 "write_zeroes": true, 00:15:40.656 "flush": true, 00:15:40.656 "reset": true, 00:15:40.656 "compare": false, 00:15:40.656 "compare_and_write": false, 00:15:40.656 "abort": true, 00:15:40.656 "nvme_admin": false, 00:15:40.656 "nvme_io": false 00:15:40.656 }, 00:15:40.656 "memory_domains": [ 00:15:40.656 { 00:15:40.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.656 "dma_device_type": 2 00:15:40.656 } 00:15:40.656 ], 00:15:40.656 "driver_specific": {} 00:15:40.656 } 00:15:40.656 ] 00:15:40.656 02:39:05 -- common/autotest_common.sh@895 -- # return 0 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.656 02:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.914 02:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.914 "name": "Existed_Raid", 00:15:40.914 "uuid": "99739cff-e7c7-437b-b1db-272dd728dc25", 00:15:40.914 "strip_size_kb": 64, 00:15:40.914 "state": "online", 00:15:40.914 "raid_level": "concat", 00:15:40.914 "superblock": false, 00:15:40.914 "num_base_bdevs": 3, 00:15:40.914 "num_base_bdevs_discovered": 3, 00:15:40.914 "num_base_bdevs_operational": 3, 00:15:40.914 "base_bdevs_list": [ 00:15:40.914 { 00:15:40.914 "name": "BaseBdev1", 00:15:40.914 "uuid": "12306b45-c2f2-46e8-aa76-5412e8947402", 00:15:40.914 "is_configured": true, 00:15:40.914 "data_offset": 0, 00:15:40.914 "data_size": 65536 00:15:40.914 }, 00:15:40.914 { 00:15:40.914 "name": "BaseBdev2", 00:15:40.914 "uuid": "c9e75cfe-2f70-49f6-bcd7-361b9c772261", 00:15:40.914 "is_configured": true, 00:15:40.914 "data_offset": 0, 00:15:40.914 "data_size": 65536 00:15:40.914 }, 00:15:40.914 { 00:15:40.914 "name": "BaseBdev3", 00:15:40.914 "uuid": "75d32b99-8b9c-4e97-b06a-12fba2c07059", 00:15:40.914 "is_configured": true, 00:15:40.914 "data_offset": 0, 00:15:40.914 "data_size": 65536 00:15:40.914 } 00:15:40.914 ] 00:15:40.914 }' 00:15:40.914 02:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.914 02:39:05 -- common/autotest_common.sh@10 -- # set +x 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:41.850 [2024-07-11 02:39:06.815270] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.850 [2024-07-11 02:39:06.815303] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.850 [2024-07-11 02:39:06.815427] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.850 02:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.108 02:39:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.108 "name": "Existed_Raid", 00:15:42.108 "uuid": "99739cff-e7c7-437b-b1db-272dd728dc25", 00:15:42.108 "strip_size_kb": 64, 00:15:42.108 "state": "offline", 00:15:42.108 "raid_level": "concat", 00:15:42.108 "superblock": false, 00:15:42.108 "num_base_bdevs": 3, 00:15:42.108 "num_base_bdevs_discovered": 2, 00:15:42.108 "num_base_bdevs_operational": 2, 00:15:42.108 "base_bdevs_list": [ 00:15:42.108 { 00:15:42.108 "name": null, 00:15:42.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.108 "is_configured": false, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 }, 00:15:42.108 { 00:15:42.108 "name": "BaseBdev2", 00:15:42.108 "uuid": "c9e75cfe-2f70-49f6-bcd7-361b9c772261", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 }, 00:15:42.108 { 00:15:42.108 "name": "BaseBdev3", 00:15:42.108 "uuid": "75d32b99-8b9c-4e97-b06a-12fba2c07059", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 } 00:15:42.108 ] 00:15:42.108 }' 00:15:42.108 02:39:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.108 02:39:07 -- common/autotest_common.sh@10 -- # set +x 00:15:42.676 02:39:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:42.676 02:39:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.676 02:39:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.676 02:39:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:42.934 02:39:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:42.934 02:39:07 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.934 02:39:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:43.192 [2024-07-11 02:39:08.073050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.192 02:39:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:43.192 02:39:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:43.192 02:39:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.192 02:39:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:43.451 [2024-07-11 02:39:08.518713] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.451 [2024-07-11 02:39:08.518896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.451 02:39:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.709 02:39:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:43.709 02:39:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:43.709 02:39:08 -- bdev/bdev_raid.sh@287 -- # killprocess 128510 00:15:43.709 02:39:08 -- common/autotest_common.sh@926 -- # '[' -z 128510 ']' 00:15:43.709 02:39:08 -- common/autotest_common.sh@930 -- # kill -0 128510 00:15:43.709 02:39:08 -- common/autotest_common.sh@931 -- # uname 00:15:43.709 02:39:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:43.709 02:39:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128510 00:15:43.709 killing process with pid 128510 00:15:43.709 02:39:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:43.709 02:39:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:43.709 02:39:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128510' 00:15:43.709 02:39:08 -- common/autotest_common.sh@945 -- # kill 128510 00:15:43.709 02:39:08 -- common/autotest_common.sh@950 -- # wait 128510 00:15:43.709 [2024-07-11 02:39:08.738270] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.709 [2024-07-11 02:39:08.738384] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.968 ************************************ 00:15:43.968 END TEST raid_state_function_test 00:15:43.968 ************************************ 00:15:43.968 02:39:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:43.968 00:15:43.968 real 0m10.617s 00:15:43.968 user 0m19.851s 00:15:43.968 sys 0m1.139s 00:15:43.968 02:39:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.968 02:39:08 -- common/autotest_common.sh@10 -- # set +x 00:15:43.968 02:39:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:43.968 02:39:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
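Teardown in both tests goes through the killprocess helper traced above; condensed, the sequence it logs for pid 128510 is:

    # killprocess, condensed: confirm the pid is alive and is our reactor,
    # then terminate it and reap it so the RPC socket can be reused.
    kill -0 "$raid_pid"
    kill "$raid_pid"
    wait "$raid_pid"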
00:15:43.968 02:39:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:43.968 02:39:08 -- common/autotest_common.sh@10 -- # set +x 00:15:43.968 ************************************ 00:15:43.968 START TEST raid_state_function_test_sb 00:15:43.968 ************************************ 00:15:43.968 02:39:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=128898 00:15:43.968 Process raid pid: 128898 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128898' 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:43.968 02:39:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128898 /var/tmp/spdk-raid.sock 00:15:43.968 02:39:09 -- common/autotest_common.sh@819 -- # '[' -z 128898 ']' 00:15:43.968 02:39:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:43.968 02:39:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:43.968 02:39:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:43.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
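The only structural difference from the previous run is superblock=true, so the create call below gains -s and the base bdevs carry RAID superblocks (data_offset 2048 / data_size 63488 instead of 0 / 65536). Sketch of the call the trace is about to issue, same socket assumed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid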
00:15:43.968 02:39:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:43.968 02:39:09 -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 [2024-07-11 02:39:09.062318] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:44.226 [2024-07-11 02:39:09.062522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.226 [2024-07-11 02:39:09.198116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.226 [2024-07-11 02:39:09.260683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.226 [2024-07-11 02:39:09.309947] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.162 02:39:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:45.162 02:39:09 -- common/autotest_common.sh@852 -- # return 0 00:15:45.162 02:39:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:45.162 [2024-07-11 02:39:10.145493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.162 [2024-07-11 02:39:10.145599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.162 [2024-07-11 02:39:10.145613] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.162 [2024-07-11 02:39:10.145631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.162 [2024-07-11 02:39:10.145656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.162 [2024-07-11 02:39:10.145708] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.162 02:39:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.426 02:39:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.426 "name": "Existed_Raid", 00:15:45.426 "uuid": "fd7d4980-f8fe-45d1-ae7a-4284c6fb56eb", 00:15:45.426 "strip_size_kb": 64, 00:15:45.426 "state": "configuring", 00:15:45.426 "raid_level": "concat", 00:15:45.426 "superblock": true, 00:15:45.426 "num_base_bdevs": 3, 00:15:45.426 "num_base_bdevs_discovered": 0, 00:15:45.426 "num_base_bdevs_operational": 3, 00:15:45.426 "base_bdevs_list": [ 00:15:45.426 { 00:15:45.426 "name": 
"BaseBdev1", 00:15:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.426 "is_configured": false, 00:15:45.426 "data_offset": 0, 00:15:45.426 "data_size": 0 00:15:45.426 }, 00:15:45.426 { 00:15:45.426 "name": "BaseBdev2", 00:15:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.426 "is_configured": false, 00:15:45.426 "data_offset": 0, 00:15:45.426 "data_size": 0 00:15:45.426 }, 00:15:45.426 { 00:15:45.426 "name": "BaseBdev3", 00:15:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.426 "is_configured": false, 00:15:45.426 "data_offset": 0, 00:15:45.426 "data_size": 0 00:15:45.426 } 00:15:45.426 ] 00:15:45.426 }' 00:15:45.426 02:39:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.426 02:39:10 -- common/autotest_common.sh@10 -- # set +x 00:15:45.999 02:39:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.257 [2024-07-11 02:39:11.301515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.257 [2024-07-11 02:39:11.301579] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:46.257 02:39:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:46.516 [2024-07-11 02:39:11.501612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.516 [2024-07-11 02:39:11.501705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.516 [2024-07-11 02:39:11.501733] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.516 [2024-07-11 02:39:11.501754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.516 [2024-07-11 02:39:11.501761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.516 [2024-07-11 02:39:11.501784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.516 02:39:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.775 [2024-07-11 02:39:11.700308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.775 BaseBdev1 00:15:46.775 02:39:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:46.775 02:39:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:46.775 02:39:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:46.775 02:39:11 -- common/autotest_common.sh@889 -- # local i 00:15:46.775 02:39:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:46.775 02:39:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:46.775 02:39:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.033 02:39:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.033 [ 00:15:47.033 { 00:15:47.033 "name": "BaseBdev1", 00:15:47.033 "aliases": [ 00:15:47.033 "787e83ce-61c2-42c1-917c-3af709c74a20" 00:15:47.033 ], 00:15:47.033 "product_name": "Malloc disk", 00:15:47.033 "block_size": 512, 00:15:47.033 
"num_blocks": 65536, 00:15:47.033 "uuid": "787e83ce-61c2-42c1-917c-3af709c74a20", 00:15:47.033 "assigned_rate_limits": { 00:15:47.033 "rw_ios_per_sec": 0, 00:15:47.033 "rw_mbytes_per_sec": 0, 00:15:47.033 "r_mbytes_per_sec": 0, 00:15:47.033 "w_mbytes_per_sec": 0 00:15:47.033 }, 00:15:47.033 "claimed": true, 00:15:47.033 "claim_type": "exclusive_write", 00:15:47.033 "zoned": false, 00:15:47.033 "supported_io_types": { 00:15:47.033 "read": true, 00:15:47.033 "write": true, 00:15:47.033 "unmap": true, 00:15:47.033 "write_zeroes": true, 00:15:47.033 "flush": true, 00:15:47.033 "reset": true, 00:15:47.033 "compare": false, 00:15:47.033 "compare_and_write": false, 00:15:47.033 "abort": true, 00:15:47.033 "nvme_admin": false, 00:15:47.033 "nvme_io": false 00:15:47.033 }, 00:15:47.033 "memory_domains": [ 00:15:47.033 { 00:15:47.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.033 "dma_device_type": 2 00:15:47.033 } 00:15:47.033 ], 00:15:47.033 "driver_specific": {} 00:15:47.033 } 00:15:47.033 ] 00:15:47.033 02:39:12 -- common/autotest_common.sh@895 -- # return 0 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.033 02:39:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.292 02:39:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.292 "name": "Existed_Raid", 00:15:47.292 "uuid": "6c5dc2db-704d-43a0-8101-c8902bc85f00", 00:15:47.292 "strip_size_kb": 64, 00:15:47.292 "state": "configuring", 00:15:47.292 "raid_level": "concat", 00:15:47.292 "superblock": true, 00:15:47.292 "num_base_bdevs": 3, 00:15:47.292 "num_base_bdevs_discovered": 1, 00:15:47.292 "num_base_bdevs_operational": 3, 00:15:47.292 "base_bdevs_list": [ 00:15:47.292 { 00:15:47.292 "name": "BaseBdev1", 00:15:47.292 "uuid": "787e83ce-61c2-42c1-917c-3af709c74a20", 00:15:47.292 "is_configured": true, 00:15:47.292 "data_offset": 2048, 00:15:47.292 "data_size": 63488 00:15:47.292 }, 00:15:47.292 { 00:15:47.292 "name": "BaseBdev2", 00:15:47.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.292 "is_configured": false, 00:15:47.292 "data_offset": 0, 00:15:47.292 "data_size": 0 00:15:47.292 }, 00:15:47.292 { 00:15:47.292 "name": "BaseBdev3", 00:15:47.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.292 "is_configured": false, 00:15:47.292 "data_offset": 0, 00:15:47.292 "data_size": 0 00:15:47.292 } 00:15:47.292 ] 00:15:47.292 }' 00:15:47.292 02:39:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.292 02:39:12 -- common/autotest_common.sh@10 -- # set +x 00:15:48.227 02:39:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.227 [2024-07-11 02:39:13.208602] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.227 [2024-07-11 02:39:13.208664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:15:48.227 02:39:13 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:48.227 02:39:13 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:48.486 02:39:13 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.744 BaseBdev1 00:15:48.744 02:39:13 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:48.744 02:39:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:48.744 02:39:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.744 02:39:13 -- common/autotest_common.sh@889 -- # local i 00:15:48.744 02:39:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.744 02:39:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.744 02:39:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.744 02:39:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.001 [ 00:15:49.001 { 00:15:49.001 "name": "BaseBdev1", 00:15:49.001 "aliases": [ 00:15:49.001 "b9c0edda-8558-4c22-8489-b2e5a2995c67" 00:15:49.001 ], 00:15:49.001 "product_name": "Malloc disk", 00:15:49.001 "block_size": 512, 00:15:49.001 "num_blocks": 65536, 00:15:49.001 "uuid": "b9c0edda-8558-4c22-8489-b2e5a2995c67", 00:15:49.001 "assigned_rate_limits": { 00:15:49.001 "rw_ios_per_sec": 0, 00:15:49.001 "rw_mbytes_per_sec": 0, 00:15:49.001 "r_mbytes_per_sec": 0, 00:15:49.001 "w_mbytes_per_sec": 0 00:15:49.001 }, 00:15:49.001 "claimed": false, 00:15:49.002 "zoned": false, 00:15:49.002 "supported_io_types": { 00:15:49.002 "read": true, 00:15:49.002 "write": true, 00:15:49.002 "unmap": true, 00:15:49.002 "write_zeroes": true, 00:15:49.002 "flush": true, 00:15:49.002 "reset": true, 00:15:49.002 "compare": false, 00:15:49.002 "compare_and_write": false, 00:15:49.002 "abort": true, 00:15:49.002 "nvme_admin": false, 00:15:49.002 "nvme_io": false 00:15:49.002 }, 00:15:49.002 "memory_domains": [ 00:15:49.002 { 00:15:49.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.002 "dma_device_type": 2 00:15:49.002 } 00:15:49.002 ], 00:15:49.002 "driver_specific": {} 00:15:49.002 } 00:15:49.002 ] 00:15:49.002 02:39:14 -- common/autotest_common.sh@895 -- # return 0 00:15:49.002 02:39:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:49.259 [2024-07-11 02:39:14.183098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.259 [2024-07-11 02:39:14.184832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.259 [2024-07-11 02:39:14.184905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.259 [2024-07-11 02:39:14.184932] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.259 [2024-07-11 
02:39:14.184958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.259 02:39:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.517 02:39:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.517 "name": "Existed_Raid", 00:15:49.517 "uuid": "6e064903-fcf6-4bb7-b24c-2ee44d21007b", 00:15:49.517 "strip_size_kb": 64, 00:15:49.517 "state": "configuring", 00:15:49.517 "raid_level": "concat", 00:15:49.517 "superblock": true, 00:15:49.517 "num_base_bdevs": 3, 00:15:49.517 "num_base_bdevs_discovered": 1, 00:15:49.517 "num_base_bdevs_operational": 3, 00:15:49.517 "base_bdevs_list": [ 00:15:49.517 { 00:15:49.517 "name": "BaseBdev1", 00:15:49.517 "uuid": "b9c0edda-8558-4c22-8489-b2e5a2995c67", 00:15:49.517 "is_configured": true, 00:15:49.517 "data_offset": 2048, 00:15:49.517 "data_size": 63488 00:15:49.517 }, 00:15:49.517 { 00:15:49.517 "name": "BaseBdev2", 00:15:49.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.517 "is_configured": false, 00:15:49.517 "data_offset": 0, 00:15:49.517 "data_size": 0 00:15:49.517 }, 00:15:49.517 { 00:15:49.517 "name": "BaseBdev3", 00:15:49.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.517 "is_configured": false, 00:15:49.517 "data_offset": 0, 00:15:49.517 "data_size": 0 00:15:49.517 } 00:15:49.517 ] 00:15:49.517 }' 00:15:49.517 02:39:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.517 02:39:14 -- common/autotest_common.sh@10 -- # set +x 00:15:50.084 02:39:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.343 [2024-07-11 02:39:15.369322] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.343 BaseBdev2 00:15:50.343 02:39:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:50.343 02:39:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:50.343 02:39:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:50.343 02:39:15 -- common/autotest_common.sh@889 -- # local i 00:15:50.343 02:39:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:50.343 02:39:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:50.343 02:39:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.602 02:39:15 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.860 [ 00:15:50.860 { 00:15:50.860 "name": "BaseBdev2", 00:15:50.860 "aliases": [ 00:15:50.860 "64ca25af-73d5-4890-9ff2-41280ad472aa" 00:15:50.860 ], 00:15:50.860 "product_name": "Malloc disk", 00:15:50.860 "block_size": 512, 00:15:50.860 "num_blocks": 65536, 00:15:50.860 "uuid": "64ca25af-73d5-4890-9ff2-41280ad472aa", 00:15:50.860 "assigned_rate_limits": { 00:15:50.860 "rw_ios_per_sec": 0, 00:15:50.860 "rw_mbytes_per_sec": 0, 00:15:50.860 "r_mbytes_per_sec": 0, 00:15:50.860 "w_mbytes_per_sec": 0 00:15:50.860 }, 00:15:50.860 "claimed": true, 00:15:50.860 "claim_type": "exclusive_write", 00:15:50.860 "zoned": false, 00:15:50.860 "supported_io_types": { 00:15:50.860 "read": true, 00:15:50.860 "write": true, 00:15:50.860 "unmap": true, 00:15:50.860 "write_zeroes": true, 00:15:50.860 "flush": true, 00:15:50.860 "reset": true, 00:15:50.860 "compare": false, 00:15:50.860 "compare_and_write": false, 00:15:50.860 "abort": true, 00:15:50.860 "nvme_admin": false, 00:15:50.860 "nvme_io": false 00:15:50.860 }, 00:15:50.860 "memory_domains": [ 00:15:50.860 { 00:15:50.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.860 "dma_device_type": 2 00:15:50.860 } 00:15:50.860 ], 00:15:50.860 "driver_specific": {} 00:15:50.860 } 00:15:50.860 ] 00:15:50.860 02:39:15 -- common/autotest_common.sh@895 -- # return 0 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.860 02:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.119 02:39:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.119 "name": "Existed_Raid", 00:15:51.119 "uuid": "6e064903-fcf6-4bb7-b24c-2ee44d21007b", 00:15:51.119 "strip_size_kb": 64, 00:15:51.119 "state": "configuring", 00:15:51.119 "raid_level": "concat", 00:15:51.119 "superblock": true, 00:15:51.119 "num_base_bdevs": 3, 00:15:51.119 "num_base_bdevs_discovered": 2, 00:15:51.119 "num_base_bdevs_operational": 3, 00:15:51.119 "base_bdevs_list": [ 00:15:51.119 { 00:15:51.119 "name": "BaseBdev1", 00:15:51.119 "uuid": "b9c0edda-8558-4c22-8489-b2e5a2995c67", 00:15:51.119 "is_configured": true, 00:15:51.119 "data_offset": 2048, 00:15:51.119 "data_size": 63488 00:15:51.119 }, 00:15:51.119 { 00:15:51.119 "name": "BaseBdev2", 00:15:51.119 "uuid": "64ca25af-73d5-4890-9ff2-41280ad472aa", 00:15:51.119 "is_configured": true, 00:15:51.119 "data_offset": 2048, 00:15:51.119 
"data_size": 63488 00:15:51.119 }, 00:15:51.119 { 00:15:51.119 "name": "BaseBdev3", 00:15:51.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.119 "is_configured": false, 00:15:51.119 "data_offset": 0, 00:15:51.119 "data_size": 0 00:15:51.119 } 00:15:51.119 ] 00:15:51.119 }' 00:15:51.119 02:39:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.119 02:39:15 -- common/autotest_common.sh@10 -- # set +x 00:15:51.684 02:39:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.952 [2024-07-11 02:39:16.929945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.952 [2024-07-11 02:39:16.930197] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:15:51.953 [2024-07-11 02:39:16.930212] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.953 BaseBdev3 00:15:51.953 [2024-07-11 02:39:16.930387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:51.953 [2024-07-11 02:39:16.930835] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:15:51.953 [2024-07-11 02:39:16.930857] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:15:51.953 [2024-07-11 02:39:16.931027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.953 02:39:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:51.953 02:39:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:51.953 02:39:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:51.953 02:39:16 -- common/autotest_common.sh@889 -- # local i 00:15:51.953 02:39:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:51.953 02:39:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:51.953 02:39:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.212 02:39:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.470 [ 00:15:52.470 { 00:15:52.470 "name": "BaseBdev3", 00:15:52.470 "aliases": [ 00:15:52.470 "bc21a62d-6e59-41b2-b9fa-fb9a733b8b33" 00:15:52.470 ], 00:15:52.470 "product_name": "Malloc disk", 00:15:52.470 "block_size": 512, 00:15:52.470 "num_blocks": 65536, 00:15:52.470 "uuid": "bc21a62d-6e59-41b2-b9fa-fb9a733b8b33", 00:15:52.470 "assigned_rate_limits": { 00:15:52.470 "rw_ios_per_sec": 0, 00:15:52.470 "rw_mbytes_per_sec": 0, 00:15:52.470 "r_mbytes_per_sec": 0, 00:15:52.470 "w_mbytes_per_sec": 0 00:15:52.470 }, 00:15:52.470 "claimed": true, 00:15:52.470 "claim_type": "exclusive_write", 00:15:52.470 "zoned": false, 00:15:52.470 "supported_io_types": { 00:15:52.470 "read": true, 00:15:52.470 "write": true, 00:15:52.470 "unmap": true, 00:15:52.470 "write_zeroes": true, 00:15:52.470 "flush": true, 00:15:52.470 "reset": true, 00:15:52.470 "compare": false, 00:15:52.470 "compare_and_write": false, 00:15:52.470 "abort": true, 00:15:52.470 "nvme_admin": false, 00:15:52.470 "nvme_io": false 00:15:52.470 }, 00:15:52.470 "memory_domains": [ 00:15:52.470 { 00:15:52.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.470 "dma_device_type": 2 00:15:52.470 } 00:15:52.470 ], 00:15:52.470 "driver_specific": {} 00:15:52.470 } 00:15:52.470 ] 00:15:52.470 
02:39:17 -- common/autotest_common.sh@895 -- # return 0 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.470 02:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.728 02:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.728 "name": "Existed_Raid", 00:15:52.728 "uuid": "6e064903-fcf6-4bb7-b24c-2ee44d21007b", 00:15:52.728 "strip_size_kb": 64, 00:15:52.728 "state": "online", 00:15:52.728 "raid_level": "concat", 00:15:52.728 "superblock": true, 00:15:52.728 "num_base_bdevs": 3, 00:15:52.728 "num_base_bdevs_discovered": 3, 00:15:52.728 "num_base_bdevs_operational": 3, 00:15:52.728 "base_bdevs_list": [ 00:15:52.728 { 00:15:52.728 "name": "BaseBdev1", 00:15:52.728 "uuid": "b9c0edda-8558-4c22-8489-b2e5a2995c67", 00:15:52.728 "is_configured": true, 00:15:52.728 "data_offset": 2048, 00:15:52.728 "data_size": 63488 00:15:52.728 }, 00:15:52.728 { 00:15:52.728 "name": "BaseBdev2", 00:15:52.728 "uuid": "64ca25af-73d5-4890-9ff2-41280ad472aa", 00:15:52.728 "is_configured": true, 00:15:52.728 "data_offset": 2048, 00:15:52.728 "data_size": 63488 00:15:52.728 }, 00:15:52.728 { 00:15:52.728 "name": "BaseBdev3", 00:15:52.728 "uuid": "bc21a62d-6e59-41b2-b9fa-fb9a733b8b33", 00:15:52.728 "is_configured": true, 00:15:52.728 "data_offset": 2048, 00:15:52.728 "data_size": 63488 00:15:52.728 } 00:15:52.728 ] 00:15:52.728 }' 00:15:52.728 02:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.728 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:53.295 [2024-07-11 02:39:18.350076] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.295 [2024-07-11 02:39:18.350112] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.295 [2024-07-11 02:39:18.350194] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:53.295 02:39:18 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:53.295 02:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.296 02:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.862 02:39:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.862 "name": "Existed_Raid", 00:15:53.862 "uuid": "6e064903-fcf6-4bb7-b24c-2ee44d21007b", 00:15:53.862 "strip_size_kb": 64, 00:15:53.862 "state": "offline", 00:15:53.862 "raid_level": "concat", 00:15:53.862 "superblock": true, 00:15:53.862 "num_base_bdevs": 3, 00:15:53.862 "num_base_bdevs_discovered": 2, 00:15:53.862 "num_base_bdevs_operational": 2, 00:15:53.862 "base_bdevs_list": [ 00:15:53.862 { 00:15:53.862 "name": null, 00:15:53.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.862 "is_configured": false, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 }, 00:15:53.862 { 00:15:53.862 "name": "BaseBdev2", 00:15:53.862 "uuid": "64ca25af-73d5-4890-9ff2-41280ad472aa", 00:15:53.862 "is_configured": true, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 }, 00:15:53.862 { 00:15:53.862 "name": "BaseBdev3", 00:15:53.862 "uuid": "bc21a62d-6e59-41b2-b9fa-fb9a733b8b33", 00:15:53.862 "is_configured": true, 00:15:53.862 "data_offset": 2048, 00:15:53.862 "data_size": 63488 00:15:53.862 } 00:15:53.862 ] 00:15:53.862 }' 00:15:53.862 02:39:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.862 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.428 02:39:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.687 [2024-07-11 02:39:19.755192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.687 02:39:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:54.687 02:39:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:54.687 02:39:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.687 02:39:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:54.945 02:39:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:54.945 02:39:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.945 02:39:19 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:55.203 [2024-07-11 02:39:20.200644] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.203 [2024-07-11 02:39:20.200720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:15:55.203 02:39:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.203 02:39:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.203 02:39:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.203 02:39:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.461 02:39:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:55.461 02:39:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:55.461 02:39:20 -- bdev/bdev_raid.sh@287 -- # killprocess 128898 00:15:55.461 02:39:20 -- common/autotest_common.sh@926 -- # '[' -z 128898 ']' 00:15:55.461 02:39:20 -- common/autotest_common.sh@930 -- # kill -0 128898 00:15:55.462 02:39:20 -- common/autotest_common.sh@931 -- # uname 00:15:55.462 02:39:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.462 02:39:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128898 00:15:55.462 killing process with pid 128898 00:15:55.462 02:39:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.462 02:39:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.462 02:39:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128898' 00:15:55.462 02:39:20 -- common/autotest_common.sh@945 -- # kill 128898 00:15:55.462 02:39:20 -- common/autotest_common.sh@950 -- # wait 128898 00:15:55.462 [2024-07-11 02:39:20.442070] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.462 [2024-07-11 02:39:20.442183] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.720 ************************************ 00:15:55.720 END TEST raid_state_function_test_sb 00:15:55.720 ************************************ 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:55.720 00:15:55.720 real 0m11.644s 00:15:55.720 user 0m21.709s 00:15:55.720 sys 0m1.305s 00:15:55.720 02:39:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.720 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:55.720 02:39:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:55.720 02:39:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.720 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:15:55.720 ************************************ 00:15:55.720 START TEST raid_superblock_test 00:15:55.720 ************************************ 00:15:55.720 02:39:20 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
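Note on the offline transition traced above: concat carries no redundancy, so has_redundancy returns 1 for it and deleting a single base bdev is expected to drive Existed_Raid from online to offline, with the freed slot reported as "name": null and num_base_bdevs_discovered dropping to 2. A minimal sketch of that check, assuming the same rpc.py client and /var/tmp/spdk-raid.sock socket used throughout this run:

    # remove one member of the concat array and confirm the array goes offline
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = offline ]   # a redundant level such as raid1 would be expected to stay online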
00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=129285 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:55.720 02:39:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129285 /var/tmp/spdk-raid.sock 00:15:55.720 02:39:20 -- common/autotest_common.sh@819 -- # '[' -z 129285 ']' 00:15:55.720 02:39:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.720 02:39:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:55.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:55.720 02:39:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.720 02:39:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:55.720 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:15:55.720 [2024-07-11 02:39:20.756652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:55.720 [2024-07-11 02:39:20.756863] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129285 ] 00:15:55.978 [2024-07-11 02:39:20.894443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.978 [2024-07-11 02:39:20.961476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.978 [2024-07-11 02:39:21.012898] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.927 02:39:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:56.927 02:39:21 -- common/autotest_common.sh@852 -- # return 0 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:56.927 malloc1 00:15:56.927 02:39:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:57.235 [2024-07-11 02:39:22.103306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.235 [2024-07-11 02:39:22.103447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.235 [2024-07-11 02:39:22.103479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:57.235 [2024-07-11 02:39:22.103518] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.235 [2024-07-11 02:39:22.105587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.235 [2024-07-11 02:39:22.105658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.235 pt1 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.235 02:39:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.236 02:39:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:57.236 malloc2 00:15:57.236 02:39:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.494 [2024-07-11 02:39:22.501165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.494 [2024-07-11 02:39:22.501309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.494 [2024-07-11 02:39:22.501349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:57.494 [2024-07-11 02:39:22.501388] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.494 [2024-07-11 02:39:22.503627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.494 [2024-07-11 02:39:22.503694] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.494 pt2 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.494 02:39:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:57.752 malloc3 00:15:57.752 02:39:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:15:58.010 [2024-07-11 02:39:22.945462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.010 [2024-07-11 02:39:22.945556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.010 [2024-07-11 02:39:22.945595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:58.010 [2024-07-11 02:39:22.945650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.010 [2024-07-11 02:39:22.947818] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.010 [2024-07-11 02:39:22.947870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.010 pt3 00:15:58.010 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:58.010 02:39:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:58.010 02:39:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:58.268 [2024-07-11 02:39:23.145569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.268 [2024-07-11 02:39:23.147467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.268 [2024-07-11 02:39:23.147533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.268 [2024-07-11 02:39:23.147738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:58.268 [2024-07-11 02:39:23.147769] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.268 [2024-07-11 02:39:23.147898] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:15:58.268 [2024-07-11 02:39:23.148263] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:58.268 [2024-07-11 02:39:23.148287] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:58.268 [2024-07-11 02:39:23.148458] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.268 02:39:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.526 02:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.526 "name": "raid_bdev1", 00:15:58.526 "uuid": "1eb2ff46-ee39-4140-8f24-590ee59d2bde", 00:15:58.526 "strip_size_kb": 64, 00:15:58.526 "state": "online", 00:15:58.526 "raid_level": "concat", 
00:15:58.526 "superblock": true, 00:15:58.526 "num_base_bdevs": 3, 00:15:58.526 "num_base_bdevs_discovered": 3, 00:15:58.526 "num_base_bdevs_operational": 3, 00:15:58.526 "base_bdevs_list": [ 00:15:58.526 { 00:15:58.526 "name": "pt1", 00:15:58.526 "uuid": "8a729121-dd3e-5979-8d49-52119fb29fcb", 00:15:58.526 "is_configured": true, 00:15:58.526 "data_offset": 2048, 00:15:58.526 "data_size": 63488 00:15:58.526 }, 00:15:58.526 { 00:15:58.526 "name": "pt2", 00:15:58.526 "uuid": "077deb37-d3a0-57c0-8d11-77c86576b19e", 00:15:58.526 "is_configured": true, 00:15:58.526 "data_offset": 2048, 00:15:58.526 "data_size": 63488 00:15:58.526 }, 00:15:58.526 { 00:15:58.526 "name": "pt3", 00:15:58.526 "uuid": "0e3f9473-1630-5ba1-ada6-0e1f1494deb5", 00:15:58.526 "is_configured": true, 00:15:58.526 "data_offset": 2048, 00:15:58.526 "data_size": 63488 00:15:58.526 } 00:15:58.526 ] 00:15:58.526 }' 00:15:58.526 02:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.526 02:39:23 -- common/autotest_common.sh@10 -- # set +x 00:15:59.092 02:39:23 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:59.092 02:39:23 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:59.350 [2024-07-11 02:39:24.245942] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.350 02:39:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1eb2ff46-ee39-4140-8f24-590ee59d2bde 00:15:59.350 02:39:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 1eb2ff46-ee39-4140-8f24-590ee59d2bde ']' 00:15:59.350 02:39:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:59.607 [2024-07-11 02:39:24.493804] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.607 [2024-07-11 02:39:24.493830] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.607 [2024-07-11 02:39:24.493939] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.607 [2024-07-11 02:39:24.494015] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.607 [2024-07-11 02:39:24.494042] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:59.607 02:39:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.607 02:39:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:59.865 02:39:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:59.865 02:39:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:59.865 02:39:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.865 02:39:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:00.122 02:39:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.122 02:39:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:00.380 02:39:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.380 02:39:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:00.380 02:39:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:16:00.380 02:39:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:00.639 02:39:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:00.639 02:39:25 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:00.639 02:39:25 -- common/autotest_common.sh@640 -- # local es=0 00:16:00.639 02:39:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:00.639 02:39:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.639 02:39:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:00.639 02:39:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.639 02:39:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:00.639 02:39:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.639 02:39:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:00.639 02:39:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.639 02:39:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:00.639 02:39:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:00.897 [2024-07-11 02:39:25.814094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:00.897 [2024-07-11 02:39:25.815869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:00.897 [2024-07-11 02:39:25.815918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:00.897 [2024-07-11 02:39:25.815969] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:00.897 [2024-07-11 02:39:25.816057] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:00.897 [2024-07-11 02:39:25.816089] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:00.897 [2024-07-11 02:39:25.816143] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.897 [2024-07-11 02:39:25.816170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:16:00.897 request: 00:16:00.897 { 00:16:00.897 "name": "raid_bdev1", 00:16:00.897 "raid_level": "concat", 00:16:00.897 "base_bdevs": [ 00:16:00.897 "malloc1", 00:16:00.897 "malloc2", 00:16:00.897 "malloc3" 00:16:00.897 ], 00:16:00.897 "superblock": false, 00:16:00.897 "strip_size_kb": 64, 00:16:00.897 "method": "bdev_raid_create", 00:16:00.897 "req_id": 1 00:16:00.897 } 00:16:00.897 Got JSON-RPC error response 00:16:00.897 response: 00:16:00.897 { 00:16:00.897 "code": -17, 00:16:00.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:00.897 } 00:16:00.897 02:39:25 -- common/autotest_common.sh@643 -- # es=1 00:16:00.897 02:39:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
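The es bookkeeping above is the failure-assertion wrapper from common/autotest_common.sh: the wrapped bdev_raid_create is expected to fail with -17 (File exists) because each malloc bdev still carries the raid superblock written when raid_bdev1 was first created. A hedged sketch of the pattern, keeping the helper name from the trace but simplifying its body:

    # succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1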
00:16:00.897 02:39:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:00.897 02:39:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:00.897 02:39:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.897 02:39:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:01.156 02:39:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:01.156 02:39:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:01.156 02:39:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.415 [2024-07-11 02:39:26.258132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.415 [2024-07-11 02:39:26.258225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.415 [2024-07-11 02:39:26.258263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:01.415 [2024-07-11 02:39:26.258287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.415 [2024-07-11 02:39:26.260400] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.415 [2024-07-11 02:39:26.260448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.415 [2024-07-11 02:39:26.260560] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:01.415 [2024-07-11 02:39:26.260631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.415 pt1 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.415 "name": "raid_bdev1", 00:16:01.415 "uuid": "1eb2ff46-ee39-4140-8f24-590ee59d2bde", 00:16:01.415 "strip_size_kb": 64, 00:16:01.415 "state": "configuring", 00:16:01.415 "raid_level": "concat", 00:16:01.415 "superblock": true, 00:16:01.415 "num_base_bdevs": 3, 00:16:01.415 "num_base_bdevs_discovered": 1, 00:16:01.415 "num_base_bdevs_operational": 3, 00:16:01.415 "base_bdevs_list": [ 00:16:01.415 { 00:16:01.415 "name": "pt1", 00:16:01.415 "uuid": "8a729121-dd3e-5979-8d49-52119fb29fcb", 00:16:01.415 "is_configured": true, 00:16:01.415 "data_offset": 2048, 00:16:01.415 "data_size": 63488 00:16:01.415 }, 00:16:01.415 { 00:16:01.415 "name": null, 00:16:01.415 "uuid": "077deb37-d3a0-57c0-8d11-77c86576b19e", 00:16:01.415 "is_configured": 
false, 00:16:01.415 "data_offset": 2048, 00:16:01.415 "data_size": 63488 00:16:01.415 }, 00:16:01.415 { 00:16:01.415 "name": null, 00:16:01.415 "uuid": "0e3f9473-1630-5ba1-ada6-0e1f1494deb5", 00:16:01.415 "is_configured": false, 00:16:01.415 "data_offset": 2048, 00:16:01.415 "data_size": 63488 00:16:01.415 } 00:16:01.415 ] 00:16:01.415 }' 00:16:01.415 02:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.415 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:16:02.351 02:39:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:02.352 02:39:27 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.352 [2024-07-11 02:39:27.370429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.352 [2024-07-11 02:39:27.370528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.352 [2024-07-11 02:39:27.370571] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:02.352 [2024-07-11 02:39:27.370607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.352 [2024-07-11 02:39:27.371018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.352 [2024-07-11 02:39:27.371046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.352 [2024-07-11 02:39:27.371160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:02.352 [2024-07-11 02:39:27.371187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.352 pt2 00:16:02.352 02:39:27 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:02.610 [2024-07-11 02:39:27.610464] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.610 02:39:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.869 02:39:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.869 "name": "raid_bdev1", 00:16:02.869 "uuid": "1eb2ff46-ee39-4140-8f24-590ee59d2bde", 00:16:02.869 "strip_size_kb": 64, 00:16:02.869 "state": "configuring", 00:16:02.869 "raid_level": "concat", 00:16:02.869 "superblock": true, 00:16:02.869 "num_base_bdevs": 3, 00:16:02.869 "num_base_bdevs_discovered": 1, 00:16:02.869 "num_base_bdevs_operational": 3, 00:16:02.869 "base_bdevs_list": [ 00:16:02.869 { 00:16:02.869 "name": "pt1", 
00:16:02.869 "uuid": "8a729121-dd3e-5979-8d49-52119fb29fcb", 00:16:02.869 "is_configured": true, 00:16:02.869 "data_offset": 2048, 00:16:02.869 "data_size": 63488 00:16:02.869 }, 00:16:02.869 { 00:16:02.869 "name": null, 00:16:02.869 "uuid": "077deb37-d3a0-57c0-8d11-77c86576b19e", 00:16:02.869 "is_configured": false, 00:16:02.869 "data_offset": 2048, 00:16:02.869 "data_size": 63488 00:16:02.869 }, 00:16:02.869 { 00:16:02.869 "name": null, 00:16:02.869 "uuid": "0e3f9473-1630-5ba1-ada6-0e1f1494deb5", 00:16:02.869 "is_configured": false, 00:16:02.869 "data_offset": 2048, 00:16:02.869 "data_size": 63488 00:16:02.869 } 00:16:02.869 ] 00:16:02.869 }' 00:16:02.869 02:39:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.869 02:39:27 -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:03.436 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:03.436 02:39:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.694 [2024-07-11 02:39:28.666755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.694 [2024-07-11 02:39:28.666848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.694 [2024-07-11 02:39:28.666881] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.694 [2024-07-11 02:39:28.666907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.694 [2024-07-11 02:39:28.667437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.694 [2024-07-11 02:39:28.667480] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.694 [2024-07-11 02:39:28.667585] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:03.694 [2024-07-11 02:39:28.667611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.694 pt2 00:16:03.694 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:03.694 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:03.694 02:39:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:03.954 [2024-07-11 02:39:28.942889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:03.954 [2024-07-11 02:39:28.942964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.954 [2024-07-11 02:39:28.942997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:03.954 [2024-07-11 02:39:28.943021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.954 [2024-07-11 02:39:28.943524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.954 [2024-07-11 02:39:28.943572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:03.954 [2024-07-11 02:39:28.943670] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:03.954 [2024-07-11 02:39:28.943720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:03.954 [2024-07-11 02:39:28.943844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000008d80 00:16:03.954 [2024-07-11 02:39:28.943863] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:03.954 [2024-07-11 02:39:28.943941] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:03.954 [2024-07-11 02:39:28.944248] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:03.954 [2024-07-11 02:39:28.944268] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:03.954 [2024-07-11 02:39:28.944364] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.954 pt3 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.954 02:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.213 02:39:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.213 "name": "raid_bdev1", 00:16:04.213 "uuid": "1eb2ff46-ee39-4140-8f24-590ee59d2bde", 00:16:04.213 "strip_size_kb": 64, 00:16:04.213 "state": "online", 00:16:04.213 "raid_level": "concat", 00:16:04.213 "superblock": true, 00:16:04.213 "num_base_bdevs": 3, 00:16:04.213 "num_base_bdevs_discovered": 3, 00:16:04.213 "num_base_bdevs_operational": 3, 00:16:04.213 "base_bdevs_list": [ 00:16:04.213 { 00:16:04.213 "name": "pt1", 00:16:04.213 "uuid": "8a729121-dd3e-5979-8d49-52119fb29fcb", 00:16:04.213 "is_configured": true, 00:16:04.213 "data_offset": 2048, 00:16:04.213 "data_size": 63488 00:16:04.213 }, 00:16:04.213 { 00:16:04.213 "name": "pt2", 00:16:04.213 "uuid": "077deb37-d3a0-57c0-8d11-77c86576b19e", 00:16:04.213 "is_configured": true, 00:16:04.213 "data_offset": 2048, 00:16:04.213 "data_size": 63488 00:16:04.213 }, 00:16:04.213 { 00:16:04.213 "name": "pt3", 00:16:04.213 "uuid": "0e3f9473-1630-5ba1-ada6-0e1f1494deb5", 00:16:04.213 "is_configured": true, 00:16:04.213 "data_offset": 2048, 00:16:04.213 "data_size": 63488 00:16:04.213 } 00:16:04.213 ] 00:16:04.213 }' 00:16:04.213 02:39:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.213 02:39:29 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 02:39:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.780 02:39:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:05.038 [2024-07-11 02:39:30.103300] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.038 02:39:30 -- bdev/bdev_raid.sh@430 -- # '[' 
1eb2ff46-ee39-4140-8f24-590ee59d2bde '!=' 1eb2ff46-ee39-4140-8f24-590ee59d2bde ']' 00:16:05.038 02:39:30 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:05.038 02:39:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:05.038 02:39:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:05.038 02:39:30 -- bdev/bdev_raid.sh@511 -- # killprocess 129285 00:16:05.038 02:39:30 -- common/autotest_common.sh@926 -- # '[' -z 129285 ']' 00:16:05.038 02:39:30 -- common/autotest_common.sh@930 -- # kill -0 129285 00:16:05.038 02:39:30 -- common/autotest_common.sh@931 -- # uname 00:16:05.038 02:39:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.038 02:39:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129285 00:16:05.297 killing process with pid 129285 00:16:05.297 02:39:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.297 02:39:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.297 02:39:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129285' 00:16:05.297 02:39:30 -- common/autotest_common.sh@945 -- # kill 129285 00:16:05.297 02:39:30 -- common/autotest_common.sh@950 -- # wait 129285 00:16:05.297 [2024-07-11 02:39:30.141431] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.297 [2024-07-11 02:39:30.141501] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.297 [2024-07-11 02:39:30.141603] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.297 [2024-07-11 02:39:30.141621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:05.297 [2024-07-11 02:39:30.170675] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.555 ************************************ 00:16:05.555 END TEST raid_superblock_test 00:16:05.555 ************************************ 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:05.555 00:16:05.555 real 0m9.676s 00:16:05.555 user 0m17.815s 00:16:05.555 sys 0m1.142s 00:16:05.555 02:39:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.555 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:05.555 02:39:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:05.555 02:39:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.555 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:16:05.555 ************************************ 00:16:05.555 START TEST raid_state_function_test 00:16:05.555 ************************************ 00:16:05.555 02:39:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.555 02:39:30 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=129611 00:16:05.555 Process raid pid: 129611 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129611' 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:05.555 02:39:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129611 /var/tmp/spdk-raid.sock 00:16:05.555 02:39:30 -- common/autotest_common.sh@819 -- # '[' -z 129611 ']' 00:16:05.555 02:39:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:05.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:05.555 02:39:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.555 02:39:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:05.555 02:39:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.555 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:16:05.555 [2024-07-11 02:39:30.487453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
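raid_state_function_test exercises the same verify_raid_bdev_state helper seen earlier, now against raid1 with three base bdevs and no superblock, so the initial expected state is configuring with zero discovered members. The verification reduces to one RPC plus the jq filter visible in the trace; a minimal sketch under the same socket assumption:

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state <<< "$info")" = configuring ]                  # no base bdevs registered yet
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ]
    [ "$(jq -r '.base_bdevs_list | length' <<< "$info")" = 3 ]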
00:16:05.555 [2024-07-11 02:39:30.488075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.555 [2024-07-11 02:39:30.629269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.813 [2024-07-11 02:39:30.705569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.813 [2024-07-11 02:39:30.761528] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.378 02:39:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.378 02:39:31 -- common/autotest_common.sh@852 -- # return 0 00:16:06.378 02:39:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:06.636 [2024-07-11 02:39:31.565816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.636 [2024-07-11 02:39:31.565920] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.636 [2024-07-11 02:39:31.565933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.636 [2024-07-11 02:39:31.565951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.636 [2024-07-11 02:39:31.565958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.636 [2024-07-11 02:39:31.565995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.636 02:39:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.893 02:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.893 "name": "Existed_Raid", 00:16:06.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.893 "strip_size_kb": 0, 00:16:06.893 "state": "configuring", 00:16:06.893 "raid_level": "raid1", 00:16:06.893 "superblock": false, 00:16:06.893 "num_base_bdevs": 3, 00:16:06.893 "num_base_bdevs_discovered": 0, 00:16:06.893 "num_base_bdevs_operational": 3, 00:16:06.893 "base_bdevs_list": [ 00:16:06.893 { 00:16:06.893 "name": "BaseBdev1", 00:16:06.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.893 "is_configured": false, 00:16:06.893 "data_offset": 0, 00:16:06.893 "data_size": 0 00:16:06.893 }, 00:16:06.893 { 00:16:06.893 "name": "BaseBdev2", 00:16:06.893 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:06.893 "is_configured": false, 00:16:06.893 "data_offset": 0, 00:16:06.893 "data_size": 0 00:16:06.893 }, 00:16:06.893 { 00:16:06.893 "name": "BaseBdev3", 00:16:06.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.893 "is_configured": false, 00:16:06.893 "data_offset": 0, 00:16:06.893 "data_size": 0 00:16:06.893 } 00:16:06.893 ] 00:16:06.893 }' 00:16:06.893 02:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.893 02:39:31 -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 02:39:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.718 [2024-07-11 02:39:32.633894] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.718 [2024-07-11 02:39:32.633957] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:07.718 02:39:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:07.976 [2024-07-11 02:39:32.861971] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.976 [2024-07-11 02:39:32.862048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.976 [2024-07-11 02:39:32.862076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.976 [2024-07-11 02:39:32.862096] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.976 [2024-07-11 02:39:32.862103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:07.976 [2024-07-11 02:39:32.862126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.976 02:39:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.234 [2024-07-11 02:39:33.073051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.234 BaseBdev1 00:16:08.234 02:39:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:08.234 02:39:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:08.234 02:39:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.234 02:39:33 -- common/autotest_common.sh@889 -- # local i 00:16:08.234 02:39:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.234 02:39:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.234 02:39:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.493 02:39:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.493 [ 00:16:08.493 { 00:16:08.493 "name": "BaseBdev1", 00:16:08.493 "aliases": [ 00:16:08.493 "eedb0a52-206f-4104-90c2-f6917e9a85b5" 00:16:08.493 ], 00:16:08.493 "product_name": "Malloc disk", 00:16:08.493 "block_size": 512, 00:16:08.493 "num_blocks": 65536, 00:16:08.493 "uuid": "eedb0a52-206f-4104-90c2-f6917e9a85b5", 00:16:08.493 "assigned_rate_limits": { 00:16:08.493 "rw_ios_per_sec": 0, 00:16:08.493 "rw_mbytes_per_sec": 0, 00:16:08.493 "r_mbytes_per_sec": 0, 00:16:08.493 "w_mbytes_per_sec": 0 
00:16:08.493 }, 00:16:08.493 "claimed": true, 00:16:08.493 "claim_type": "exclusive_write", 00:16:08.493 "zoned": false, 00:16:08.493 "supported_io_types": { 00:16:08.493 "read": true, 00:16:08.493 "write": true, 00:16:08.493 "unmap": true, 00:16:08.493 "write_zeroes": true, 00:16:08.493 "flush": true, 00:16:08.493 "reset": true, 00:16:08.493 "compare": false, 00:16:08.493 "compare_and_write": false, 00:16:08.493 "abort": true, 00:16:08.493 "nvme_admin": false, 00:16:08.493 "nvme_io": false 00:16:08.493 }, 00:16:08.493 "memory_domains": [ 00:16:08.493 { 00:16:08.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.493 "dma_device_type": 2 00:16:08.493 } 00:16:08.493 ], 00:16:08.493 "driver_specific": {} 00:16:08.493 } 00:16:08.493 ] 00:16:08.493 02:39:33 -- common/autotest_common.sh@895 -- # return 0 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.493 02:39:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.786 02:39:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.786 "name": "Existed_Raid", 00:16:08.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.786 "strip_size_kb": 0, 00:16:08.787 "state": "configuring", 00:16:08.787 "raid_level": "raid1", 00:16:08.787 "superblock": false, 00:16:08.787 "num_base_bdevs": 3, 00:16:08.787 "num_base_bdevs_discovered": 1, 00:16:08.787 "num_base_bdevs_operational": 3, 00:16:08.787 "base_bdevs_list": [ 00:16:08.787 { 00:16:08.787 "name": "BaseBdev1", 00:16:08.787 "uuid": "eedb0a52-206f-4104-90c2-f6917e9a85b5", 00:16:08.787 "is_configured": true, 00:16:08.787 "data_offset": 0, 00:16:08.787 "data_size": 65536 00:16:08.787 }, 00:16:08.787 { 00:16:08.787 "name": "BaseBdev2", 00:16:08.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.787 "is_configured": false, 00:16:08.787 "data_offset": 0, 00:16:08.787 "data_size": 0 00:16:08.787 }, 00:16:08.787 { 00:16:08.787 "name": "BaseBdev3", 00:16:08.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.787 "is_configured": false, 00:16:08.787 "data_offset": 0, 00:16:08.787 "data_size": 0 00:16:08.787 } 00:16:08.787 ] 00:16:08.787 }' 00:16:08.787 02:39:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.787 02:39:33 -- common/autotest_common.sh@10 -- # set +x 00:16:09.356 02:39:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:09.615 [2024-07-11 02:39:34.593349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.615 [2024-07-11 02:39:34.593412] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 
name Existed_Raid, state configuring 00:16:09.615 02:39:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:09.615 02:39:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:09.874 [2024-07-11 02:39:34.773422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.874 [2024-07-11 02:39:34.775161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.874 [2024-07-11 02:39:34.775216] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.874 [2024-07-11 02:39:34.775243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.874 [2024-07-11 02:39:34.775265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.874 02:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.133 02:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.133 "name": "Existed_Raid", 00:16:10.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.133 "strip_size_kb": 0, 00:16:10.133 "state": "configuring", 00:16:10.133 "raid_level": "raid1", 00:16:10.133 "superblock": false, 00:16:10.133 "num_base_bdevs": 3, 00:16:10.133 "num_base_bdevs_discovered": 1, 00:16:10.133 "num_base_bdevs_operational": 3, 00:16:10.133 "base_bdevs_list": [ 00:16:10.133 { 00:16:10.133 "name": "BaseBdev1", 00:16:10.133 "uuid": "eedb0a52-206f-4104-90c2-f6917e9a85b5", 00:16:10.133 "is_configured": true, 00:16:10.133 "data_offset": 0, 00:16:10.133 "data_size": 65536 00:16:10.133 }, 00:16:10.133 { 00:16:10.133 "name": "BaseBdev2", 00:16:10.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.133 "is_configured": false, 00:16:10.133 "data_offset": 0, 00:16:10.133 "data_size": 0 00:16:10.133 }, 00:16:10.133 { 00:16:10.133 "name": "BaseBdev3", 00:16:10.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.133 "is_configured": false, 00:16:10.133 "data_offset": 0, 00:16:10.133 "data_size": 0 00:16:10.133 } 00:16:10.133 ] 00:16:10.133 }' 00:16:10.133 02:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.133 02:39:34 -- common/autotest_common.sh@10 -- # set +x 00:16:10.700 02:39:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.958 [2024-07-11 02:39:35.869558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.958 BaseBdev2 00:16:10.958 02:39:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:10.958 02:39:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:10.958 02:39:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:10.958 02:39:35 -- common/autotest_common.sh@889 -- # local i 00:16:10.958 02:39:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:10.958 02:39:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:10.958 02:39:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.217 02:39:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.475 [ 00:16:11.475 { 00:16:11.475 "name": "BaseBdev2", 00:16:11.475 "aliases": [ 00:16:11.475 "7d7844c0-dd2c-4ebd-9fb6-35b69a478c7c" 00:16:11.475 ], 00:16:11.475 "product_name": "Malloc disk", 00:16:11.475 "block_size": 512, 00:16:11.475 "num_blocks": 65536, 00:16:11.475 "uuid": "7d7844c0-dd2c-4ebd-9fb6-35b69a478c7c", 00:16:11.475 "assigned_rate_limits": { 00:16:11.475 "rw_ios_per_sec": 0, 00:16:11.475 "rw_mbytes_per_sec": 0, 00:16:11.475 "r_mbytes_per_sec": 0, 00:16:11.475 "w_mbytes_per_sec": 0 00:16:11.475 }, 00:16:11.475 "claimed": true, 00:16:11.475 "claim_type": "exclusive_write", 00:16:11.475 "zoned": false, 00:16:11.475 "supported_io_types": { 00:16:11.475 "read": true, 00:16:11.475 "write": true, 00:16:11.475 "unmap": true, 00:16:11.475 "write_zeroes": true, 00:16:11.475 "flush": true, 00:16:11.475 "reset": true, 00:16:11.475 "compare": false, 00:16:11.475 "compare_and_write": false, 00:16:11.475 "abort": true, 00:16:11.475 "nvme_admin": false, 00:16:11.475 "nvme_io": false 00:16:11.475 }, 00:16:11.475 "memory_domains": [ 00:16:11.475 { 00:16:11.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.475 "dma_device_type": 2 00:16:11.475 } 00:16:11.475 ], 00:16:11.475 "driver_specific": {} 00:16:11.475 } 00:16:11.475 ] 00:16:11.475 02:39:36 -- common/autotest_common.sh@895 -- # return 0 00:16:11.475 02:39:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:11.475 02:39:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.475 02:39:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.476 02:39:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.734 02:39:36 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:11.734 "name": "Existed_Raid", 00:16:11.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.734 "strip_size_kb": 0, 00:16:11.734 "state": "configuring", 00:16:11.734 "raid_level": "raid1", 00:16:11.734 "superblock": false, 00:16:11.734 "num_base_bdevs": 3, 00:16:11.734 "num_base_bdevs_discovered": 2, 00:16:11.734 "num_base_bdevs_operational": 3, 00:16:11.734 "base_bdevs_list": [ 00:16:11.734 { 00:16:11.734 "name": "BaseBdev1", 00:16:11.734 "uuid": "eedb0a52-206f-4104-90c2-f6917e9a85b5", 00:16:11.734 "is_configured": true, 00:16:11.734 "data_offset": 0, 00:16:11.734 "data_size": 65536 00:16:11.734 }, 00:16:11.734 { 00:16:11.734 "name": "BaseBdev2", 00:16:11.734 "uuid": "7d7844c0-dd2c-4ebd-9fb6-35b69a478c7c", 00:16:11.734 "is_configured": true, 00:16:11.734 "data_offset": 0, 00:16:11.734 "data_size": 65536 00:16:11.734 }, 00:16:11.734 { 00:16:11.734 "name": "BaseBdev3", 00:16:11.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.734 "is_configured": false, 00:16:11.734 "data_offset": 0, 00:16:11.734 "data_size": 0 00:16:11.734 } 00:16:11.734 ] 00:16:11.734 }' 00:16:11.734 02:39:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.734 02:39:36 -- common/autotest_common.sh@10 -- # set +x 00:16:12.301 02:39:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:12.560 [2024-07-11 02:39:37.458753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.560 [2024-07-11 02:39:37.458830] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:16:12.560 [2024-07-11 02:39:37.458841] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:12.560 [2024-07-11 02:39:37.458982] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:12.560 [2024-07-11 02:39:37.459513] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:16:12.560 [2024-07-11 02:39:37.459536] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:16:12.560 [2024-07-11 02:39:37.459827] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.560 BaseBdev3 00:16:12.560 02:39:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:12.560 02:39:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:12.560 02:39:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:12.560 02:39:37 -- common/autotest_common.sh@889 -- # local i 00:16:12.560 02:39:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:12.560 02:39:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:12.560 02:39:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.818 02:39:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.818 [ 00:16:12.818 { 00:16:12.818 "name": "BaseBdev3", 00:16:12.818 "aliases": [ 00:16:12.818 "9f2aa51d-d347-439e-abf0-710b227ba923" 00:16:12.818 ], 00:16:12.818 "product_name": "Malloc disk", 00:16:12.818 "block_size": 512, 00:16:12.818 "num_blocks": 65536, 00:16:12.818 "uuid": "9f2aa51d-d347-439e-abf0-710b227ba923", 00:16:12.818 "assigned_rate_limits": { 00:16:12.818 "rw_ios_per_sec": 0, 00:16:12.818 "rw_mbytes_per_sec": 0, 
00:16:12.818 "r_mbytes_per_sec": 0, 00:16:12.818 "w_mbytes_per_sec": 0 00:16:12.818 }, 00:16:12.818 "claimed": true, 00:16:12.818 "claim_type": "exclusive_write", 00:16:12.818 "zoned": false, 00:16:12.818 "supported_io_types": { 00:16:12.818 "read": true, 00:16:12.818 "write": true, 00:16:12.818 "unmap": true, 00:16:12.818 "write_zeroes": true, 00:16:12.818 "flush": true, 00:16:12.818 "reset": true, 00:16:12.818 "compare": false, 00:16:12.818 "compare_and_write": false, 00:16:12.818 "abort": true, 00:16:12.818 "nvme_admin": false, 00:16:12.818 "nvme_io": false 00:16:12.818 }, 00:16:12.818 "memory_domains": [ 00:16:12.818 { 00:16:12.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.818 "dma_device_type": 2 00:16:12.818 } 00:16:12.818 ], 00:16:12.818 "driver_specific": {} 00:16:12.818 } 00:16:12.818 ] 00:16:12.818 02:39:37 -- common/autotest_common.sh@895 -- # return 0 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:12.818 02:39:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.819 02:39:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.077 02:39:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.077 "name": "Existed_Raid", 00:16:13.077 "uuid": "cc7e5b6d-a50a-407c-81c6-6d953e44f507", 00:16:13.077 "strip_size_kb": 0, 00:16:13.077 "state": "online", 00:16:13.077 "raid_level": "raid1", 00:16:13.077 "superblock": false, 00:16:13.077 "num_base_bdevs": 3, 00:16:13.077 "num_base_bdevs_discovered": 3, 00:16:13.077 "num_base_bdevs_operational": 3, 00:16:13.077 "base_bdevs_list": [ 00:16:13.077 { 00:16:13.077 "name": "BaseBdev1", 00:16:13.077 "uuid": "eedb0a52-206f-4104-90c2-f6917e9a85b5", 00:16:13.077 "is_configured": true, 00:16:13.077 "data_offset": 0, 00:16:13.077 "data_size": 65536 00:16:13.077 }, 00:16:13.077 { 00:16:13.077 "name": "BaseBdev2", 00:16:13.077 "uuid": "7d7844c0-dd2c-4ebd-9fb6-35b69a478c7c", 00:16:13.077 "is_configured": true, 00:16:13.077 "data_offset": 0, 00:16:13.077 "data_size": 65536 00:16:13.077 }, 00:16:13.077 { 00:16:13.077 "name": "BaseBdev3", 00:16:13.077 "uuid": "9f2aa51d-d347-439e-abf0-710b227ba923", 00:16:13.077 "is_configured": true, 00:16:13.077 "data_offset": 0, 00:16:13.077 "data_size": 65536 00:16:13.077 } 00:16:13.077 ] 00:16:13.077 }' 00:16:13.077 02:39:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.077 02:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:14.011 [2024-07-11 
02:39:38.931675] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:14.011 02:39:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.012 02:39:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.268 02:39:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.268 "name": "Existed_Raid", 00:16:14.268 "uuid": "cc7e5b6d-a50a-407c-81c6-6d953e44f507", 00:16:14.268 "strip_size_kb": 0, 00:16:14.268 "state": "online", 00:16:14.268 "raid_level": "raid1", 00:16:14.268 "superblock": false, 00:16:14.268 "num_base_bdevs": 3, 00:16:14.268 "num_base_bdevs_discovered": 2, 00:16:14.268 "num_base_bdevs_operational": 2, 00:16:14.268 "base_bdevs_list": [ 00:16:14.268 { 00:16:14.268 "name": null, 00:16:14.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.268 "is_configured": false, 00:16:14.268 "data_offset": 0, 00:16:14.268 "data_size": 65536 00:16:14.268 }, 00:16:14.268 { 00:16:14.268 "name": "BaseBdev2", 00:16:14.268 "uuid": "7d7844c0-dd2c-4ebd-9fb6-35b69a478c7c", 00:16:14.268 "is_configured": true, 00:16:14.268 "data_offset": 0, 00:16:14.268 "data_size": 65536 00:16:14.268 }, 00:16:14.268 { 00:16:14.268 "name": "BaseBdev3", 00:16:14.268 "uuid": "9f2aa51d-d347-439e-abf0-710b227ba923", 00:16:14.268 "is_configured": true, 00:16:14.268 "data_offset": 0, 00:16:14.268 "data_size": 65536 00:16:14.268 } 00:16:14.268 ] 00:16:14.268 }' 00:16:14.268 02:39:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.268 02:39:39 -- common/autotest_common.sh@10 -- # set +x 00:16:14.832 02:39:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:14.832 02:39:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.832 02:39:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.832 02:39:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.090 02:39:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.090 02:39:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.090 02:39:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:15.348 [2024-07-11 02:39:40.229539] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
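The block above hot-removes BaseBdev1 from the online raid1 array; because raid1 carries redundancy (has_redundancy returns 0 for it), expected_state stays online with num_base_bdevs_discovered dropping to 2, and the trace then starts deleting BaseBdev2. A hedged condensation of that verify step, reusing the bdev_raid_get_bdevs RPC and jq filter visible in the trace (the variable names are illustrative):

# Query the raid bdev once and assert it is still online with exactly
# two discovered members after the hot removal.
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[[ $state == online && $discovered -eq 2 ]] || exit 1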
00:16:15.348 02:39:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:15.348 02:39:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.348 02:39:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.348 02:39:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.605 02:39:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.605 02:39:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.605 02:39:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:15.863 [2024-07-11 02:39:40.730163] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.863 [2024-07-11 02:39:40.730222] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.863 [2024-07-11 02:39:40.730317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.863 [2024-07-11 02:39:40.743131] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.863 [2024-07-11 02:39:40.743168] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:16:15.863 02:39:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:15.863 02:39:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.863 02:39:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.863 02:39:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.121 02:39:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:16.121 02:39:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:16.121 02:39:40 -- bdev/bdev_raid.sh@287 -- # killprocess 129611 00:16:16.121 02:39:40 -- common/autotest_common.sh@926 -- # '[' -z 129611 ']' 00:16:16.121 02:39:40 -- common/autotest_common.sh@930 -- # kill -0 129611 00:16:16.121 02:39:40 -- common/autotest_common.sh@931 -- # uname 00:16:16.121 02:39:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.121 02:39:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129611 00:16:16.121 killing process with pid 129611 00:16:16.121 02:39:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.121 02:39:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.121 02:39:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129611' 00:16:16.121 02:39:40 -- common/autotest_common.sh@945 -- # kill 129611 00:16:16.121 02:39:40 -- common/autotest_common.sh@950 -- # wait 129611 00:16:16.121 [2024-07-11 02:39:40.981604] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.121 [2024-07-11 02:39:40.981720] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.379 ************************************ 00:16:16.379 END TEST raid_state_function_test 00:16:16.379 ************************************ 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:16.379 00:16:16.379 real 0m10.850s 00:16:16.379 user 0m20.010s 00:16:16.379 sys 0m1.290s 00:16:16.379 02:39:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.379 02:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
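With the non-superblock pass finished and process 129611 killed, run_test re-enters raid_state_function_test with the same raid1/3 arguments but superblock=true. A sketch of that parameterization as implied by the xtrace (argument handling simplified; the real function lives in the bdev_raid.sh harness):

# Hypothetical skeleton of the function being re-run below with
# superblock=true; it only changes whether -s reaches bdev_raid_create.
raid_state_function_test() {
    local raid_level=$1      # e.g. raid1
    local num_base_bdevs=$2  # e.g. 3
    local superblock=$3      # true => create the array with -s
    local superblock_create_arg=
    [ "$superblock" = true ] && superblock_create_arg=-s
    local base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")
    done
    echo rpc.py bdev_raid_create $superblock_create_arg -r "$raid_level" \
        -b "${base_bdevs[*]}" -n Existed_Raid
}
raid_state_function_test raid1 3 true   # mirrors the run_test call above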
00:16:16.379 02:39:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:16.379 02:39:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.379 02:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 ************************************ 00:16:16.379 START TEST raid_state_function_test_sb 00:16:16.379 ************************************ 00:16:16.379 02:39:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=129986 00:16:16.379 Process raid pid: 129986 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129986' 00:16:16.379 02:39:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129986 /var/tmp/spdk-raid.sock 00:16:16.379 02:39:41 -- common/autotest_common.sh@819 -- # '[' -z 129986 ']' 00:16:16.379 02:39:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.379 02:39:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.380 02:39:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
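The only functional difference in this pass is superblock_create_arg=-s, so bdev_raid_create writes on-disk metadata to every member. In the JSON dumps further below, that reservation shows up as data_offset 2048 and data_size 63488 on each 65536-block, 512-byte-block malloc bdev: 65536 - 2048 = 63488 usable blocks, i.e. 2048 * 512 B = 1 MiB reserved per member, where the earlier non-superblock run reported offset 0 and the full 65536. A hedged one-liner for checking that reservation (the jq aggregation is my construction, not the harness's):

# Assert every configured member reports the superblock reservation.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all |
    jq -e '[.[] | select(.name == "Existed_Raid").base_bdevs_list[]
            | select(.is_configured)]
           | all(.data_offset == 2048 and .data_size == 63488)'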
00:16:16.380 02:39:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.380 02:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:16.380 02:39:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:16.380 [2024-07-11 02:39:41.405360] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:16.380 [2024-07-11 02:39:41.405766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.638 [2024-07-11 02:39:41.555454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.638 [2024-07-11 02:39:41.626869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.638 [2024-07-11 02:39:41.682968] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.571 02:39:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.571 02:39:42 -- common/autotest_common.sh@852 -- # return 0 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:17.571 [2024-07-11 02:39:42.581434] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.571 [2024-07-11 02:39:42.581671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.571 [2024-07-11 02:39:42.581780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.571 [2024-07-11 02:39:42.581835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.571 [2024-07-11 02:39:42.581924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.571 [2024-07-11 02:39:42.581999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.571 02:39:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.829 02:39:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.829 "name": "Existed_Raid", 00:16:17.829 "uuid": "e1520764-a237-4341-b579-0bed088d85f5", 00:16:17.829 "strip_size_kb": 0, 00:16:17.829 "state": "configuring", 00:16:17.829 "raid_level": "raid1", 00:16:17.829 "superblock": true, 00:16:17.829 "num_base_bdevs": 3, 00:16:17.829 
"num_base_bdevs_discovered": 0, 00:16:17.829 "num_base_bdevs_operational": 3, 00:16:17.829 "base_bdevs_list": [ 00:16:17.829 { 00:16:17.829 "name": "BaseBdev1", 00:16:17.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.829 "is_configured": false, 00:16:17.829 "data_offset": 0, 00:16:17.829 "data_size": 0 00:16:17.829 }, 00:16:17.829 { 00:16:17.829 "name": "BaseBdev2", 00:16:17.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.829 "is_configured": false, 00:16:17.829 "data_offset": 0, 00:16:17.829 "data_size": 0 00:16:17.829 }, 00:16:17.829 { 00:16:17.829 "name": "BaseBdev3", 00:16:17.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.829 "is_configured": false, 00:16:17.829 "data_offset": 0, 00:16:17.829 "data_size": 0 00:16:17.829 } 00:16:17.829 ] 00:16:17.829 }' 00:16:17.829 02:39:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.829 02:39:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.762 02:39:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:18.762 [2024-07-11 02:39:43.797593] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.762 [2024-07-11 02:39:43.797775] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:18.762 02:39:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:19.021 [2024-07-11 02:39:43.997714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.021 [2024-07-11 02:39:43.997945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.021 [2024-07-11 02:39:43.998056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.021 [2024-07-11 02:39:43.998116] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.021 [2024-07-11 02:39:43.998224] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.021 [2024-07-11 02:39:43.998287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.021 02:39:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.278 [2024-07-11 02:39:44.208777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.278 BaseBdev1 00:16:19.278 02:39:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:19.278 02:39:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:19.278 02:39:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:19.278 02:39:44 -- common/autotest_common.sh@889 -- # local i 00:16:19.278 02:39:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:19.278 02:39:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:19.278 02:39:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.548 02:39:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.840 [ 00:16:19.840 { 00:16:19.840 "name": "BaseBdev1", 00:16:19.840 "aliases": [ 00:16:19.840 
"4411a7ad-bf9a-46cb-8df6-c39fa962eae2" 00:16:19.840 ], 00:16:19.840 "product_name": "Malloc disk", 00:16:19.840 "block_size": 512, 00:16:19.840 "num_blocks": 65536, 00:16:19.840 "uuid": "4411a7ad-bf9a-46cb-8df6-c39fa962eae2", 00:16:19.840 "assigned_rate_limits": { 00:16:19.840 "rw_ios_per_sec": 0, 00:16:19.840 "rw_mbytes_per_sec": 0, 00:16:19.840 "r_mbytes_per_sec": 0, 00:16:19.840 "w_mbytes_per_sec": 0 00:16:19.840 }, 00:16:19.840 "claimed": true, 00:16:19.840 "claim_type": "exclusive_write", 00:16:19.840 "zoned": false, 00:16:19.840 "supported_io_types": { 00:16:19.840 "read": true, 00:16:19.840 "write": true, 00:16:19.840 "unmap": true, 00:16:19.840 "write_zeroes": true, 00:16:19.840 "flush": true, 00:16:19.840 "reset": true, 00:16:19.840 "compare": false, 00:16:19.840 "compare_and_write": false, 00:16:19.840 "abort": true, 00:16:19.840 "nvme_admin": false, 00:16:19.840 "nvme_io": false 00:16:19.841 }, 00:16:19.841 "memory_domains": [ 00:16:19.841 { 00:16:19.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.841 "dma_device_type": 2 00:16:19.841 } 00:16:19.841 ], 00:16:19.841 "driver_specific": {} 00:16:19.841 } 00:16:19.841 ] 00:16:19.841 02:39:44 -- common/autotest_common.sh@895 -- # return 0 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.841 "name": "Existed_Raid", 00:16:19.841 "uuid": "c2110d0e-41f2-42d3-a21d-2018db287d71", 00:16:19.841 "strip_size_kb": 0, 00:16:19.841 "state": "configuring", 00:16:19.841 "raid_level": "raid1", 00:16:19.841 "superblock": true, 00:16:19.841 "num_base_bdevs": 3, 00:16:19.841 "num_base_bdevs_discovered": 1, 00:16:19.841 "num_base_bdevs_operational": 3, 00:16:19.841 "base_bdevs_list": [ 00:16:19.841 { 00:16:19.841 "name": "BaseBdev1", 00:16:19.841 "uuid": "4411a7ad-bf9a-46cb-8df6-c39fa962eae2", 00:16:19.841 "is_configured": true, 00:16:19.841 "data_offset": 2048, 00:16:19.841 "data_size": 63488 00:16:19.841 }, 00:16:19.841 { 00:16:19.841 "name": "BaseBdev2", 00:16:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.841 "is_configured": false, 00:16:19.841 "data_offset": 0, 00:16:19.841 "data_size": 0 00:16:19.841 }, 00:16:19.841 { 00:16:19.841 "name": "BaseBdev3", 00:16:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.841 "is_configured": false, 00:16:19.841 "data_offset": 0, 00:16:19.841 "data_size": 0 00:16:19.841 } 00:16:19.841 ] 00:16:19.841 }' 00:16:19.841 02:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.841 02:39:44 -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.408 02:39:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.666 [2024-07-11 02:39:45.665078] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.667 [2024-07-11 02:39:45.665247] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:16:20.667 02:39:45 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:20.667 02:39:45 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:20.926 02:39:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.185 BaseBdev1 00:16:21.185 02:39:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:21.185 02:39:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:21.185 02:39:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:21.185 02:39:46 -- common/autotest_common.sh@889 -- # local i 00:16:21.185 02:39:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:21.185 02:39:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:21.185 02:39:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.444 02:39:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.444 [ 00:16:21.444 { 00:16:21.444 "name": "BaseBdev1", 00:16:21.444 "aliases": [ 00:16:21.444 "02e3aedb-1453-44ab-aedb-061894c97b37" 00:16:21.444 ], 00:16:21.444 "product_name": "Malloc disk", 00:16:21.444 "block_size": 512, 00:16:21.444 "num_blocks": 65536, 00:16:21.444 "uuid": "02e3aedb-1453-44ab-aedb-061894c97b37", 00:16:21.444 "assigned_rate_limits": { 00:16:21.444 "rw_ios_per_sec": 0, 00:16:21.444 "rw_mbytes_per_sec": 0, 00:16:21.444 "r_mbytes_per_sec": 0, 00:16:21.444 "w_mbytes_per_sec": 0 00:16:21.444 }, 00:16:21.444 "claimed": false, 00:16:21.444 "zoned": false, 00:16:21.444 "supported_io_types": { 00:16:21.444 "read": true, 00:16:21.444 "write": true, 00:16:21.444 "unmap": true, 00:16:21.444 "write_zeroes": true, 00:16:21.444 "flush": true, 00:16:21.444 "reset": true, 00:16:21.444 "compare": false, 00:16:21.444 "compare_and_write": false, 00:16:21.444 "abort": true, 00:16:21.444 "nvme_admin": false, 00:16:21.444 "nvme_io": false 00:16:21.444 }, 00:16:21.444 "memory_domains": [ 00:16:21.444 { 00:16:21.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.444 "dma_device_type": 2 00:16:21.444 } 00:16:21.444 ], 00:16:21.444 "driver_specific": {} 00:16:21.444 } 00:16:21.444 ] 00:16:21.444 02:39:46 -- common/autotest_common.sh@895 -- # return 0 00:16:21.444 02:39:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:21.703 [2024-07-11 02:39:46.727471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.703 [2024-07-11 02:39:46.729205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.703 [2024-07-11 02:39:46.729360] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.703 [2024-07-11 
02:39:46.729455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.703 [2024-07-11 02:39:46.729574] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.703 02:39:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.962 02:39:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.962 "name": "Existed_Raid", 00:16:21.962 "uuid": "1aaeda7c-4f4c-411b-8ab5-51f5f5c1aad6", 00:16:21.962 "strip_size_kb": 0, 00:16:21.962 "state": "configuring", 00:16:21.962 "raid_level": "raid1", 00:16:21.962 "superblock": true, 00:16:21.962 "num_base_bdevs": 3, 00:16:21.962 "num_base_bdevs_discovered": 1, 00:16:21.962 "num_base_bdevs_operational": 3, 00:16:21.962 "base_bdevs_list": [ 00:16:21.962 { 00:16:21.962 "name": "BaseBdev1", 00:16:21.962 "uuid": "02e3aedb-1453-44ab-aedb-061894c97b37", 00:16:21.962 "is_configured": true, 00:16:21.962 "data_offset": 2048, 00:16:21.962 "data_size": 63488 00:16:21.962 }, 00:16:21.962 { 00:16:21.962 "name": "BaseBdev2", 00:16:21.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.962 "is_configured": false, 00:16:21.962 "data_offset": 0, 00:16:21.962 "data_size": 0 00:16:21.962 }, 00:16:21.962 { 00:16:21.962 "name": "BaseBdev3", 00:16:21.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.962 "is_configured": false, 00:16:21.962 "data_offset": 0, 00:16:21.962 "data_size": 0 00:16:21.962 } 00:16:21.962 ] 00:16:21.962 }' 00:16:21.962 02:39:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.962 02:39:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.898 02:39:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.898 [2024-07-11 02:39:47.897781] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.898 BaseBdev2 00:16:22.898 02:39:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:22.898 02:39:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:22.898 02:39:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.898 02:39:47 -- common/autotest_common.sh@889 -- # local i 00:16:22.898 02:39:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.898 02:39:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.898 02:39:47 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.156 02:39:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.415 [ 00:16:23.415 { 00:16:23.415 "name": "BaseBdev2", 00:16:23.415 "aliases": [ 00:16:23.415 "335fe7ea-55d7-4d27-a2d7-8d6ce1ce0449" 00:16:23.415 ], 00:16:23.415 "product_name": "Malloc disk", 00:16:23.415 "block_size": 512, 00:16:23.415 "num_blocks": 65536, 00:16:23.415 "uuid": "335fe7ea-55d7-4d27-a2d7-8d6ce1ce0449", 00:16:23.415 "assigned_rate_limits": { 00:16:23.415 "rw_ios_per_sec": 0, 00:16:23.415 "rw_mbytes_per_sec": 0, 00:16:23.415 "r_mbytes_per_sec": 0, 00:16:23.415 "w_mbytes_per_sec": 0 00:16:23.415 }, 00:16:23.415 "claimed": true, 00:16:23.415 "claim_type": "exclusive_write", 00:16:23.415 "zoned": false, 00:16:23.415 "supported_io_types": { 00:16:23.415 "read": true, 00:16:23.415 "write": true, 00:16:23.415 "unmap": true, 00:16:23.415 "write_zeroes": true, 00:16:23.415 "flush": true, 00:16:23.415 "reset": true, 00:16:23.415 "compare": false, 00:16:23.415 "compare_and_write": false, 00:16:23.415 "abort": true, 00:16:23.415 "nvme_admin": false, 00:16:23.415 "nvme_io": false 00:16:23.415 }, 00:16:23.415 "memory_domains": [ 00:16:23.415 { 00:16:23.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.415 "dma_device_type": 2 00:16:23.415 } 00:16:23.415 ], 00:16:23.415 "driver_specific": {} 00:16:23.415 } 00:16:23.415 ] 00:16:23.415 02:39:48 -- common/autotest_common.sh@895 -- # return 0 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.415 02:39:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.673 02:39:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.673 "name": "Existed_Raid", 00:16:23.674 "uuid": "1aaeda7c-4f4c-411b-8ab5-51f5f5c1aad6", 00:16:23.674 "strip_size_kb": 0, 00:16:23.674 "state": "configuring", 00:16:23.674 "raid_level": "raid1", 00:16:23.674 "superblock": true, 00:16:23.674 "num_base_bdevs": 3, 00:16:23.674 "num_base_bdevs_discovered": 2, 00:16:23.674 "num_base_bdevs_operational": 3, 00:16:23.674 "base_bdevs_list": [ 00:16:23.674 { 00:16:23.674 "name": "BaseBdev1", 00:16:23.674 "uuid": "02e3aedb-1453-44ab-aedb-061894c97b37", 00:16:23.674 "is_configured": true, 00:16:23.674 "data_offset": 2048, 00:16:23.674 "data_size": 63488 00:16:23.674 }, 00:16:23.674 { 00:16:23.674 "name": "BaseBdev2", 00:16:23.674 "uuid": 
"335fe7ea-55d7-4d27-a2d7-8d6ce1ce0449", 00:16:23.674 "is_configured": true, 00:16:23.674 "data_offset": 2048, 00:16:23.674 "data_size": 63488 00:16:23.674 }, 00:16:23.674 { 00:16:23.674 "name": "BaseBdev3", 00:16:23.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.674 "is_configured": false, 00:16:23.674 "data_offset": 0, 00:16:23.674 "data_size": 0 00:16:23.674 } 00:16:23.674 ] 00:16:23.674 }' 00:16:23.674 02:39:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.674 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:16:24.241 02:39:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:24.500 [2024-07-11 02:39:49.435167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.500 [2024-07-11 02:39:49.435468] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:16:24.500 [2024-07-11 02:39:49.435491] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.500 [2024-07-11 02:39:49.435645] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:24.500 BaseBdev3 00:16:24.500 [2024-07-11 02:39:49.436041] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:16:24.500 [2024-07-11 02:39:49.436064] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:16:24.500 [2024-07-11 02:39:49.436218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.500 02:39:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:24.500 02:39:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:24.500 02:39:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:24.500 02:39:49 -- common/autotest_common.sh@889 -- # local i 00:16:24.500 02:39:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:24.500 02:39:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:24.500 02:39:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.759 02:39:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.017 [ 00:16:25.017 { 00:16:25.017 "name": "BaseBdev3", 00:16:25.017 "aliases": [ 00:16:25.017 "2eefc210-dca4-4bc6-8a1a-3e899e302e41" 00:16:25.017 ], 00:16:25.017 "product_name": "Malloc disk", 00:16:25.017 "block_size": 512, 00:16:25.017 "num_blocks": 65536, 00:16:25.017 "uuid": "2eefc210-dca4-4bc6-8a1a-3e899e302e41", 00:16:25.017 "assigned_rate_limits": { 00:16:25.017 "rw_ios_per_sec": 0, 00:16:25.017 "rw_mbytes_per_sec": 0, 00:16:25.017 "r_mbytes_per_sec": 0, 00:16:25.017 "w_mbytes_per_sec": 0 00:16:25.017 }, 00:16:25.017 "claimed": true, 00:16:25.017 "claim_type": "exclusive_write", 00:16:25.017 "zoned": false, 00:16:25.017 "supported_io_types": { 00:16:25.017 "read": true, 00:16:25.017 "write": true, 00:16:25.017 "unmap": true, 00:16:25.017 "write_zeroes": true, 00:16:25.017 "flush": true, 00:16:25.017 "reset": true, 00:16:25.017 "compare": false, 00:16:25.017 "compare_and_write": false, 00:16:25.017 "abort": true, 00:16:25.017 "nvme_admin": false, 00:16:25.017 "nvme_io": false 00:16:25.017 }, 00:16:25.017 "memory_domains": [ 00:16:25.017 { 00:16:25.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.017 
"dma_device_type": 2 00:16:25.017 } 00:16:25.017 ], 00:16:25.017 "driver_specific": {} 00:16:25.017 } 00:16:25.017 ] 00:16:25.017 02:39:49 -- common/autotest_common.sh@895 -- # return 0 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.017 02:39:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.276 02:39:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.276 "name": "Existed_Raid", 00:16:25.276 "uuid": "1aaeda7c-4f4c-411b-8ab5-51f5f5c1aad6", 00:16:25.276 "strip_size_kb": 0, 00:16:25.276 "state": "online", 00:16:25.276 "raid_level": "raid1", 00:16:25.276 "superblock": true, 00:16:25.276 "num_base_bdevs": 3, 00:16:25.276 "num_base_bdevs_discovered": 3, 00:16:25.276 "num_base_bdevs_operational": 3, 00:16:25.276 "base_bdevs_list": [ 00:16:25.276 { 00:16:25.276 "name": "BaseBdev1", 00:16:25.276 "uuid": "02e3aedb-1453-44ab-aedb-061894c97b37", 00:16:25.276 "is_configured": true, 00:16:25.276 "data_offset": 2048, 00:16:25.276 "data_size": 63488 00:16:25.276 }, 00:16:25.276 { 00:16:25.276 "name": "BaseBdev2", 00:16:25.276 "uuid": "335fe7ea-55d7-4d27-a2d7-8d6ce1ce0449", 00:16:25.276 "is_configured": true, 00:16:25.276 "data_offset": 2048, 00:16:25.276 "data_size": 63488 00:16:25.276 }, 00:16:25.276 { 00:16:25.276 "name": "BaseBdev3", 00:16:25.276 "uuid": "2eefc210-dca4-4bc6-8a1a-3e899e302e41", 00:16:25.276 "is_configured": true, 00:16:25.276 "data_offset": 2048, 00:16:25.276 "data_size": 63488 00:16:25.276 } 00:16:25.276 ] 00:16:25.276 }' 00:16:25.276 02:39:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.276 02:39:50 -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 02:39:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:26.101 [2024-07-11 02:39:51.147682] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.101 02:39:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.360 02:39:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.360 "name": "Existed_Raid", 00:16:26.360 "uuid": "1aaeda7c-4f4c-411b-8ab5-51f5f5c1aad6", 00:16:26.360 "strip_size_kb": 0, 00:16:26.360 "state": "online", 00:16:26.360 "raid_level": "raid1", 00:16:26.360 "superblock": true, 00:16:26.360 "num_base_bdevs": 3, 00:16:26.360 "num_base_bdevs_discovered": 2, 00:16:26.360 "num_base_bdevs_operational": 2, 00:16:26.360 "base_bdevs_list": [ 00:16:26.360 { 00:16:26.360 "name": null, 00:16:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.360 "is_configured": false, 00:16:26.360 "data_offset": 2048, 00:16:26.360 "data_size": 63488 00:16:26.360 }, 00:16:26.360 { 00:16:26.360 "name": "BaseBdev2", 00:16:26.360 "uuid": "335fe7ea-55d7-4d27-a2d7-8d6ce1ce0449", 00:16:26.360 "is_configured": true, 00:16:26.360 "data_offset": 2048, 00:16:26.360 "data_size": 63488 00:16:26.360 }, 00:16:26.360 { 00:16:26.360 "name": "BaseBdev3", 00:16:26.360 "uuid": "2eefc210-dca4-4bc6-8a1a-3e899e302e41", 00:16:26.360 "is_configured": true, 00:16:26.360 "data_offset": 2048, 00:16:26.360 "data_size": 63488 00:16:26.360 } 00:16:26.360 ] 00:16:26.360 }' 00:16:26.360 02:39:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.360 02:39:51 -- common/autotest_common.sh@10 -- # set +x 00:16:26.927 02:39:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:26.927 02:39:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:26.927 02:39:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.927 02:39:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:27.186 02:39:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:27.186 02:39:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.186 02:39:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:27.445 [2024-07-11 02:39:52.489551] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.445 02:39:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:27.445 02:39:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.445 02:39:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.445 02:39:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:27.703 02:39:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:27.703 02:39:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.703 02:39:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:27.962 
[2024-07-11 02:39:52.975199] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.962 [2024-07-11 02:39:52.975233] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.962 [2024-07-11 02:39:52.975331] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.962 [2024-07-11 02:39:52.985876] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.962 [2024-07-11 02:39:52.985984] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:16:27.962 02:39:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:27.962 02:39:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.962 02:39:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.962 02:39:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.220 02:39:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:28.220 02:39:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:28.221 02:39:53 -- bdev/bdev_raid.sh@287 -- # killprocess 129986 00:16:28.221 02:39:53 -- common/autotest_common.sh@926 -- # '[' -z 129986 ']' 00:16:28.221 02:39:53 -- common/autotest_common.sh@930 -- # kill -0 129986 00:16:28.221 02:39:53 -- common/autotest_common.sh@931 -- # uname 00:16:28.221 02:39:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.221 02:39:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129986 00:16:28.221 killing process with pid 129986 00:16:28.221 02:39:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.221 02:39:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.221 02:39:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129986' 00:16:28.221 02:39:53 -- common/autotest_common.sh@945 -- # kill 129986 00:16:28.221 02:39:53 -- common/autotest_common.sh@950 -- # wait 129986 00:16:28.221 [2024-07-11 02:39:53.224833] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.221 [2024-07-11 02:39:53.224905] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.479 ************************************ 00:16:28.479 END TEST raid_state_function_test_sb 00:16:28.479 ************************************ 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:28.479 00:16:28.479 real 0m12.094s 00:16:28.479 user 0m22.438s 00:16:28.479 sys 0m1.469s 00:16:28.479 02:39:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.479 02:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:28.479 02:39:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:28.479 02:39:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:28.479 02:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:28.479 ************************************ 00:16:28.479 START TEST raid_superblock_test 00:16:28.479 ************************************ 00:16:28.479 02:39:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:28.479 02:39:53 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=130391 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130391 /var/tmp/spdk-raid.sock 00:16:28.479 02:39:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:28.479 02:39:53 -- common/autotest_common.sh@819 -- # '[' -z 130391 ']' 00:16:28.479 02:39:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.479 02:39:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:28.479 02:39:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.479 02:39:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:28.479 02:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:28.479 [2024-07-11 02:39:53.536068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
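[Editor's note: the lines above mark the start of raid_superblock_test, which runs against a bare bdev_svc app rather than a full SPDK target: the harness launches test/app/bdev_svc/bdev_svc with -r /var/tmp/spdk-raid.sock -L bdev_raid, records its pid (130391 here), and drives everything else through scripts/rpc.py over that UNIX socket. A rough stand-alone equivalent of the start-up and tear-down, assuming a built spdk checkout; the suite's waitforlisten helper polls the pid and socket more carefully than this sketch:

  # start a bare bdev service with raid debug logging, then wait for its RPC socket
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done
  # ... create malloc/passthru base bdevs and raid bdevs over RPC ...
  kill "$raid_pid" && wait "$raid_pid"

Passing -s on every rpc.py invocation is what keeps this test's RPC traffic off the default /var/tmp/spdk.sock and on the dedicated spdk-raid.sock.]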
00:16:28.479 [2024-07-11 02:39:53.536259] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130391 ] 00:16:28.738 [2024-07-11 02:39:53.673368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.738 [2024-07-11 02:39:53.728100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.738 [2024-07-11 02:39:53.778147] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.671 02:39:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.671 02:39:54 -- common/autotest_common.sh@852 -- # return 0 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:29.671 malloc1 00:16:29.671 02:39:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.929 [2024-07-11 02:39:54.955944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.929 [2024-07-11 02:39:54.956062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.929 [2024-07-11 02:39:54.956094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:16:29.929 [2024-07-11 02:39:54.956133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.929 [2024-07-11 02:39:54.958586] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.929 [2024-07-11 02:39:54.958666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.929 pt1 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.929 02:39:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:30.187 malloc2 00:16:30.187 02:39:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:30.445 [2024-07-11 02:39:55.398140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.445 [2024-07-11 02:39:55.398250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.445 [2024-07-11 02:39:55.398285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:30.445 [2024-07-11 02:39:55.398323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.445 [2024-07-11 02:39:55.400305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.445 [2024-07-11 02:39:55.400369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.445 pt2 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.445 02:39:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:30.704 malloc3 00:16:30.704 02:39:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.962 [2024-07-11 02:39:55.881589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.962 [2024-07-11 02:39:55.881695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.962 [2024-07-11 02:39:55.881734] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:30.962 [2024-07-11 02:39:55.881821] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.962 [2024-07-11 02:39:55.883737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.962 [2024-07-11 02:39:55.883790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.962 pt3 00:16:30.962 02:39:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:30.962 02:39:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.962 02:39:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:31.230 [2024-07-11 02:39:56.069682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.230 [2024-07-11 02:39:56.071356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.230 [2024-07-11 02:39:56.071426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.230 [2024-07-11 02:39:56.071641] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:16:31.230 [2024-07-11 02:39:56.071673] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:31.230 [2024-07-11 02:39:56.071811] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:16:31.230 [2024-07-11 02:39:56.072200] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:16:31.230 [2024-07-11 02:39:56.072224] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:16:31.230 [2024-07-11 02:39:56.072376] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.230 02:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.230 "name": "raid_bdev1", 00:16:31.230 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:31.230 "strip_size_kb": 0, 00:16:31.230 "state": "online", 00:16:31.230 "raid_level": "raid1", 00:16:31.230 "superblock": true, 00:16:31.231 "num_base_bdevs": 3, 00:16:31.231 "num_base_bdevs_discovered": 3, 00:16:31.231 "num_base_bdevs_operational": 3, 00:16:31.231 "base_bdevs_list": [ 00:16:31.231 { 00:16:31.231 "name": "pt1", 00:16:31.231 "uuid": "8dd3fd4e-494f-5e1b-9866-1fb11d01773d", 00:16:31.231 "is_configured": true, 00:16:31.231 "data_offset": 2048, 00:16:31.231 "data_size": 63488 00:16:31.231 }, 00:16:31.231 { 00:16:31.231 "name": "pt2", 00:16:31.231 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:31.231 "is_configured": true, 00:16:31.231 "data_offset": 2048, 00:16:31.231 "data_size": 63488 00:16:31.231 }, 00:16:31.231 { 00:16:31.231 "name": "pt3", 00:16:31.231 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:31.231 "is_configured": true, 00:16:31.231 "data_offset": 2048, 00:16:31.231 "data_size": 63488 00:16:31.231 } 00:16:31.231 ] 00:16:31.231 }' 00:16:31.231 02:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.231 02:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:32.187 02:39:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:32.187 02:39:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:32.187 [2024-07-11 02:39:57.162066] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.187 02:39:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=94c40156-2974-4972-a341-f11407f63cde 00:16:32.187 02:39:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 94c40156-2974-4972-a341-f11407f63cde ']' 00:16:32.187 02:39:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:32.446 [2024-07-11 02:39:57.349863] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.446 [2024-07-11 02:39:57.349889] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.446 [2024-07-11 02:39:57.349977] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.446 [2024-07-11 02:39:57.350067] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.446 [2024-07-11 02:39:57.350079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:16:32.446 02:39:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.446 02:39:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:32.704 02:39:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:32.704 02:39:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:32.704 02:39:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.705 02:39:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:32.963 02:39:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.964 02:39:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:32.964 02:39:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.964 02:39:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:33.222 02:39:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:33.222 02:39:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:33.479 02:39:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:33.479 02:39:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.479 02:39:58 -- common/autotest_common.sh@640 -- # local es=0 00:16:33.479 02:39:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.479 02:39:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.479 02:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:33.479 02:39:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.479 02:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:33.479 02:39:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.479 02:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:33.479 02:39:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.479 02:39:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:33.479 02:39:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:33.737 [2024-07-11 02:39:58.662104] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:33.737 [2024-07-11 02:39:58.664069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:33.737 [2024-07-11 02:39:58.664122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:33.737 [2024-07-11 02:39:58.664177] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:33.737 [2024-07-11 02:39:58.664271] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:33.737 [2024-07-11 02:39:58.664304] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:33.737 [2024-07-11 02:39:58.664395] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.737 [2024-07-11 02:39:58.664408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:16:33.737 request: 00:16:33.737 { 00:16:33.737 "name": "raid_bdev1", 00:16:33.737 "raid_level": "raid1", 00:16:33.737 "base_bdevs": [ 00:16:33.737 "malloc1", 00:16:33.737 "malloc2", 00:16:33.737 "malloc3" 00:16:33.737 ], 00:16:33.737 "superblock": false, 00:16:33.737 "method": "bdev_raid_create", 00:16:33.737 "req_id": 1 00:16:33.737 } 00:16:33.737 Got JSON-RPC error response 00:16:33.737 response: 00:16:33.737 { 00:16:33.737 "code": -17, 00:16:33.737 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:33.737 } 00:16:33.737 02:39:58 -- common/autotest_common.sh@643 -- # es=1 00:16:33.737 02:39:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:33.737 02:39:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:33.737 02:39:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:33.737 02:39:58 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.737 02:39:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:33.994 02:39:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:33.994 02:39:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:33.994 02:39:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.251 [2024-07-11 02:39:59.170213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.251 [2024-07-11 02:39:59.170309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.251 [2024-07-11 02:39:59.170348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:34.251 [2024-07-11 02:39:59.170371] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.251 [2024-07-11 02:39:59.172562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.251 [2024-07-11 02:39:59.172611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.251 [2024-07-11 02:39:59.172724] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:34.251 [2024-07-11 02:39:59.172786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.251 pt1 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:34.251 
02:39:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.251 02:39:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.508 02:39:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.509 "name": "raid_bdev1", 00:16:34.509 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:34.509 "strip_size_kb": 0, 00:16:34.509 "state": "configuring", 00:16:34.509 "raid_level": "raid1", 00:16:34.509 "superblock": true, 00:16:34.509 "num_base_bdevs": 3, 00:16:34.509 "num_base_bdevs_discovered": 1, 00:16:34.509 "num_base_bdevs_operational": 3, 00:16:34.509 "base_bdevs_list": [ 00:16:34.509 { 00:16:34.509 "name": "pt1", 00:16:34.509 "uuid": "8dd3fd4e-494f-5e1b-9866-1fb11d01773d", 00:16:34.509 "is_configured": true, 00:16:34.509 "data_offset": 2048, 00:16:34.509 "data_size": 63488 00:16:34.509 }, 00:16:34.509 { 00:16:34.509 "name": null, 00:16:34.509 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:34.509 "is_configured": false, 00:16:34.509 "data_offset": 2048, 00:16:34.509 "data_size": 63488 00:16:34.509 }, 00:16:34.509 { 00:16:34.509 "name": null, 00:16:34.509 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:34.509 "is_configured": false, 00:16:34.509 "data_offset": 2048, 00:16:34.509 "data_size": 63488 00:16:34.509 } 00:16:34.509 ] 00:16:34.509 }' 00:16:34.509 02:39:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.509 02:39:59 -- common/autotest_common.sh@10 -- # set +x 00:16:35.073 02:40:00 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:35.073 02:40:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.331 [2024-07-11 02:40:00.306574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.331 [2024-07-11 02:40:00.306722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.331 [2024-07-11 02:40:00.306766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.332 [2024-07-11 02:40:00.306801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.332 [2024-07-11 02:40:00.307361] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.332 [2024-07-11 02:40:00.307461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.332 [2024-07-11 02:40:00.307562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:35.332 [2024-07-11 02:40:00.307620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.332 pt2 00:16:35.332 02:40:00 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:35.590 [2024-07-11 02:40:00.558641] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.590 02:40:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.848 02:40:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.848 "name": "raid_bdev1", 00:16:35.848 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:35.848 "strip_size_kb": 0, 00:16:35.848 "state": "configuring", 00:16:35.848 "raid_level": "raid1", 00:16:35.848 "superblock": true, 00:16:35.848 "num_base_bdevs": 3, 00:16:35.848 "num_base_bdevs_discovered": 1, 00:16:35.848 "num_base_bdevs_operational": 3, 00:16:35.848 "base_bdevs_list": [ 00:16:35.848 { 00:16:35.848 "name": "pt1", 00:16:35.848 "uuid": "8dd3fd4e-494f-5e1b-9866-1fb11d01773d", 00:16:35.848 "is_configured": true, 00:16:35.848 "data_offset": 2048, 00:16:35.848 "data_size": 63488 00:16:35.848 }, 00:16:35.848 { 00:16:35.848 "name": null, 00:16:35.848 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:35.848 "is_configured": false, 00:16:35.848 "data_offset": 2048, 00:16:35.848 "data_size": 63488 00:16:35.848 }, 00:16:35.848 { 00:16:35.848 "name": null, 00:16:35.848 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:35.848 "is_configured": false, 00:16:35.848 "data_offset": 2048, 00:16:35.848 "data_size": 63488 00:16:35.848 } 00:16:35.848 ] 00:16:35.848 }' 00:16:35.848 02:40:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.848 02:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.781 [2024-07-11 02:40:01.753726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.781 [2024-07-11 02:40:01.753830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.781 [2024-07-11 02:40:01.753865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:36.781 [2024-07-11 02:40:01.753893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.781 [2024-07-11 02:40:01.754359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.781 [2024-07-11 02:40:01.754402] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.781 [2024-07-11 02:40:01.754492] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:36.781 [2024-07-11 02:40:01.754519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.781 pt2 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:36.781 02:40:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.039 [2024-07-11 02:40:01.945779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.039 [2024-07-11 02:40:01.945856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.039 [2024-07-11 02:40:01.945888] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:37.039 [2024-07-11 02:40:01.945913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.039 [2024-07-11 02:40:01.946316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.039 [2024-07-11 02:40:01.946362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.039 [2024-07-11 02:40:01.946464] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:37.039 [2024-07-11 02:40:01.946497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.039 [2024-07-11 02:40:01.946643] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:16:37.039 [2024-07-11 02:40:01.946664] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:37.039 [2024-07-11 02:40:01.946748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:37.039 [2024-07-11 02:40:01.947117] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:37.039 [2024-07-11 02:40:01.947138] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:37.039 [2024-07-11 02:40:01.947239] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.039 pt3 00:16:37.039 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:37.039 02:40:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:37.039 02:40:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:37.039 02:40:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.040 02:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.040 02:40:01 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.298 02:40:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.298 "name": "raid_bdev1", 00:16:37.298 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:37.298 "strip_size_kb": 0, 00:16:37.298 "state": "online", 00:16:37.298 "raid_level": "raid1", 00:16:37.298 "superblock": true, 00:16:37.298 "num_base_bdevs": 3, 00:16:37.298 "num_base_bdevs_discovered": 3, 00:16:37.298 "num_base_bdevs_operational": 3, 00:16:37.298 "base_bdevs_list": [ 00:16:37.298 { 00:16:37.298 "name": "pt1", 00:16:37.298 "uuid": "8dd3fd4e-494f-5e1b-9866-1fb11d01773d", 00:16:37.298 "is_configured": true, 00:16:37.298 "data_offset": 2048, 00:16:37.298 "data_size": 63488 00:16:37.298 }, 00:16:37.298 { 00:16:37.298 "name": "pt2", 00:16:37.298 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:37.298 "is_configured": true, 00:16:37.298 "data_offset": 2048, 00:16:37.298 "data_size": 63488 00:16:37.298 }, 00:16:37.298 { 00:16:37.298 "name": "pt3", 00:16:37.298 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:37.298 "is_configured": true, 00:16:37.298 "data_offset": 2048, 00:16:37.298 "data_size": 63488 00:16:37.298 } 00:16:37.298 ] 00:16:37.298 }' 00:16:37.298 02:40:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.298 02:40:02 -- common/autotest_common.sh@10 -- # set +x 00:16:37.865 02:40:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.865 02:40:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:38.123 [2024-07-11 02:40:02.990214] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.123 02:40:02 -- bdev/bdev_raid.sh@430 -- # '[' 94c40156-2974-4972-a341-f11407f63cde '!=' 94c40156-2974-4972-a341-f11407f63cde ']' 00:16:38.123 02:40:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:38.123 02:40:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:38.123 02:40:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:38.123 02:40:02 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:38.123 [2024-07-11 02:40:03.214059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.382 "name": "raid_bdev1", 00:16:38.382 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:38.382 "strip_size_kb": 0, 00:16:38.382 "state": "online", 
00:16:38.382 "raid_level": "raid1", 00:16:38.382 "superblock": true, 00:16:38.382 "num_base_bdevs": 3, 00:16:38.382 "num_base_bdevs_discovered": 2, 00:16:38.382 "num_base_bdevs_operational": 2, 00:16:38.382 "base_bdevs_list": [ 00:16:38.382 { 00:16:38.382 "name": null, 00:16:38.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.382 "is_configured": false, 00:16:38.382 "data_offset": 2048, 00:16:38.382 "data_size": 63488 00:16:38.382 }, 00:16:38.382 { 00:16:38.382 "name": "pt2", 00:16:38.382 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:38.382 "is_configured": true, 00:16:38.382 "data_offset": 2048, 00:16:38.382 "data_size": 63488 00:16:38.382 }, 00:16:38.382 { 00:16:38.382 "name": "pt3", 00:16:38.382 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:38.382 "is_configured": true, 00:16:38.382 "data_offset": 2048, 00:16:38.382 "data_size": 63488 00:16:38.382 } 00:16:38.382 ] 00:16:38.382 }' 00:16:38.382 02:40:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.382 02:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:39.319 02:40:04 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:39.319 [2024-07-11 02:40:04.234324] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.319 [2024-07-11 02:40:04.234354] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.319 [2024-07-11 02:40:04.234426] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.319 [2024-07-11 02:40:04.234491] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.319 [2024-07-11 02:40:04.234502] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:39.319 02:40:04 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.319 02:40:04 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:39.577 02:40:04 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:39.577 02:40:04 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:39.577 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:39.577 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:39.577 02:40:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:39.835 02:40:04 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.093 [2024-07-11 02:40:05.062475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.093 [2024-07-11 02:40:05.062548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.093 [2024-07-11 
02:40:05.062583] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:40.093 [2024-07-11 02:40:05.062603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.093 [2024-07-11 02:40:05.064798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.093 [2024-07-11 02:40:05.064850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.093 [2024-07-11 02:40:05.064960] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:40.093 [2024-07-11 02:40:05.065012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.093 pt2 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.093 02:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.352 02:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.352 "name": "raid_bdev1", 00:16:40.352 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:40.352 "strip_size_kb": 0, 00:16:40.352 "state": "configuring", 00:16:40.352 "raid_level": "raid1", 00:16:40.352 "superblock": true, 00:16:40.352 "num_base_bdevs": 3, 00:16:40.352 "num_base_bdevs_discovered": 1, 00:16:40.352 "num_base_bdevs_operational": 2, 00:16:40.352 "base_bdevs_list": [ 00:16:40.352 { 00:16:40.352 "name": null, 00:16:40.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.352 "is_configured": false, 00:16:40.352 "data_offset": 2048, 00:16:40.352 "data_size": 63488 00:16:40.352 }, 00:16:40.352 { 00:16:40.352 "name": "pt2", 00:16:40.352 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:40.352 "is_configured": true, 00:16:40.352 "data_offset": 2048, 00:16:40.352 "data_size": 63488 00:16:40.352 }, 00:16:40.352 { 00:16:40.352 "name": null, 00:16:40.352 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:40.352 "is_configured": false, 00:16:40.352 "data_offset": 2048, 00:16:40.352 "data_size": 63488 00:16:40.352 } 00:16:40.352 ] 00:16:40.352 }' 00:16:40.352 02:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.352 02:40:05 -- common/autotest_common.sh@10 -- # set +x 00:16:40.919 02:40:05 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:16:40.919 02:40:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:40.919 02:40:05 -- bdev/bdev_raid.sh@462 -- # i=2 00:16:40.919 02:40:05 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.178 [2024-07-11 02:40:06.230786] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.178 [2024-07-11 02:40:06.230900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.178 [2024-07-11 02:40:06.230943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:41.178 [2024-07-11 02:40:06.230964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.178 [2024-07-11 02:40:06.231471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.178 [2024-07-11 02:40:06.231515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.178 [2024-07-11 02:40:06.231650] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:41.178 [2024-07-11 02:40:06.231680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.178 [2024-07-11 02:40:06.231797] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:16:41.178 [2024-07-11 02:40:06.231816] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:41.178 [2024-07-11 02:40:06.231892] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:41.178 [2024-07-11 02:40:06.232231] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:16:41.178 [2024-07-11 02:40:06.232254] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:16:41.178 [2024-07-11 02:40:06.232360] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.178 pt3 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.178 02:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.437 02:40:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.437 "name": "raid_bdev1", 00:16:41.437 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:41.437 "strip_size_kb": 0, 00:16:41.437 "state": "online", 00:16:41.437 "raid_level": "raid1", 00:16:41.437 "superblock": true, 00:16:41.437 "num_base_bdevs": 3, 00:16:41.437 "num_base_bdevs_discovered": 2, 00:16:41.437 "num_base_bdevs_operational": 2, 00:16:41.437 "base_bdevs_list": [ 00:16:41.437 { 00:16:41.437 "name": null, 00:16:41.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.437 "is_configured": false, 00:16:41.437 "data_offset": 2048, 00:16:41.437 "data_size": 63488 00:16:41.437 }, 00:16:41.437 { 00:16:41.437 "name": "pt2", 00:16:41.437 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:41.437 
"is_configured": true, 00:16:41.437 "data_offset": 2048, 00:16:41.437 "data_size": 63488 00:16:41.437 }, 00:16:41.437 { 00:16:41.437 "name": "pt3", 00:16:41.437 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:41.437 "is_configured": true, 00:16:41.437 "data_offset": 2048, 00:16:41.437 "data_size": 63488 00:16:41.437 } 00:16:41.437 ] 00:16:41.437 }' 00:16:41.437 02:40:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.437 02:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:42.373 02:40:07 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:16:42.373 02:40:07 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:42.373 [2024-07-11 02:40:07.378076] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.373 [2024-07-11 02:40:07.378129] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.373 [2024-07-11 02:40:07.378211] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.373 [2024-07-11 02:40:07.378276] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.373 [2024-07-11 02:40:07.378288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:16:42.373 02:40:07 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.373 02:40:07 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:16:42.631 02:40:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:16:42.631 02:40:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:16:42.631 02:40:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.890 [2024-07-11 02:40:07.778160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.890 [2024-07-11 02:40:07.778245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.890 [2024-07-11 02:40:07.778284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:42.890 [2024-07-11 02:40:07.778304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.890 [2024-07-11 02:40:07.780480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.890 [2024-07-11 02:40:07.780527] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.890 [2024-07-11 02:40:07.780633] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:42.890 [2024-07-11 02:40:07.780678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.890 pt1 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.890 02:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.155 02:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.155 "name": "raid_bdev1", 00:16:43.155 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:43.155 "strip_size_kb": 0, 00:16:43.155 "state": "configuring", 00:16:43.155 "raid_level": "raid1", 00:16:43.155 "superblock": true, 00:16:43.155 "num_base_bdevs": 3, 00:16:43.155 "num_base_bdevs_discovered": 1, 00:16:43.155 "num_base_bdevs_operational": 3, 00:16:43.155 "base_bdevs_list": [ 00:16:43.155 { 00:16:43.155 "name": "pt1", 00:16:43.155 "uuid": "8dd3fd4e-494f-5e1b-9866-1fb11d01773d", 00:16:43.155 "is_configured": true, 00:16:43.155 "data_offset": 2048, 00:16:43.155 "data_size": 63488 00:16:43.155 }, 00:16:43.155 { 00:16:43.155 "name": null, 00:16:43.155 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:43.155 "is_configured": false, 00:16:43.155 "data_offset": 2048, 00:16:43.155 "data_size": 63488 00:16:43.155 }, 00:16:43.155 { 00:16:43.155 "name": null, 00:16:43.155 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:43.155 "is_configured": false, 00:16:43.155 "data_offset": 2048, 00:16:43.155 "data_size": 63488 00:16:43.155 } 00:16:43.155 ] 00:16:43.155 }' 00:16:43.155 02:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.155 02:40:08 -- common/autotest_common.sh@10 -- # set +x 00:16:43.771 02:40:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:16:43.771 02:40:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:43.771 02:40:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:44.030 02:40:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:44.030 02:40:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:44.030 02:40:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:44.289 02:40:09 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:44.289 02:40:09 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:44.289 02:40:09 -- bdev/bdev_raid.sh@489 -- # i=2 00:16:44.289 02:40:09 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.548 [2024-07-11 02:40:09.430578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:44.548 [2024-07-11 02:40:09.430695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.548 [2024-07-11 02:40:09.430727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:44.548 [2024-07-11 02:40:09.430755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.548 [2024-07-11 02:40:09.431282] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.548 [2024-07-11 02:40:09.431321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.548 [2024-07-11 02:40:09.431469] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:44.548 
[2024-07-11 02:40:09.431484] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:44.548 [2024-07-11 02:40:09.431491] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.548 [2024-07-11 02:40:09.431526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:16:44.548 [2024-07-11 02:40:09.431589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.548 pt3 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.548 02:40:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.807 02:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.807 "name": "raid_bdev1", 00:16:44.807 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:44.807 "strip_size_kb": 0, 00:16:44.807 "state": "configuring", 00:16:44.807 "raid_level": "raid1", 00:16:44.807 "superblock": true, 00:16:44.807 "num_base_bdevs": 3, 00:16:44.807 "num_base_bdevs_discovered": 1, 00:16:44.807 "num_base_bdevs_operational": 2, 00:16:44.807 "base_bdevs_list": [ 00:16:44.807 { 00:16:44.807 "name": null, 00:16:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.808 "is_configured": false, 00:16:44.808 "data_offset": 2048, 00:16:44.808 "data_size": 63488 00:16:44.808 }, 00:16:44.808 { 00:16:44.808 "name": null, 00:16:44.808 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:44.808 "is_configured": false, 00:16:44.808 "data_offset": 2048, 00:16:44.808 "data_size": 63488 00:16:44.808 }, 00:16:44.808 { 00:16:44.808 "name": "pt3", 00:16:44.808 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:44.808 "is_configured": true, 00:16:44.808 "data_offset": 2048, 00:16:44.808 "data_size": 63488 00:16:44.808 } 00:16:44.808 ] 00:16:44.808 }' 00:16:44.808 02:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.808 02:40:09 -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 02:40:10 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:16:45.374 02:40:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:45.374 02:40:10 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.632 [2024-07-11 02:40:10.499272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.632 [2024-07-11 02:40:10.499400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.632 [2024-07-11 02:40:10.499468] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:45.632 [2024-07-11 02:40:10.499495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.632 [2024-07-11 02:40:10.500021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.632 [2024-07-11 02:40:10.500086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.632 [2024-07-11 02:40:10.500178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:45.632 [2024-07-11 02:40:10.500230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.632 [2024-07-11 02:40:10.500353] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:16:45.632 [2024-07-11 02:40:10.500368] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.632 [2024-07-11 02:40:10.500441] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:16:45.632 [2024-07-11 02:40:10.500776] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:16:45.632 [2024-07-11 02:40:10.500800] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:16:45.632 [2024-07-11 02:40:10.500908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.632 pt2 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.632 "name": "raid_bdev1", 00:16:45.632 "uuid": "94c40156-2974-4972-a341-f11407f63cde", 00:16:45.632 "strip_size_kb": 0, 00:16:45.632 "state": "online", 00:16:45.632 "raid_level": "raid1", 00:16:45.632 "superblock": true, 00:16:45.632 "num_base_bdevs": 3, 00:16:45.632 "num_base_bdevs_discovered": 2, 00:16:45.632 "num_base_bdevs_operational": 2, 00:16:45.632 "base_bdevs_list": [ 00:16:45.632 { 00:16:45.632 "name": null, 00:16:45.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.632 "is_configured": false, 00:16:45.632 "data_offset": 2048, 00:16:45.632 "data_size": 63488 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "name": "pt2", 00:16:45.632 "uuid": "32e7695f-3023-5ca3-91d7-944a840ccb33", 00:16:45.632 "is_configured": true, 00:16:45.632 "data_offset": 2048, 00:16:45.632 "data_size": 63488 00:16:45.632 
}, 00:16:45.632 { 00:16:45.632 "name": "pt3", 00:16:45.632 "uuid": "b670ed41-de58-510e-9bd5-7df33e418335", 00:16:45.632 "is_configured": true, 00:16:45.632 "data_offset": 2048, 00:16:45.632 "data_size": 63488 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }' 00:16:45.632 02:40:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.632 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:16:46.198 02:40:11 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:46.198 02:40:11 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:46.455 [2024-07-11 02:40:11.471639] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.455 02:40:11 -- bdev/bdev_raid.sh@506 -- # '[' 94c40156-2974-4972-a341-f11407f63cde '!=' 94c40156-2974-4972-a341-f11407f63cde ']' 00:16:46.455 02:40:11 -- bdev/bdev_raid.sh@511 -- # killprocess 130391 00:16:46.455 02:40:11 -- common/autotest_common.sh@926 -- # '[' -z 130391 ']' 00:16:46.455 02:40:11 -- common/autotest_common.sh@930 -- # kill -0 130391 00:16:46.455 02:40:11 -- common/autotest_common.sh@931 -- # uname 00:16:46.455 02:40:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.455 02:40:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130391 00:16:46.455 killing process with pid 130391 00:16:46.455 02:40:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:46.455 02:40:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:46.455 02:40:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130391' 00:16:46.455 02:40:11 -- common/autotest_common.sh@945 -- # kill 130391 00:16:46.455 02:40:11 -- common/autotest_common.sh@950 -- # wait 130391 00:16:46.456 [2024-07-11 02:40:11.502839] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.456 [2024-07-11 02:40:11.502910] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.456 [2024-07-11 02:40:11.503003] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.456 [2024-07-11 02:40:11.503024] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:16:46.456 [2024-07-11 02:40:11.532021] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.714 ************************************ 00:16:46.714 END TEST raid_superblock_test 00:16:46.714 ************************************ 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:46.714 00:16:46.714 real 0m18.248s 00:16:46.714 user 0m34.643s 00:16:46.714 sys 0m2.097s 00:16:46.714 02:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.714 02:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:46.714 02:40:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:46.714 02:40:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:46.714 02:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:46.714 ************************************ 00:16:46.714 START TEST raid_state_function_test 00:16:46.714 ************************************ 00:16:46.714 02:40:11 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=131021 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131021' 00:16:46.714 Process raid pid: 131021 00:16:46.714 02:40:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131021 /var/tmp/spdk-raid.sock 00:16:46.973 02:40:11 -- common/autotest_common.sh@819 -- # '[' -z 131021 ']' 00:16:46.973 02:40:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.973 02:40:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:46.973 02:40:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.973 02:40:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:46.973 02:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:46.973 [2024-07-11 02:40:11.843337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
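Each state-function test gets a fresh bdev_svc app on its own RPC socket, started with -L bdev_raid so the *DEBUG* lines in this log are emitted, and waitforlisten blocks the script until that socket answers. A simplified sketch of the polling waitforlisten performs, assuming the rpc_get_methods RPC is available (the real helper in autotest_common.sh also validates the PID and counts retries instead of using a wall-clock deadline):

    waitforlisten_sketch() {
        local sock=$1 deadline=$((SECONDS + 10))
        until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
            (( SECONDS >= deadline )) && return 1   # give up if the app never starts listening
            sleep 0.1
        done
    }
    waitforlisten_sketch /var/tmp/spdk-raid.sock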
00:16:46.973 [2024-07-11 02:40:11.844027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.973 [2024-07-11 02:40:11.981673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.973 [2024-07-11 02:40:12.039262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.231 [2024-07-11 02:40:12.090329] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.798 02:40:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:47.798 02:40:12 -- common/autotest_common.sh@852 -- # return 0 00:16:47.798 02:40:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:48.056 [2024-07-11 02:40:13.066120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.056 [2024-07-11 02:40:13.066228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.057 [2024-07-11 02:40:13.066243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.057 [2024-07-11 02:40:13.066261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.057 [2024-07-11 02:40:13.066269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.057 [2024-07-11 02:40:13.066306] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.057 [2024-07-11 02:40:13.066314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.057 [2024-07-11 02:40:13.066350] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.057 02:40:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.315 02:40:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.315 "name": "Existed_Raid", 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "strip_size_kb": 64, 00:16:48.315 "state": "configuring", 00:16:48.315 "raid_level": "raid0", 00:16:48.315 "superblock": false, 00:16:48.315 "num_base_bdevs": 4, 00:16:48.315 "num_base_bdevs_discovered": 0, 00:16:48.315 "num_base_bdevs_operational": 4, 00:16:48.315 "base_bdevs_list": [ 00:16:48.315 { 00:16:48.315 
"name": "BaseBdev1", 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "is_configured": false, 00:16:48.315 "data_offset": 0, 00:16:48.315 "data_size": 0 00:16:48.315 }, 00:16:48.315 { 00:16:48.315 "name": "BaseBdev2", 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "is_configured": false, 00:16:48.315 "data_offset": 0, 00:16:48.315 "data_size": 0 00:16:48.315 }, 00:16:48.315 { 00:16:48.315 "name": "BaseBdev3", 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "is_configured": false, 00:16:48.315 "data_offset": 0, 00:16:48.315 "data_size": 0 00:16:48.315 }, 00:16:48.315 { 00:16:48.315 "name": "BaseBdev4", 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "is_configured": false, 00:16:48.315 "data_offset": 0, 00:16:48.315 "data_size": 0 00:16:48.315 } 00:16:48.315 ] 00:16:48.315 }' 00:16:48.315 02:40:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.315 02:40:13 -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 02:40:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.249 [2024-07-11 02:40:14.214250] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.249 [2024-07-11 02:40:14.214296] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:49.249 02:40:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:49.507 [2024-07-11 02:40:14.446277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.507 [2024-07-11 02:40:14.446335] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.507 [2024-07-11 02:40:14.446362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.507 [2024-07-11 02:40:14.446385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.507 [2024-07-11 02:40:14.446393] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.507 [2024-07-11 02:40:14.446409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.507 [2024-07-11 02:40:14.446416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:49.507 [2024-07-11 02:40:14.446439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.507 02:40:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.765 [2024-07-11 02:40:14.697367] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.765 BaseBdev1 00:16:49.765 02:40:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:49.765 02:40:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:49.765 02:40:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.765 02:40:14 -- common/autotest_common.sh@889 -- # local i 00:16:49.765 02:40:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.765 02:40:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.765 02:40:14 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.024 02:40:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.024 [ 00:16:50.024 { 00:16:50.024 "name": "BaseBdev1", 00:16:50.024 "aliases": [ 00:16:50.024 "d2ad808f-1ec3-49c3-81fc-5af13948cc60" 00:16:50.024 ], 00:16:50.024 "product_name": "Malloc disk", 00:16:50.024 "block_size": 512, 00:16:50.024 "num_blocks": 65536, 00:16:50.024 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:50.024 "assigned_rate_limits": { 00:16:50.024 "rw_ios_per_sec": 0, 00:16:50.024 "rw_mbytes_per_sec": 0, 00:16:50.024 "r_mbytes_per_sec": 0, 00:16:50.024 "w_mbytes_per_sec": 0 00:16:50.024 }, 00:16:50.024 "claimed": true, 00:16:50.024 "claim_type": "exclusive_write", 00:16:50.024 "zoned": false, 00:16:50.024 "supported_io_types": { 00:16:50.024 "read": true, 00:16:50.024 "write": true, 00:16:50.024 "unmap": true, 00:16:50.024 "write_zeroes": true, 00:16:50.024 "flush": true, 00:16:50.024 "reset": true, 00:16:50.024 "compare": false, 00:16:50.024 "compare_and_write": false, 00:16:50.024 "abort": true, 00:16:50.024 "nvme_admin": false, 00:16:50.024 "nvme_io": false 00:16:50.024 }, 00:16:50.024 "memory_domains": [ 00:16:50.024 { 00:16:50.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.024 "dma_device_type": 2 00:16:50.024 } 00:16:50.024 ], 00:16:50.024 "driver_specific": {} 00:16:50.024 } 00:16:50.024 ] 00:16:50.024 02:40:15 -- common/autotest_common.sh@895 -- # return 0 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.024 02:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.282 02:40:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.282 "name": "Existed_Raid", 00:16:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.282 "strip_size_kb": 64, 00:16:50.282 "state": "configuring", 00:16:50.282 "raid_level": "raid0", 00:16:50.282 "superblock": false, 00:16:50.282 "num_base_bdevs": 4, 00:16:50.282 "num_base_bdevs_discovered": 1, 00:16:50.282 "num_base_bdevs_operational": 4, 00:16:50.282 "base_bdevs_list": [ 00:16:50.282 { 00:16:50.282 "name": "BaseBdev1", 00:16:50.282 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:50.282 "is_configured": true, 00:16:50.282 "data_offset": 0, 00:16:50.282 "data_size": 65536 00:16:50.282 }, 00:16:50.282 { 00:16:50.282 "name": "BaseBdev2", 00:16:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.282 "is_configured": false, 00:16:50.282 "data_offset": 0, 00:16:50.282 "data_size": 0 00:16:50.282 }, 
00:16:50.282 { 00:16:50.282 "name": "BaseBdev3", 00:16:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.282 "is_configured": false, 00:16:50.282 "data_offset": 0, 00:16:50.282 "data_size": 0 00:16:50.282 }, 00:16:50.282 { 00:16:50.282 "name": "BaseBdev4", 00:16:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.282 "is_configured": false, 00:16:50.282 "data_offset": 0, 00:16:50.282 "data_size": 0 00:16:50.282 } 00:16:50.282 ] 00:16:50.282 }' 00:16:50.282 02:40:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.282 02:40:15 -- common/autotest_common.sh@10 -- # set +x 00:16:51.217 02:40:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.217 [2024-07-11 02:40:16.137626] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.217 [2024-07-11 02:40:16.137693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:16:51.217 02:40:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:51.217 02:40:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:51.475 [2024-07-11 02:40:16.389750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.475 [2024-07-11 02:40:16.391493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.475 [2024-07-11 02:40:16.391564] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.475 [2024-07-11 02:40:16.391593] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.475 [2024-07-11 02:40:16.391616] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.475 [2024-07-11 02:40:16.391624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.475 [2024-07-11 02:40:16.391638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.475 02:40:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.734 02:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.734 "name": "Existed_Raid", 00:16:51.734 
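With BaseBdev2 claimed, verify_raid_bdev_state runs again and asserts on the fields visible in these JSON dumps. Stripped of its xtrace plumbing, the check reduces to roughly the following (the jq filter is the one traced in the log; the real helper also compares strip_size_kb and num_base_bdevs_operational):

    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
              bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$tmp") == configuring ]] || exit 1
    [[ $(jq -r '.raid_level' <<< "$tmp") == raid0 ]] || exit 1
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") -eq 2 ]] || exit 1

Until all four base bdevs exist, the array must stay in configuring with num_base_bdevs_operational pinned at 4.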
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.734 "strip_size_kb": 64, 00:16:51.734 "state": "configuring", 00:16:51.734 "raid_level": "raid0", 00:16:51.734 "superblock": false, 00:16:51.734 "num_base_bdevs": 4, 00:16:51.734 "num_base_bdevs_discovered": 1, 00:16:51.734 "num_base_bdevs_operational": 4, 00:16:51.734 "base_bdevs_list": [ 00:16:51.734 { 00:16:51.734 "name": "BaseBdev1", 00:16:51.734 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:51.734 "is_configured": true, 00:16:51.734 "data_offset": 0, 00:16:51.734 "data_size": 65536 00:16:51.734 }, 00:16:51.734 { 00:16:51.734 "name": "BaseBdev2", 00:16:51.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.734 "is_configured": false, 00:16:51.734 "data_offset": 0, 00:16:51.734 "data_size": 0 00:16:51.734 }, 00:16:51.734 { 00:16:51.734 "name": "BaseBdev3", 00:16:51.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.734 "is_configured": false, 00:16:51.734 "data_offset": 0, 00:16:51.734 "data_size": 0 00:16:51.734 }, 00:16:51.734 { 00:16:51.734 "name": "BaseBdev4", 00:16:51.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.734 "is_configured": false, 00:16:51.734 "data_offset": 0, 00:16:51.734 "data_size": 0 00:16:51.734 } 00:16:51.734 ] 00:16:51.734 }' 00:16:51.734 02:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.734 02:40:16 -- common/autotest_common.sh@10 -- # set +x 00:16:52.300 02:40:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.558 [2024-07-11 02:40:17.497626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.558 BaseBdev2 00:16:52.558 02:40:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:52.558 02:40:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:52.558 02:40:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:52.558 02:40:17 -- common/autotest_common.sh@889 -- # local i 00:16:52.558 02:40:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:52.558 02:40:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:52.558 02:40:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.816 02:40:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.816 [ 00:16:52.816 { 00:16:52.816 "name": "BaseBdev2", 00:16:52.816 "aliases": [ 00:16:52.816 "dcf03ffe-d382-4398-831b-f4e736b26aa6" 00:16:52.816 ], 00:16:52.816 "product_name": "Malloc disk", 00:16:52.816 "block_size": 512, 00:16:52.816 "num_blocks": 65536, 00:16:52.816 "uuid": "dcf03ffe-d382-4398-831b-f4e736b26aa6", 00:16:52.816 "assigned_rate_limits": { 00:16:52.816 "rw_ios_per_sec": 0, 00:16:52.816 "rw_mbytes_per_sec": 0, 00:16:52.816 "r_mbytes_per_sec": 0, 00:16:52.816 "w_mbytes_per_sec": 0 00:16:52.816 }, 00:16:52.816 "claimed": true, 00:16:52.816 "claim_type": "exclusive_write", 00:16:52.816 "zoned": false, 00:16:52.816 "supported_io_types": { 00:16:52.816 "read": true, 00:16:52.816 "write": true, 00:16:52.816 "unmap": true, 00:16:52.816 "write_zeroes": true, 00:16:52.816 "flush": true, 00:16:52.816 "reset": true, 00:16:52.816 "compare": false, 00:16:52.816 "compare_and_write": false, 00:16:52.816 "abort": true, 00:16:52.816 "nvme_admin": false, 00:16:52.816 "nvme_io": false 00:16:52.816 }, 00:16:52.816 "memory_domains": [ 
00:16:52.816 { 00:16:52.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.816 "dma_device_type": 2 00:16:52.816 } 00:16:52.816 ], 00:16:52.816 "driver_specific": {} 00:16:52.816 } 00:16:52.816 ] 00:16:52.816 02:40:17 -- common/autotest_common.sh@895 -- # return 0 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.816 02:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.075 02:40:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.075 "name": "Existed_Raid", 00:16:53.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.075 "strip_size_kb": 64, 00:16:53.075 "state": "configuring", 00:16:53.075 "raid_level": "raid0", 00:16:53.075 "superblock": false, 00:16:53.075 "num_base_bdevs": 4, 00:16:53.075 "num_base_bdevs_discovered": 2, 00:16:53.075 "num_base_bdevs_operational": 4, 00:16:53.075 "base_bdevs_list": [ 00:16:53.075 { 00:16:53.075 "name": "BaseBdev1", 00:16:53.075 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:53.075 "is_configured": true, 00:16:53.075 "data_offset": 0, 00:16:53.075 "data_size": 65536 00:16:53.075 }, 00:16:53.075 { 00:16:53.075 "name": "BaseBdev2", 00:16:53.075 "uuid": "dcf03ffe-d382-4398-831b-f4e736b26aa6", 00:16:53.075 "is_configured": true, 00:16:53.075 "data_offset": 0, 00:16:53.075 "data_size": 65536 00:16:53.075 }, 00:16:53.075 { 00:16:53.075 "name": "BaseBdev3", 00:16:53.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.075 "is_configured": false, 00:16:53.075 "data_offset": 0, 00:16:53.075 "data_size": 0 00:16:53.075 }, 00:16:53.075 { 00:16:53.075 "name": "BaseBdev4", 00:16:53.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.075 "is_configured": false, 00:16:53.075 "data_offset": 0, 00:16:53.075 "data_size": 0 00:16:53.075 } 00:16:53.075 ] 00:16:53.075 }' 00:16:53.075 02:40:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.075 02:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:54.009 02:40:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.009 [2024-07-11 02:40:18.975312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.009 BaseBdev3 00:16:54.009 02:40:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:54.009 02:40:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:54.009 02:40:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:54.009 
02:40:18 -- common/autotest_common.sh@889 -- # local i 00:16:54.009 02:40:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:54.009 02:40:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:54.009 02:40:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.267 02:40:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.525 [ 00:16:54.525 { 00:16:54.525 "name": "BaseBdev3", 00:16:54.525 "aliases": [ 00:16:54.525 "f2819bb5-56b2-41d4-9dc0-61056c7cb6eb" 00:16:54.525 ], 00:16:54.525 "product_name": "Malloc disk", 00:16:54.525 "block_size": 512, 00:16:54.525 "num_blocks": 65536, 00:16:54.525 "uuid": "f2819bb5-56b2-41d4-9dc0-61056c7cb6eb", 00:16:54.525 "assigned_rate_limits": { 00:16:54.525 "rw_ios_per_sec": 0, 00:16:54.525 "rw_mbytes_per_sec": 0, 00:16:54.525 "r_mbytes_per_sec": 0, 00:16:54.525 "w_mbytes_per_sec": 0 00:16:54.525 }, 00:16:54.525 "claimed": true, 00:16:54.525 "claim_type": "exclusive_write", 00:16:54.525 "zoned": false, 00:16:54.525 "supported_io_types": { 00:16:54.525 "read": true, 00:16:54.525 "write": true, 00:16:54.525 "unmap": true, 00:16:54.525 "write_zeroes": true, 00:16:54.525 "flush": true, 00:16:54.525 "reset": true, 00:16:54.525 "compare": false, 00:16:54.525 "compare_and_write": false, 00:16:54.525 "abort": true, 00:16:54.525 "nvme_admin": false, 00:16:54.525 "nvme_io": false 00:16:54.525 }, 00:16:54.525 "memory_domains": [ 00:16:54.525 { 00:16:54.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.525 "dma_device_type": 2 00:16:54.525 } 00:16:54.525 ], 00:16:54.525 "driver_specific": {} 00:16:54.525 } 00:16:54.525 ] 00:16:54.525 02:40:19 -- common/autotest_common.sh@895 -- # return 0 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.525 02:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.783 02:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.783 "name": "Existed_Raid", 00:16:54.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.783 "strip_size_kb": 64, 00:16:54.783 "state": "configuring", 00:16:54.783 "raid_level": "raid0", 00:16:54.783 "superblock": false, 00:16:54.783 "num_base_bdevs": 4, 00:16:54.783 "num_base_bdevs_discovered": 3, 00:16:54.783 "num_base_bdevs_operational": 4, 00:16:54.783 "base_bdevs_list": [ 00:16:54.783 { 00:16:54.783 "name": 
"BaseBdev1", 00:16:54.783 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:54.783 "is_configured": true, 00:16:54.783 "data_offset": 0, 00:16:54.783 "data_size": 65536 00:16:54.783 }, 00:16:54.783 { 00:16:54.783 "name": "BaseBdev2", 00:16:54.783 "uuid": "dcf03ffe-d382-4398-831b-f4e736b26aa6", 00:16:54.783 "is_configured": true, 00:16:54.783 "data_offset": 0, 00:16:54.783 "data_size": 65536 00:16:54.783 }, 00:16:54.783 { 00:16:54.783 "name": "BaseBdev3", 00:16:54.783 "uuid": "f2819bb5-56b2-41d4-9dc0-61056c7cb6eb", 00:16:54.783 "is_configured": true, 00:16:54.783 "data_offset": 0, 00:16:54.783 "data_size": 65536 00:16:54.783 }, 00:16:54.783 { 00:16:54.783 "name": "BaseBdev4", 00:16:54.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.783 "is_configured": false, 00:16:54.783 "data_offset": 0, 00:16:54.783 "data_size": 0 00:16:54.783 } 00:16:54.783 ] 00:16:54.783 }' 00:16:54.783 02:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.783 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:16:55.355 02:40:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:55.632 [2024-07-11 02:40:20.598353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.632 [2024-07-11 02:40:20.598398] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:16:55.632 [2024-07-11 02:40:20.598407] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:55.632 [2024-07-11 02:40:20.598568] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:55.632 [2024-07-11 02:40:20.599003] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:16:55.632 [2024-07-11 02:40:20.599017] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:16:55.632 BaseBdev4 00:16:55.632 [2024-07-11 02:40:20.599273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.632 02:40:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:55.632 02:40:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:55.632 02:40:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:55.632 02:40:20 -- common/autotest_common.sh@889 -- # local i 00:16:55.632 02:40:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:55.632 02:40:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:55.632 02:40:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.891 02:40:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:56.149 [ 00:16:56.149 { 00:16:56.149 "name": "BaseBdev4", 00:16:56.149 "aliases": [ 00:16:56.149 "71598320-c705-4c26-9ead-b6f7aa47fea6" 00:16:56.149 ], 00:16:56.149 "product_name": "Malloc disk", 00:16:56.149 "block_size": 512, 00:16:56.149 "num_blocks": 65536, 00:16:56.149 "uuid": "71598320-c705-4c26-9ead-b6f7aa47fea6", 00:16:56.149 "assigned_rate_limits": { 00:16:56.149 "rw_ios_per_sec": 0, 00:16:56.149 "rw_mbytes_per_sec": 0, 00:16:56.149 "r_mbytes_per_sec": 0, 00:16:56.149 "w_mbytes_per_sec": 0 00:16:56.149 }, 00:16:56.149 "claimed": true, 00:16:56.149 "claim_type": "exclusive_write", 00:16:56.149 "zoned": false, 00:16:56.149 
"supported_io_types": { 00:16:56.149 "read": true, 00:16:56.149 "write": true, 00:16:56.149 "unmap": true, 00:16:56.149 "write_zeroes": true, 00:16:56.149 "flush": true, 00:16:56.149 "reset": true, 00:16:56.149 "compare": false, 00:16:56.149 "compare_and_write": false, 00:16:56.149 "abort": true, 00:16:56.149 "nvme_admin": false, 00:16:56.149 "nvme_io": false 00:16:56.149 }, 00:16:56.149 "memory_domains": [ 00:16:56.149 { 00:16:56.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.149 "dma_device_type": 2 00:16:56.149 } 00:16:56.149 ], 00:16:56.149 "driver_specific": {} 00:16:56.149 } 00:16:56.149 ] 00:16:56.149 02:40:21 -- common/autotest_common.sh@895 -- # return 0 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.149 02:40:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.407 02:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.407 "name": "Existed_Raid", 00:16:56.407 "uuid": "0b926bb2-e834-4af9-88f9-f30826caf99a", 00:16:56.407 "strip_size_kb": 64, 00:16:56.407 "state": "online", 00:16:56.407 "raid_level": "raid0", 00:16:56.407 "superblock": false, 00:16:56.407 "num_base_bdevs": 4, 00:16:56.407 "num_base_bdevs_discovered": 4, 00:16:56.407 "num_base_bdevs_operational": 4, 00:16:56.407 "base_bdevs_list": [ 00:16:56.407 { 00:16:56.407 "name": "BaseBdev1", 00:16:56.407 "uuid": "d2ad808f-1ec3-49c3-81fc-5af13948cc60", 00:16:56.407 "is_configured": true, 00:16:56.407 "data_offset": 0, 00:16:56.407 "data_size": 65536 00:16:56.407 }, 00:16:56.407 { 00:16:56.407 "name": "BaseBdev2", 00:16:56.407 "uuid": "dcf03ffe-d382-4398-831b-f4e736b26aa6", 00:16:56.407 "is_configured": true, 00:16:56.407 "data_offset": 0, 00:16:56.407 "data_size": 65536 00:16:56.407 }, 00:16:56.407 { 00:16:56.407 "name": "BaseBdev3", 00:16:56.407 "uuid": "f2819bb5-56b2-41d4-9dc0-61056c7cb6eb", 00:16:56.407 "is_configured": true, 00:16:56.407 "data_offset": 0, 00:16:56.407 "data_size": 65536 00:16:56.407 }, 00:16:56.407 { 00:16:56.407 "name": "BaseBdev4", 00:16:56.407 "uuid": "71598320-c705-4c26-9ead-b6f7aa47fea6", 00:16:56.408 "is_configured": true, 00:16:56.408 "data_offset": 0, 00:16:56.408 "data_size": 65536 00:16:56.408 } 00:16:56.408 ] 00:16:56.408 }' 00:16:56.408 02:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.408 02:40:21 -- common/autotest_common.sh@10 -- # set +x 00:16:56.972 02:40:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:57.231 
[2024-07-11 02:40:22.226895] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.231 [2024-07-11 02:40:22.226926] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.231 [2024-07-11 02:40:22.227007] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.231 02:40:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:57.231 02:40:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.232 02:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.489 02:40:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.489 "name": "Existed_Raid", 00:16:57.489 "uuid": "0b926bb2-e834-4af9-88f9-f30826caf99a", 00:16:57.489 "strip_size_kb": 64, 00:16:57.489 "state": "offline", 00:16:57.489 "raid_level": "raid0", 00:16:57.489 "superblock": false, 00:16:57.489 "num_base_bdevs": 4, 00:16:57.489 "num_base_bdevs_discovered": 3, 00:16:57.489 "num_base_bdevs_operational": 3, 00:16:57.489 "base_bdevs_list": [ 00:16:57.489 { 00:16:57.489 "name": null, 00:16:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.489 "is_configured": false, 00:16:57.489 "data_offset": 0, 00:16:57.489 "data_size": 65536 00:16:57.489 }, 00:16:57.489 { 00:16:57.489 "name": "BaseBdev2", 00:16:57.489 "uuid": "dcf03ffe-d382-4398-831b-f4e736b26aa6", 00:16:57.489 "is_configured": true, 00:16:57.489 "data_offset": 0, 00:16:57.489 "data_size": 65536 00:16:57.489 }, 00:16:57.489 { 00:16:57.489 "name": "BaseBdev3", 00:16:57.489 "uuid": "f2819bb5-56b2-41d4-9dc0-61056c7cb6eb", 00:16:57.489 "is_configured": true, 00:16:57.489 "data_offset": 0, 00:16:57.489 "data_size": 65536 00:16:57.489 }, 00:16:57.489 { 00:16:57.489 "name": "BaseBdev4", 00:16:57.489 "uuid": "71598320-c705-4c26-9ead-b6f7aa47fea6", 00:16:57.489 "is_configured": true, 00:16:57.489 "data_offset": 0, 00:16:57.489 "data_size": 65536 00:16:57.489 } 00:16:57.489 ] 00:16:57.489 }' 00:16:57.489 02:40:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.489 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.053 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:58.053 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.053 02:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.053 
02:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:58.311 02:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:58.311 02:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.311 02:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:58.311 [2024-07-11 02:40:23.388857] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.569 02:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:58.827 [2024-07-11 02:40:23.850649] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:58.827 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:58.827 02:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.827 02:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.827 02:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:59.084 02:40:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:59.084 02:40:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.084 02:40:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:59.342 [2024-07-11 02:40:24.332715] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:59.342 [2024-07-11 02:40:24.332795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:16:59.342 02:40:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:59.342 02:40:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:59.342 02:40:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.342 02:40:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.600 02:40:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:59.600 02:40:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:59.600 02:40:24 -- bdev/bdev_raid.sh@287 -- # killprocess 131021 00:16:59.600 02:40:24 -- common/autotest_common.sh@926 -- # '[' -z 131021 ']' 00:16:59.600 02:40:24 -- common/autotest_common.sh@930 -- # kill -0 131021 00:16:59.600 02:40:24 -- common/autotest_common.sh@931 -- # uname 00:16:59.600 02:40:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.600 02:40:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131021 00:16:59.600 killing process with pid 131021 00:16:59.600 02:40:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:59.600 02:40:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.600 02:40:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131021' 00:16:59.600 02:40:24 -- common/autotest_common.sh@945 -- # kill 131021 
00:16:59.600 02:40:24 -- common/autotest_common.sh@950 -- # wait 131021 00:16:59.600 [2024-07-11 02:40:24.560269] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.600 [2024-07-11 02:40:24.560372] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.859 ************************************ 00:16:59.859 END TEST raid_state_function_test 00:16:59.859 ************************************ 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:59.859 00:16:59.859 real 0m13.060s 00:16:59.859 user 0m24.150s 00:16:59.859 sys 0m1.709s 00:16:59.859 02:40:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.859 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:59.859 02:40:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:59.859 02:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:59.859 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:16:59.859 ************************************ 00:16:59.859 START TEST raid_state_function_test_sb 00:16:59.859 ************************************ 00:16:59.859 02:40:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=131467 
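END TEST closes the plain run, and run_test immediately relaunches the same function as raid_state_function_test_sb with superblock=true. Functionally the only change is that superblock_create_arg becomes -s, so the array create call issued a few entries below writes a superblock to each base bdev:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

Everything else (strip size 64, raid0, four malloc base bdevs, a fresh PID 131467 on the same socket path) mirrors the run that just finished.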
00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131467' 00:16:59.859 Process raid pid: 131467 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131467 /var/tmp/spdk-raid.sock 00:16:59.859 02:40:24 -- common/autotest_common.sh@819 -- # '[' -z 131467 ']' 00:16:59.859 02:40:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:59.859 02:40:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.859 02:40:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:59.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:59.859 02:40:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.859 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:16:59.859 02:40:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:00.118 [2024-07-11 02:40:24.967746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:00.118 [2024-07-11 02:40:24.968096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.118 [2024-07-11 02:40:25.119604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.118 [2024-07-11 02:40:25.186798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.376 [2024-07-11 02:40:25.242974] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.942 02:40:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.943 02:40:25 -- common/autotest_common.sh@852 -- # return 0 00:17:00.943 02:40:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:01.201 [2024-07-11 02:40:26.045301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.201 [2024-07-11 02:40:26.045942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.201 [2024-07-11 02:40:26.045993] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.201 [2024-07-11 02:40:26.046199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.201 [2024-07-11 02:40:26.046227] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.201 [2024-07-11 02:40:26.046400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.201 [2024-07-11 02:40:26.046427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:01.201 [2024-07-11 02:40:26.046553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:01.201 02:40:26 -- 
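The burst of "Currently unable to find bdev" notices that follows is expected rather than an error: bdev_raid_create accepts base bdev names that do not exist yet and parks the array in configuring until all of them are discovered. A hypothetical one-liner for watching that progression between the malloc creates (the jq path matches the dumps in this log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'

For this run it should print configuring 0/4 through configuring 3/4 and then online 4/4 once BaseBdev4 is claimed, exactly as the non-superblock pass above progressed.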
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.201 02:40:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.459 02:40:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.459 "name": "Existed_Raid", 00:17:01.459 "uuid": "0d6118bf-7a95-49c3-82ed-51720d6bb8e0", 00:17:01.459 "strip_size_kb": 64, 00:17:01.459 "state": "configuring", 00:17:01.459 "raid_level": "raid0", 00:17:01.459 "superblock": true, 00:17:01.459 "num_base_bdevs": 4, 00:17:01.459 "num_base_bdevs_discovered": 0, 00:17:01.459 "num_base_bdevs_operational": 4, 00:17:01.459 "base_bdevs_list": [ 00:17:01.459 { 00:17:01.459 "name": "BaseBdev1", 00:17:01.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.459 "is_configured": false, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 0 00:17:01.459 }, 00:17:01.459 { 00:17:01.459 "name": "BaseBdev2", 00:17:01.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.459 "is_configured": false, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 0 00:17:01.459 }, 00:17:01.459 { 00:17:01.459 "name": "BaseBdev3", 00:17:01.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.460 "is_configured": false, 00:17:01.460 "data_offset": 0, 00:17:01.460 "data_size": 0 00:17:01.460 }, 00:17:01.460 { 00:17:01.460 "name": "BaseBdev4", 00:17:01.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.460 "is_configured": false, 00:17:01.460 "data_offset": 0, 00:17:01.460 "data_size": 0 00:17:01.460 } 00:17:01.460 ] 00:17:01.460 }' 00:17:01.460 02:40:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.460 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:17:02.027 02:40:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:02.285 [2024-07-11 02:40:27.121315] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.286 [2024-07-11 02:40:27.121374] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:02.286 02:40:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:02.286 [2024-07-11 02:40:27.341410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.286 [2024-07-11 02:40:27.341985] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.286 [2024-07-11 02:40:27.342020] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.286 [2024-07-11 02:40:27.342252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.286 [2024-07-11 02:40:27.342296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.286 [2024-07-11 02:40:27.342543] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.286 [2024-07-11 02:40:27.342571] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:02.286 [2024-07-11 02:40:27.342819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:02.286 02:40:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:02.544 [2024-07-11 02:40:27.619621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.544 BaseBdev1 00:17:02.544 02:40:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:02.544 02:40:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:02.544 02:40:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:02.544 02:40:27 -- common/autotest_common.sh@889 -- # local i 00:17:02.544 02:40:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:02.544 02:40:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:02.544 02:40:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.804 02:40:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.063 [ 00:17:03.063 { 00:17:03.063 "name": "BaseBdev1", 00:17:03.063 "aliases": [ 00:17:03.063 "cf5811a6-b794-4355-99fc-4d6d1ab15e3d" 00:17:03.063 ], 00:17:03.063 "product_name": "Malloc disk", 00:17:03.063 "block_size": 512, 00:17:03.063 "num_blocks": 65536, 00:17:03.063 "uuid": "cf5811a6-b794-4355-99fc-4d6d1ab15e3d", 00:17:03.063 "assigned_rate_limits": { 00:17:03.063 "rw_ios_per_sec": 0, 00:17:03.063 "rw_mbytes_per_sec": 0, 00:17:03.063 "r_mbytes_per_sec": 0, 00:17:03.063 "w_mbytes_per_sec": 0 00:17:03.063 }, 00:17:03.063 "claimed": true, 00:17:03.063 "claim_type": "exclusive_write", 00:17:03.063 "zoned": false, 00:17:03.063 "supported_io_types": { 00:17:03.063 "read": true, 00:17:03.063 "write": true, 00:17:03.063 "unmap": true, 00:17:03.063 "write_zeroes": true, 00:17:03.063 "flush": true, 00:17:03.063 "reset": true, 00:17:03.063 "compare": false, 00:17:03.063 "compare_and_write": false, 00:17:03.063 "abort": true, 00:17:03.063 "nvme_admin": false, 00:17:03.063 "nvme_io": false 00:17:03.063 }, 00:17:03.063 "memory_domains": [ 00:17:03.063 { 00:17:03.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.063 "dma_device_type": 2 00:17:03.063 } 00:17:03.063 ], 00:17:03.063 "driver_specific": {} 00:17:03.063 } 00:17:03.063 ] 00:17:03.063 02:40:28 -- common/autotest_common.sh@895 -- # return 0 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 
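[Annotation] verify_raid_bdev_state asserts on the JSON that bdev_raid_get_bdevs returns: it dumps every raid bdev over RPC and narrows to the array by name with jq, as the next two trace lines show. The pipeline, reproduced for readability:

    # The state check is a plain RPC dump filtered by name; the harness then
    # compares .state, .raid_level, .strip_size_kb and the base_bdevs_list entries.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'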
00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.063 02:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.322 02:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.322 "name": "Existed_Raid", 00:17:03.322 "uuid": "2bf00f65-38cf-4a44-aef0-584ef982120a", 00:17:03.322 "strip_size_kb": 64, 00:17:03.322 "state": "configuring", 00:17:03.322 "raid_level": "raid0", 00:17:03.322 "superblock": true, 00:17:03.322 "num_base_bdevs": 4, 00:17:03.322 "num_base_bdevs_discovered": 1, 00:17:03.322 "num_base_bdevs_operational": 4, 00:17:03.322 "base_bdevs_list": [ 00:17:03.322 { 00:17:03.322 "name": "BaseBdev1", 00:17:03.322 "uuid": "cf5811a6-b794-4355-99fc-4d6d1ab15e3d", 00:17:03.322 "is_configured": true, 00:17:03.322 "data_offset": 2048, 00:17:03.322 "data_size": 63488 00:17:03.322 }, 00:17:03.322 { 00:17:03.322 "name": "BaseBdev2", 00:17:03.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.322 "is_configured": false, 00:17:03.322 "data_offset": 0, 00:17:03.322 "data_size": 0 00:17:03.322 }, 00:17:03.322 { 00:17:03.322 "name": "BaseBdev3", 00:17:03.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.322 "is_configured": false, 00:17:03.322 "data_offset": 0, 00:17:03.322 "data_size": 0 00:17:03.322 }, 00:17:03.322 { 00:17:03.322 "name": "BaseBdev4", 00:17:03.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.322 "is_configured": false, 00:17:03.322 "data_offset": 0, 00:17:03.322 "data_size": 0 00:17:03.322 } 00:17:03.322 ] 00:17:03.322 }' 00:17:03.322 02:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.322 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:17:03.891 02:40:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.149 [2024-07-11 02:40:29.199935] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.149 [2024-07-11 02:40:29.200000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:17:04.149 02:40:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:04.149 02:40:29 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:04.408 02:40:29 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:04.667 BaseBdev1 00:17:04.667 02:40:29 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:04.667 02:40:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:04.667 02:40:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:04.667 02:40:29 -- common/autotest_common.sh@889 -- # local i 00:17:04.667 02:40:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:04.667 02:40:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:04.667 02:40:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.927 02:40:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.186 [ 00:17:05.186 { 00:17:05.186 "name": "BaseBdev1", 00:17:05.186 "aliases": [ 00:17:05.186 "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b" 00:17:05.186 ], 00:17:05.186 
"product_name": "Malloc disk", 00:17:05.186 "block_size": 512, 00:17:05.186 "num_blocks": 65536, 00:17:05.186 "uuid": "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b", 00:17:05.186 "assigned_rate_limits": { 00:17:05.186 "rw_ios_per_sec": 0, 00:17:05.186 "rw_mbytes_per_sec": 0, 00:17:05.186 "r_mbytes_per_sec": 0, 00:17:05.186 "w_mbytes_per_sec": 0 00:17:05.186 }, 00:17:05.186 "claimed": false, 00:17:05.186 "zoned": false, 00:17:05.186 "supported_io_types": { 00:17:05.186 "read": true, 00:17:05.186 "write": true, 00:17:05.186 "unmap": true, 00:17:05.186 "write_zeroes": true, 00:17:05.186 "flush": true, 00:17:05.186 "reset": true, 00:17:05.186 "compare": false, 00:17:05.186 "compare_and_write": false, 00:17:05.186 "abort": true, 00:17:05.186 "nvme_admin": false, 00:17:05.186 "nvme_io": false 00:17:05.186 }, 00:17:05.186 "memory_domains": [ 00:17:05.186 { 00:17:05.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.186 "dma_device_type": 2 00:17:05.186 } 00:17:05.186 ], 00:17:05.186 "driver_specific": {} 00:17:05.186 } 00:17:05.186 ] 00:17:05.186 02:40:30 -- common/autotest_common.sh@895 -- # return 0 00:17:05.186 02:40:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:05.445 [2024-07-11 02:40:30.315851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.445 [2024-07-11 02:40:30.317459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.445 [2024-07-11 02:40:30.317869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.445 [2024-07-11 02:40:30.317899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.445 [2024-07-11 02:40:30.318024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.445 [2024-07-11 02:40:30.318056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.445 [2024-07-11 02:40:30.318167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.445 02:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.703 02:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.703 "name": "Existed_Raid", 00:17:05.703 
"uuid": "7529de35-1541-4b83-9c7e-ab4689f20af2", 00:17:05.703 "strip_size_kb": 64, 00:17:05.703 "state": "configuring", 00:17:05.703 "raid_level": "raid0", 00:17:05.703 "superblock": true, 00:17:05.703 "num_base_bdevs": 4, 00:17:05.703 "num_base_bdevs_discovered": 1, 00:17:05.703 "num_base_bdevs_operational": 4, 00:17:05.703 "base_bdevs_list": [ 00:17:05.703 { 00:17:05.703 "name": "BaseBdev1", 00:17:05.703 "uuid": "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b", 00:17:05.703 "is_configured": true, 00:17:05.703 "data_offset": 2048, 00:17:05.703 "data_size": 63488 00:17:05.703 }, 00:17:05.703 { 00:17:05.703 "name": "BaseBdev2", 00:17:05.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.703 "is_configured": false, 00:17:05.703 "data_offset": 0, 00:17:05.704 "data_size": 0 00:17:05.704 }, 00:17:05.704 { 00:17:05.704 "name": "BaseBdev3", 00:17:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.704 "is_configured": false, 00:17:05.704 "data_offset": 0, 00:17:05.704 "data_size": 0 00:17:05.704 }, 00:17:05.704 { 00:17:05.704 "name": "BaseBdev4", 00:17:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.704 "is_configured": false, 00:17:05.704 "data_offset": 0, 00:17:05.704 "data_size": 0 00:17:05.704 } 00:17:05.704 ] 00:17:05.704 }' 00:17:05.704 02:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.704 02:40:30 -- common/autotest_common.sh@10 -- # set +x 00:17:06.270 02:40:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.528 [2024-07-11 02:40:31.545090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.528 BaseBdev2 00:17:06.528 02:40:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:06.528 02:40:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:06.528 02:40:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:06.528 02:40:31 -- common/autotest_common.sh@889 -- # local i 00:17:06.528 02:40:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:06.528 02:40:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:06.528 02:40:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.790 02:40:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.056 [ 00:17:07.056 { 00:17:07.056 "name": "BaseBdev2", 00:17:07.056 "aliases": [ 00:17:07.056 "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f" 00:17:07.056 ], 00:17:07.056 "product_name": "Malloc disk", 00:17:07.056 "block_size": 512, 00:17:07.056 "num_blocks": 65536, 00:17:07.056 "uuid": "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f", 00:17:07.056 "assigned_rate_limits": { 00:17:07.056 "rw_ios_per_sec": 0, 00:17:07.056 "rw_mbytes_per_sec": 0, 00:17:07.056 "r_mbytes_per_sec": 0, 00:17:07.056 "w_mbytes_per_sec": 0 00:17:07.056 }, 00:17:07.056 "claimed": true, 00:17:07.056 "claim_type": "exclusive_write", 00:17:07.056 "zoned": false, 00:17:07.056 "supported_io_types": { 00:17:07.057 "read": true, 00:17:07.057 "write": true, 00:17:07.057 "unmap": true, 00:17:07.057 "write_zeroes": true, 00:17:07.057 "flush": true, 00:17:07.057 "reset": true, 00:17:07.057 "compare": false, 00:17:07.057 "compare_and_write": false, 00:17:07.057 "abort": true, 00:17:07.057 "nvme_admin": false, 00:17:07.057 "nvme_io": false 00:17:07.057 }, 00:17:07.057 "memory_domains": [ 
00:17:07.057 { 00:17:07.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.057 "dma_device_type": 2 00:17:07.057 } 00:17:07.057 ], 00:17:07.057 "driver_specific": {} 00:17:07.057 } 00:17:07.057 ] 00:17:07.057 02:40:32 -- common/autotest_common.sh@895 -- # return 0 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.057 02:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.315 02:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.315 "name": "Existed_Raid", 00:17:07.315 "uuid": "7529de35-1541-4b83-9c7e-ab4689f20af2", 00:17:07.315 "strip_size_kb": 64, 00:17:07.315 "state": "configuring", 00:17:07.315 "raid_level": "raid0", 00:17:07.315 "superblock": true, 00:17:07.315 "num_base_bdevs": 4, 00:17:07.315 "num_base_bdevs_discovered": 2, 00:17:07.315 "num_base_bdevs_operational": 4, 00:17:07.315 "base_bdevs_list": [ 00:17:07.315 { 00:17:07.315 "name": "BaseBdev1", 00:17:07.315 "uuid": "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b", 00:17:07.315 "is_configured": true, 00:17:07.315 "data_offset": 2048, 00:17:07.315 "data_size": 63488 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev2", 00:17:07.315 "uuid": "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f", 00:17:07.315 "is_configured": true, 00:17:07.315 "data_offset": 2048, 00:17:07.315 "data_size": 63488 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev3", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev4", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 } 00:17:07.315 ] 00:17:07.315 }' 00:17:07.315 02:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.315 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:17:07.882 02:40:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:08.140 [2024-07-11 02:40:33.129905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.141 BaseBdev3 00:17:08.141 02:40:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:08.141 02:40:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:08.141 02:40:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:08.141 
02:40:33 -- common/autotest_common.sh@889 -- # local i 00:17:08.141 02:40:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:08.141 02:40:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:08.141 02:40:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.399 02:40:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:08.657 [ 00:17:08.658 { 00:17:08.658 "name": "BaseBdev3", 00:17:08.658 "aliases": [ 00:17:08.658 "1e660075-6c9b-4a6f-b87c-7571ecc71153" 00:17:08.658 ], 00:17:08.658 "product_name": "Malloc disk", 00:17:08.658 "block_size": 512, 00:17:08.658 "num_blocks": 65536, 00:17:08.658 "uuid": "1e660075-6c9b-4a6f-b87c-7571ecc71153", 00:17:08.658 "assigned_rate_limits": { 00:17:08.658 "rw_ios_per_sec": 0, 00:17:08.658 "rw_mbytes_per_sec": 0, 00:17:08.658 "r_mbytes_per_sec": 0, 00:17:08.658 "w_mbytes_per_sec": 0 00:17:08.658 }, 00:17:08.658 "claimed": true, 00:17:08.658 "claim_type": "exclusive_write", 00:17:08.658 "zoned": false, 00:17:08.658 "supported_io_types": { 00:17:08.658 "read": true, 00:17:08.658 "write": true, 00:17:08.658 "unmap": true, 00:17:08.658 "write_zeroes": true, 00:17:08.658 "flush": true, 00:17:08.658 "reset": true, 00:17:08.658 "compare": false, 00:17:08.658 "compare_and_write": false, 00:17:08.658 "abort": true, 00:17:08.658 "nvme_admin": false, 00:17:08.658 "nvme_io": false 00:17:08.658 }, 00:17:08.658 "memory_domains": [ 00:17:08.658 { 00:17:08.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.658 "dma_device_type": 2 00:17:08.658 } 00:17:08.658 ], 00:17:08.658 "driver_specific": {} 00:17:08.658 } 00:17:08.658 ] 00:17:08.658 02:40:33 -- common/autotest_common.sh@895 -- # return 0 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.658 02:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.915 02:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.915 "name": "Existed_Raid", 00:17:08.915 "uuid": "7529de35-1541-4b83-9c7e-ab4689f20af2", 00:17:08.915 "strip_size_kb": 64, 00:17:08.915 "state": "configuring", 00:17:08.915 "raid_level": "raid0", 00:17:08.915 "superblock": true, 00:17:08.915 "num_base_bdevs": 4, 00:17:08.915 "num_base_bdevs_discovered": 3, 00:17:08.915 "num_base_bdevs_operational": 4, 00:17:08.915 "base_bdevs_list": [ 00:17:08.915 { 00:17:08.915 "name": 
"BaseBdev1", 00:17:08.915 "uuid": "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b", 00:17:08.915 "is_configured": true, 00:17:08.915 "data_offset": 2048, 00:17:08.915 "data_size": 63488 00:17:08.915 }, 00:17:08.915 { 00:17:08.915 "name": "BaseBdev2", 00:17:08.915 "uuid": "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f", 00:17:08.915 "is_configured": true, 00:17:08.915 "data_offset": 2048, 00:17:08.915 "data_size": 63488 00:17:08.915 }, 00:17:08.915 { 00:17:08.915 "name": "BaseBdev3", 00:17:08.915 "uuid": "1e660075-6c9b-4a6f-b87c-7571ecc71153", 00:17:08.915 "is_configured": true, 00:17:08.915 "data_offset": 2048, 00:17:08.915 "data_size": 63488 00:17:08.915 }, 00:17:08.915 { 00:17:08.915 "name": "BaseBdev4", 00:17:08.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.915 "is_configured": false, 00:17:08.915 "data_offset": 0, 00:17:08.915 "data_size": 0 00:17:08.915 } 00:17:08.915 ] 00:17:08.915 }' 00:17:08.915 02:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.915 02:40:33 -- common/autotest_common.sh@10 -- # set +x 00:17:09.482 02:40:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:09.741 [2024-07-11 02:40:34.742807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.741 [2024-07-11 02:40:34.743056] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:17:09.741 [2024-07-11 02:40:34.743071] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.741 [2024-07-11 02:40:34.743260] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:09.741 BaseBdev4 00:17:09.741 [2024-07-11 02:40:34.743669] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:17:09.741 [2024-07-11 02:40:34.743694] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:17:09.741 [2024-07-11 02:40:34.743888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.741 02:40:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:09.741 02:40:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:09.741 02:40:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:09.741 02:40:34 -- common/autotest_common.sh@889 -- # local i 00:17:09.741 02:40:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:09.741 02:40:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:09.741 02:40:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.999 02:40:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.258 [ 00:17:10.258 { 00:17:10.258 "name": "BaseBdev4", 00:17:10.258 "aliases": [ 00:17:10.258 "de553659-4d5a-4efc-b40e-15be04253470" 00:17:10.258 ], 00:17:10.258 "product_name": "Malloc disk", 00:17:10.258 "block_size": 512, 00:17:10.258 "num_blocks": 65536, 00:17:10.258 "uuid": "de553659-4d5a-4efc-b40e-15be04253470", 00:17:10.258 "assigned_rate_limits": { 00:17:10.258 "rw_ios_per_sec": 0, 00:17:10.258 "rw_mbytes_per_sec": 0, 00:17:10.258 "r_mbytes_per_sec": 0, 00:17:10.258 "w_mbytes_per_sec": 0 00:17:10.258 }, 00:17:10.258 "claimed": true, 00:17:10.258 "claim_type": "exclusive_write", 00:17:10.258 "zoned": false, 00:17:10.258 
"supported_io_types": { 00:17:10.258 "read": true, 00:17:10.258 "write": true, 00:17:10.258 "unmap": true, 00:17:10.258 "write_zeroes": true, 00:17:10.258 "flush": true, 00:17:10.258 "reset": true, 00:17:10.258 "compare": false, 00:17:10.258 "compare_and_write": false, 00:17:10.258 "abort": true, 00:17:10.258 "nvme_admin": false, 00:17:10.258 "nvme_io": false 00:17:10.258 }, 00:17:10.258 "memory_domains": [ 00:17:10.258 { 00:17:10.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.258 "dma_device_type": 2 00:17:10.258 } 00:17:10.258 ], 00:17:10.258 "driver_specific": {} 00:17:10.258 } 00:17:10.258 ] 00:17:10.258 02:40:35 -- common/autotest_common.sh@895 -- # return 0 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.258 02:40:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.517 02:40:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.517 "name": "Existed_Raid", 00:17:10.517 "uuid": "7529de35-1541-4b83-9c7e-ab4689f20af2", 00:17:10.517 "strip_size_kb": 64, 00:17:10.517 "state": "online", 00:17:10.517 "raid_level": "raid0", 00:17:10.517 "superblock": true, 00:17:10.517 "num_base_bdevs": 4, 00:17:10.517 "num_base_bdevs_discovered": 4, 00:17:10.517 "num_base_bdevs_operational": 4, 00:17:10.517 "base_bdevs_list": [ 00:17:10.517 { 00:17:10.517 "name": "BaseBdev1", 00:17:10.517 "uuid": "5aa14d7e-3772-41c6-a8a2-c246e8ebb62b", 00:17:10.517 "is_configured": true, 00:17:10.517 "data_offset": 2048, 00:17:10.517 "data_size": 63488 00:17:10.517 }, 00:17:10.517 { 00:17:10.517 "name": "BaseBdev2", 00:17:10.517 "uuid": "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f", 00:17:10.517 "is_configured": true, 00:17:10.517 "data_offset": 2048, 00:17:10.517 "data_size": 63488 00:17:10.517 }, 00:17:10.517 { 00:17:10.517 "name": "BaseBdev3", 00:17:10.517 "uuid": "1e660075-6c9b-4a6f-b87c-7571ecc71153", 00:17:10.517 "is_configured": true, 00:17:10.517 "data_offset": 2048, 00:17:10.517 "data_size": 63488 00:17:10.517 }, 00:17:10.517 { 00:17:10.517 "name": "BaseBdev4", 00:17:10.517 "uuid": "de553659-4d5a-4efc-b40e-15be04253470", 00:17:10.517 "is_configured": true, 00:17:10.517 "data_offset": 2048, 00:17:10.517 "data_size": 63488 00:17:10.517 } 00:17:10.517 ] 00:17:10.517 }' 00:17:10.517 02:40:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.517 02:40:35 -- common/autotest_common.sh@10 -- # set +x 00:17:11.084 02:40:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:11.343 [2024-07-11 02:40:36.243246] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.343 [2024-07-11 02:40:36.243283] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.343 [2024-07-11 02:40:36.243379] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.343 02:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.601 02:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.601 "name": "Existed_Raid", 00:17:11.601 "uuid": "7529de35-1541-4b83-9c7e-ab4689f20af2", 00:17:11.601 "strip_size_kb": 64, 00:17:11.601 "state": "offline", 00:17:11.601 "raid_level": "raid0", 00:17:11.601 "superblock": true, 00:17:11.601 "num_base_bdevs": 4, 00:17:11.601 "num_base_bdevs_discovered": 3, 00:17:11.601 "num_base_bdevs_operational": 3, 00:17:11.601 "base_bdevs_list": [ 00:17:11.601 { 00:17:11.601 "name": null, 00:17:11.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.601 "is_configured": false, 00:17:11.601 "data_offset": 2048, 00:17:11.601 "data_size": 63488 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": "BaseBdev2", 00:17:11.601 "uuid": "e1cbe223-7aab-4d6d-8db6-50cebf7ef33f", 00:17:11.601 "is_configured": true, 00:17:11.601 "data_offset": 2048, 00:17:11.601 "data_size": 63488 00:17:11.602 }, 00:17:11.602 { 00:17:11.602 "name": "BaseBdev3", 00:17:11.602 "uuid": "1e660075-6c9b-4a6f-b87c-7571ecc71153", 00:17:11.602 "is_configured": true, 00:17:11.602 "data_offset": 2048, 00:17:11.602 "data_size": 63488 00:17:11.602 }, 00:17:11.602 { 00:17:11.602 "name": "BaseBdev4", 00:17:11.602 "uuid": "de553659-4d5a-4efc-b40e-15be04253470", 00:17:11.602 "is_configured": true, 00:17:11.602 "data_offset": 2048, 00:17:11.602 "data_size": 63488 00:17:11.602 } 00:17:11.602 ] 00:17:11.602 }' 00:17:11.602 02:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.602 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:17:12.169 02:40:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:12.169 02:40:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:12.169 02:40:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:12.169 02:40:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:12.427 02:40:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:12.427 02:40:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:12.427 02:40:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:12.686 [2024-07-11 02:40:37.608391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:12.686 02:40:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:12.686 02:40:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:12.686 02:40:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.686 02:40:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:12.945 02:40:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:12.945 02:40:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:12.945 02:40:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:12.945 [2024-07-11 02:40:37.998497] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.945 02:40:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:12.945 02:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:12.945 02:40:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:12.945 02:40:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.203 02:40:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:13.203 02:40:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.203 02:40:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:13.462 [2024-07-11 02:40:38.464623] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:13.462 [2024-07-11 02:40:38.464685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:17:13.462 02:40:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:13.462 02:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.462 02:40:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.462 02:40:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:13.721 02:40:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:13.721 02:40:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:13.721 02:40:38 -- bdev/bdev_raid.sh@287 -- # killprocess 131467 00:17:13.721 02:40:38 -- common/autotest_common.sh@926 -- # '[' -z 131467 ']' 00:17:13.721 02:40:38 -- common/autotest_common.sh@930 -- # kill -0 131467 00:17:13.721 02:40:38 -- common/autotest_common.sh@931 -- # uname 00:17:13.721 02:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:13.721 02:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131467 00:17:13.721 killing process with pid 131467 00:17:13.721 02:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:13.721 02:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:13.721 02:40:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131467' 00:17:13.721 02:40:38 -- 
common/autotest_common.sh@945 -- # kill 131467 00:17:13.721 02:40:38 -- common/autotest_common.sh@950 -- # wait 131467 00:17:13.721 [2024-07-11 02:40:38.726209] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.721 [2024-07-11 02:40:38.726549] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.980 ************************************ 00:17:13.980 END TEST raid_state_function_test_sb 00:17:13.980 ************************************ 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:13.980 00:17:13.980 real 0m14.038s 00:17:13.980 user 0m26.277s 00:17:13.980 sys 0m1.645s 00:17:13.980 02:40:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.980 02:40:38 -- common/autotest_common.sh@10 -- # set +x 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:13.980 02:40:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:13.980 02:40:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:13.980 02:40:38 -- common/autotest_common.sh@10 -- # set +x 00:17:13.980 ************************************ 00:17:13.980 START TEST raid_superblock_test 00:17:13.980 ************************************ 00:17:13.980 02:40:38 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=131924 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131924 /var/tmp/spdk-raid.sock 00:17:13.980 02:40:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:13.980 02:40:38 -- common/autotest_common.sh@819 -- # '[' -z 131924 ']' 00:17:13.980 02:40:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.980 02:40:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:13.980 02:40:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
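[Annotation] raid_superblock_test builds its fixture differently from the state-function test: each leg is a malloc disk wrapped in a passthru bdev with a fixed, well-known UUID, so the superblock that -s writes can later be matched back to a specific device. One leg of the setup the trace walks through next (repeated for pt2..pt4 with their own UUIDs); the 32/512 arguments appear to be size in MiB and block size, which matches the num_blocks 65536 in the JSON dumps:

    # A malloc disk (65536 x 512-byte blocks per the later bdev_get_bdevs
    # output), then a passthru bdev pinned to a fixed UUID.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001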
00:17:13.980 02:40:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.980 02:40:38 -- common/autotest_common.sh@10 -- # set +x 00:17:13.980 [2024-07-11 02:40:39.045509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:13.980 [2024-07-11 02:40:39.046537] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131924 ] 00:17:14.239 [2024-07-11 02:40:39.193928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.239 [2024-07-11 02:40:39.257707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.239 [2024-07-11 02:40:39.313811] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.806 02:40:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.806 02:40:39 -- common/autotest_common.sh@852 -- # return 0 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.806 02:40:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:15.064 malloc1 00:17:15.064 02:40:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.322 [2024-07-11 02:40:40.314455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.322 [2024-07-11 02:40:40.315005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.322 [2024-07-11 02:40:40.315269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:17:15.322 [2024-07-11 02:40:40.315480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.322 [2024-07-11 02:40:40.317713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.322 [2024-07-11 02:40:40.317913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.322 pt1 00:17:15.322 02:40:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:15.322 02:40:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:15.322 02:40:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.323 02:40:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:15.581 malloc2 00:17:15.581 02:40:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.840 [2024-07-11 02:40:40.816894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.840 [2024-07-11 02:40:40.816984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.840 [2024-07-11 02:40:40.817037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:15.840 [2024-07-11 02:40:40.817092] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.840 [2024-07-11 02:40:40.819310] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.840 [2024-07-11 02:40:40.819364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.840 pt2 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.840 02:40:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:16.098 malloc3 00:17:16.098 02:40:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.357 [2024-07-11 02:40:41.228776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.357 [2024-07-11 02:40:41.228869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.357 [2024-07-11 02:40:41.228940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:16.357 [2024-07-11 02:40:41.228980] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.357 [2024-07-11 02:40:41.231036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.357 [2024-07-11 02:40:41.231101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.357 pt3 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.357 02:40:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:16.615 malloc4 00:17:16.615 02:40:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:16.615 [2024-07-11 02:40:41.694904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:16.615 [2024-07-11 02:40:41.695021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.615 [2024-07-11 02:40:41.695054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:16.615 [2024-07-11 02:40:41.695137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.615 [2024-07-11 02:40:41.697168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.615 [2024-07-11 02:40:41.697218] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:16.615 pt4 00:17:16.616 02:40:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:16.616 02:40:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:16.616 02:40:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:16.874 [2024-07-11 02:40:41.903160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.874 [2024-07-11 02:40:41.905030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.874 [2024-07-11 02:40:41.905116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.874 [2024-07-11 02:40:41.905164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:16.874 [2024-07-11 02:40:41.905410] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:16.874 [2024-07-11 02:40:41.905432] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:16.874 [2024-07-11 02:40:41.905582] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:16.874 [2024-07-11 02:40:41.906024] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:16.874 [2024-07-11 02:40:41.906047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:16.874 [2024-07-11 02:40:41.906262] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:16.874 02:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.132 02:40:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.132 "name": "raid_bdev1", 00:17:17.132 "uuid": "66cd7931-a8d9-4f30-aaaa-748f25491bb8", 00:17:17.132 "strip_size_kb": 64, 00:17:17.132 "state": "online", 00:17:17.132 "raid_level": "raid0", 00:17:17.132 "superblock": true, 00:17:17.132 "num_base_bdevs": 4, 00:17:17.132 "num_base_bdevs_discovered": 4, 00:17:17.132 "num_base_bdevs_operational": 4, 00:17:17.132 "base_bdevs_list": [ 00:17:17.132 { 00:17:17.132 "name": "pt1", 00:17:17.132 "uuid": "9f9cbeb6-694c-5102-85da-4f9244982bac", 00:17:17.132 "is_configured": true, 00:17:17.132 "data_offset": 2048, 00:17:17.132 "data_size": 63488 00:17:17.132 }, 00:17:17.132 { 00:17:17.132 "name": "pt2", 00:17:17.132 "uuid": "2fe02c53-b6d7-573a-bca9-53f33df5300f", 00:17:17.132 "is_configured": true, 00:17:17.132 "data_offset": 2048, 00:17:17.132 "data_size": 63488 00:17:17.132 }, 00:17:17.132 { 00:17:17.132 "name": "pt3", 00:17:17.132 "uuid": "25f80230-f9cb-5092-a622-8cbcc169ff00", 00:17:17.132 "is_configured": true, 00:17:17.132 "data_offset": 2048, 00:17:17.132 "data_size": 63488 00:17:17.132 }, 00:17:17.132 { 00:17:17.132 "name": "pt4", 00:17:17.132 "uuid": "1304488d-670c-58ba-b1d1-50c20ef31008", 00:17:17.132 "is_configured": true, 00:17:17.132 "data_offset": 2048, 00:17:17.132 "data_size": 63488 00:17:17.132 } 00:17:17.132 ] 00:17:17.132 }' 00:17:17.132 02:40:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.132 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:17:17.726 02:40:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:17.726 02:40:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:17.983 [2024-07-11 02:40:43.023405] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.983 02:40:43 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=66cd7931-a8d9-4f30-aaaa-748f25491bb8 00:17:17.983 02:40:43 -- bdev/bdev_raid.sh@380 -- # '[' -z 66cd7931-a8d9-4f30-aaaa-748f25491bb8 ']' 00:17:17.983 02:40:43 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:18.241 [2024-07-11 02:40:43.215228] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.241 [2024-07-11 02:40:43.215258] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.241 [2024-07-11 02:40:43.215351] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.241 [2024-07-11 02:40:43.215476] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.241 [2024-07-11 02:40:43.215496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:18.241 02:40:43 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.241 02:40:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:18.499 02:40:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:18.499 02:40:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:18.499 02:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.499 02:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
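[Annotation] With raid_bdev1 online and its UUID recorded, the test moves to the negative half: the loop starting here deletes each passthru bdev (pt1 through pt4), confirms via jq that no passthru bdevs remain, then attempts to re-create the array directly on the malloc disks. Because the earlier create ran with -s, every malloc disk still carries a raid superblock, so the module is expected to refuse. A sketch of the failing call and the error the log captures below:

    # Expected to fail with JSON-RPC error -17 ("File exists"): each malloc
    # bdev still holds the superblock written through its passthru wrapper.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1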
00:17:18.756 02:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.756 02:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:18.756 02:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.756 02:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:19.014 02:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.014 02:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:19.273 02:40:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:19.273 02:40:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.273 02:40:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:19.273 02:40:44 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:19.273 02:40:44 -- common/autotest_common.sh@640 -- # local es=0 00:17:19.273 02:40:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:19.273 02:40:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.273 02:40:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.273 02:40:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.273 02:40:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.273 02:40:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.273 02:40:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.273 02:40:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.273 02:40:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:19.273 02:40:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:19.532 [2024-07-11 02:40:44.587483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:19.532 [2024-07-11 02:40:44.589191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:19.532 [2024-07-11 02:40:44.589242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:19.532 [2024-07-11 02:40:44.589275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:19.532 [2024-07-11 02:40:44.589325] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:19.532 [2024-07-11 02:40:44.589415] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:19.532 [2024-07-11 02:40:44.589466] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:19.532 [2024-07-11 
02:40:44.589513] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:19.532 [2024-07-11 02:40:44.589551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.532 [2024-07-11 02:40:44.589561] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:19.532 request: 00:17:19.532 { 00:17:19.532 "name": "raid_bdev1", 00:17:19.532 "raid_level": "raid0", 00:17:19.532 "base_bdevs": [ 00:17:19.532 "malloc1", 00:17:19.532 "malloc2", 00:17:19.532 "malloc3", 00:17:19.532 "malloc4" 00:17:19.532 ], 00:17:19.532 "superblock": false, 00:17:19.532 "strip_size_kb": 64, 00:17:19.532 "method": "bdev_raid_create", 00:17:19.532 "req_id": 1 00:17:19.532 } 00:17:19.532 Got JSON-RPC error response 00:17:19.532 response: 00:17:19.532 { 00:17:19.532 "code": -17, 00:17:19.532 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:19.532 } 00:17:19.532 02:40:44 -- common/autotest_common.sh@643 -- # es=1 00:17:19.532 02:40:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:19.532 02:40:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:19.532 02:40:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:19.532 02:40:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.532 02:40:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:19.790 02:40:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:19.790 02:40:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:19.790 02:40:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.048 [2024-07-11 02:40:44.955490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.048 [2024-07-11 02:40:44.955577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.048 [2024-07-11 02:40:44.955609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:20.048 [2024-07-11 02:40:44.955634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.048 [2024-07-11 02:40:44.957618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.048 [2024-07-11 02:40:44.957720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.048 [2024-07-11 02:40:44.957823] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:20.048 [2024-07-11 02:40:44.957898] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.048 pt1 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.048 02:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.307 02:40:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.307 "name": "raid_bdev1", 00:17:20.307 "uuid": "66cd7931-a8d9-4f30-aaaa-748f25491bb8", 00:17:20.307 "strip_size_kb": 64, 00:17:20.307 "state": "configuring", 00:17:20.307 "raid_level": "raid0", 00:17:20.307 "superblock": true, 00:17:20.307 "num_base_bdevs": 4, 00:17:20.307 "num_base_bdevs_discovered": 1, 00:17:20.307 "num_base_bdevs_operational": 4, 00:17:20.307 "base_bdevs_list": [ 00:17:20.307 { 00:17:20.307 "name": "pt1", 00:17:20.307 "uuid": "9f9cbeb6-694c-5102-85da-4f9244982bac", 00:17:20.307 "is_configured": true, 00:17:20.307 "data_offset": 2048, 00:17:20.307 "data_size": 63488 00:17:20.307 }, 00:17:20.307 { 00:17:20.307 "name": null, 00:17:20.307 "uuid": "2fe02c53-b6d7-573a-bca9-53f33df5300f", 00:17:20.307 "is_configured": false, 00:17:20.307 "data_offset": 2048, 00:17:20.307 "data_size": 63488 00:17:20.307 }, 00:17:20.307 { 00:17:20.307 "name": null, 00:17:20.307 "uuid": "25f80230-f9cb-5092-a622-8cbcc169ff00", 00:17:20.307 "is_configured": false, 00:17:20.307 "data_offset": 2048, 00:17:20.307 "data_size": 63488 00:17:20.307 }, 00:17:20.307 { 00:17:20.307 "name": null, 00:17:20.307 "uuid": "1304488d-670c-58ba-b1d1-50c20ef31008", 00:17:20.307 "is_configured": false, 00:17:20.307 "data_offset": 2048, 00:17:20.307 "data_size": 63488 00:17:20.307 } 00:17:20.307 ] 00:17:20.307 }' 00:17:20.307 02:40:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.307 02:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 02:40:45 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:20.874 02:40:45 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.874 [2024-07-11 02:40:45.947758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.874 [2024-07-11 02:40:45.947880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.874 [2024-07-11 02:40:45.947923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:20.874 [2024-07-11 02:40:45.947945] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.874 [2024-07-11 02:40:45.948454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.874 [2024-07-11 02:40:45.948553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.874 [2024-07-11 02:40:45.948645] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:20.874 [2024-07-11 02:40:45.948689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.874 pt2 00:17:20.874 02:40:45 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:21.132 [2024-07-11 02:40:46.127757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
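This is the negative path of the test: with a raid superblock already written to malloc1 through malloc4, bdev_raid_create must refuse to build a second array over them, and the NOT wrapper asserts that the JSON-RPC call fails (code -17, "File exists"). Re-creating pt1 then lets the examine path rediscover the superblock, leaving raid_bdev1 in state "configuring" with 1 of 4 base bdevs discovered. A hedged sketch of the failure check, using the same names and socket as above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $RPC bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected success" >&2; exit 1   # the suite treats success here as a failure
    fi   # expected: JSON-RPC error -17, 'Failed to create RAID bdev raid_bdev1: File exists'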
00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.132 02:40:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.390 02:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.390 "name": "raid_bdev1", 00:17:21.390 "uuid": "66cd7931-a8d9-4f30-aaaa-748f25491bb8", 00:17:21.390 "strip_size_kb": 64, 00:17:21.390 "state": "configuring", 00:17:21.390 "raid_level": "raid0", 00:17:21.390 "superblock": true, 00:17:21.390 "num_base_bdevs": 4, 00:17:21.390 "num_base_bdevs_discovered": 1, 00:17:21.390 "num_base_bdevs_operational": 4, 00:17:21.390 "base_bdevs_list": [ 00:17:21.390 { 00:17:21.390 "name": "pt1", 00:17:21.390 "uuid": "9f9cbeb6-694c-5102-85da-4f9244982bac", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "2fe02c53-b6d7-573a-bca9-53f33df5300f", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "25f80230-f9cb-5092-a622-8cbcc169ff00", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "1304488d-670c-58ba-b1d1-50c20ef31008", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 } 00:17:21.390 ] 00:17:21.390 }' 00:17:21.390 02:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.390 02:40:46 -- common/autotest_common.sh@10 -- # set +x 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.326 [2024-07-11 02:40:47.320031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.326 [2024-07-11 02:40:47.320148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.326 [2024-07-11 02:40:47.320191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:22.326 [2024-07-11 02:40:47.320212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.326 [2024-07-11 02:40:47.320824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.326 [2024-07-11 02:40:47.320903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.326 [2024-07-11 02:40:47.321004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:22.326 [2024-07-11 02:40:47.321062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.326 pt2 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:22.326 02:40:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:22.585 [2024-07-11 02:40:47.556078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:22.585 [2024-07-11 02:40:47.556188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.585 [2024-07-11 02:40:47.556220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:22.585 [2024-07-11 02:40:47.556245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.585 [2024-07-11 02:40:47.556670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.585 [2024-07-11 02:40:47.556724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:22.585 [2024-07-11 02:40:47.556805] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:22.585 [2024-07-11 02:40:47.556831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:22.585 pt3 00:17:22.585 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:22.585 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:22.585 02:40:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:22.843 [2024-07-11 02:40:47.800097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:22.843 [2024-07-11 02:40:47.800173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.843 [2024-07-11 02:40:47.800200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:22.843 [2024-07-11 02:40:47.800222] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.843 [2024-07-11 02:40:47.800578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.843 [2024-07-11 02:40:47.800630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:22.843 [2024-07-11 02:40:47.800691] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:22.843 [2024-07-11 02:40:47.800714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:22.843 [2024-07-11 02:40:47.800829] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:22.843 [2024-07-11 02:40:47.800841] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:22.843 [2024-07-11 02:40:47.800922] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:22.843 [2024-07-11 02:40:47.801236] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:22.843 [2024-07-11 02:40:47.801258] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:22.843 [2024-07-11 02:40:47.801353] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:22.843 pt4 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.843 02:40:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.101 02:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.101 "name": "raid_bdev1", 00:17:23.101 "uuid": "66cd7931-a8d9-4f30-aaaa-748f25491bb8", 00:17:23.101 "strip_size_kb": 64, 00:17:23.101 "state": "online", 00:17:23.101 "raid_level": "raid0", 00:17:23.101 "superblock": true, 00:17:23.101 "num_base_bdevs": 4, 00:17:23.101 "num_base_bdevs_discovered": 4, 00:17:23.101 "num_base_bdevs_operational": 4, 00:17:23.101 "base_bdevs_list": [ 00:17:23.101 { 00:17:23.102 "name": "pt1", 00:17:23.102 "uuid": "9f9cbeb6-694c-5102-85da-4f9244982bac", 00:17:23.102 "is_configured": true, 00:17:23.102 "data_offset": 2048, 00:17:23.102 "data_size": 63488 00:17:23.102 }, 00:17:23.102 { 00:17:23.102 "name": "pt2", 00:17:23.102 "uuid": "2fe02c53-b6d7-573a-bca9-53f33df5300f", 00:17:23.102 "is_configured": true, 00:17:23.102 "data_offset": 2048, 00:17:23.102 "data_size": 63488 00:17:23.102 }, 00:17:23.102 { 00:17:23.102 "name": "pt3", 00:17:23.102 "uuid": "25f80230-f9cb-5092-a622-8cbcc169ff00", 00:17:23.102 "is_configured": true, 00:17:23.102 "data_offset": 2048, 00:17:23.102 "data_size": 63488 00:17:23.102 }, 00:17:23.102 { 00:17:23.102 "name": "pt4", 00:17:23.102 "uuid": "1304488d-670c-58ba-b1d1-50c20ef31008", 00:17:23.102 "is_configured": true, 00:17:23.102 "data_offset": 2048, 00:17:23.102 "data_size": 63488 00:17:23.102 } 00:17:23.102 ] 00:17:23.102 }' 00:17:23.102 02:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.102 02:40:48 -- common/autotest_common.sh@10 -- # set +x 00:17:23.667 02:40:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.667 02:40:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:23.925 [2024-07-11 02:40:48.944557] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.925 02:40:48 -- bdev/bdev_raid.sh@430 -- # '[' 66cd7931-a8d9-4f30-aaaa-748f25491bb8 '!=' 66cd7931-a8d9-4f30-aaaa-748f25491bb8 ']' 00:17:23.925 02:40:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:23.925 02:40:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:23.925 02:40:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:23.925 02:40:48 -- bdev/bdev_raid.sh@511 -- # killprocess 131924 00:17:23.925 02:40:48 -- common/autotest_common.sh@926 -- # '[' -z 
131924 ']' 00:17:23.925 02:40:48 -- common/autotest_common.sh@930 -- # kill -0 131924 00:17:23.925 02:40:48 -- common/autotest_common.sh@931 -- # uname 00:17:23.925 02:40:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:23.925 02:40:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131924 00:17:23.925 killing process with pid 131924 00:17:23.925 02:40:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:23.926 02:40:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:23.926 02:40:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131924' 00:17:23.926 02:40:48 -- common/autotest_common.sh@945 -- # kill 131924 00:17:23.926 02:40:48 -- common/autotest_common.sh@950 -- # wait 131924 00:17:23.926 [2024-07-11 02:40:48.982542] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.926 [2024-07-11 02:40:48.982639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.926 [2024-07-11 02:40:48.982705] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.926 [2024-07-11 02:40:48.982731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:24.184 [2024-07-11 02:40:49.022497] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.184 ************************************ 00:17:24.184 END TEST raid_superblock_test 00:17:24.184 ************************************ 00:17:24.184 02:40:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:24.184 00:17:24.184 real 0m10.247s 00:17:24.184 user 0m18.762s 00:17:24.184 sys 0m1.259s 00:17:24.184 02:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.184 02:40:49 -- common/autotest_common.sh@10 -- # set +x 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:24.443 02:40:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:24.443 02:40:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.443 02:40:49 -- common/autotest_common.sh@10 -- # set +x 00:17:24.443 ************************************ 00:17:24.443 START TEST raid_state_function_test 00:17:24.443 ************************************ 00:17:24.443 02:40:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.443 02:40:49 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=132248 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132248' 00:17:24.443 Process raid pid: 132248 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132248 /var/tmp/spdk-raid.sock 00:17:24.443 02:40:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:24.443 02:40:49 -- common/autotest_common.sh@819 -- # '[' -z 132248 ']' 00:17:24.443 02:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.443 02:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.443 02:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.443 02:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.443 02:40:49 -- common/autotest_common.sh@10 -- # set +x 00:17:24.443 [2024-07-11 02:40:49.348250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
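Setup for raid_state_function_test: before launching bdev_svc on the dedicated RPC socket, the harness derives its base bdev names from num_base_bdevs. The array construction mirrors the subshell loop visible in the xtrace above (num_base_bdevs=4 in this run):

    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    echo "${base_bdevs[@]}"   # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4

Because concat is not raid1, the strip-size arguments resolve to '-z 64', and with superblock=false no superblock flag is passed on to bdev_raid_create.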
00:17:24.443 [2024-07-11 02:40:49.349062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.443 [2024-07-11 02:40:49.488691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.702 [2024-07-11 02:40:49.544851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.702 [2024-07-11 02:40:49.596180] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.270 02:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.270 02:40:50 -- common/autotest_common.sh@852 -- # return 0 00:17:25.270 02:40:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:25.529 [2024-07-11 02:40:50.571934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.529 [2024-07-11 02:40:50.572009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.529 [2024-07-11 02:40:50.572039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.529 [2024-07-11 02:40:50.572056] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.529 [2024-07-11 02:40:50.572063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.529 [2024-07-11 02:40:50.572099] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.529 [2024-07-11 02:40:50.572107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:25.529 [2024-07-11 02:40:50.572129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.529 02:40:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.788 02:40:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.788 "name": "Existed_Raid", 00:17:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.788 "strip_size_kb": 64, 00:17:25.788 "state": "configuring", 00:17:25.788 "raid_level": "concat", 00:17:25.788 "superblock": false, 00:17:25.788 "num_base_bdevs": 4, 00:17:25.788 "num_base_bdevs_discovered": 0, 00:17:25.788 "num_base_bdevs_operational": 4, 00:17:25.788 "base_bdevs_list": [ 00:17:25.788 { 00:17:25.788 
"name": "BaseBdev1", 00:17:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.788 "is_configured": false, 00:17:25.788 "data_offset": 0, 00:17:25.788 "data_size": 0 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "name": "BaseBdev2", 00:17:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.788 "is_configured": false, 00:17:25.788 "data_offset": 0, 00:17:25.788 "data_size": 0 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "name": "BaseBdev3", 00:17:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.788 "is_configured": false, 00:17:25.788 "data_offset": 0, 00:17:25.788 "data_size": 0 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "name": "BaseBdev4", 00:17:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.788 "is_configured": false, 00:17:25.788 "data_offset": 0, 00:17:25.788 "data_size": 0 00:17:25.788 } 00:17:25.788 ] 00:17:25.788 }' 00:17:25.788 02:40:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.788 02:40:50 -- common/autotest_common.sh@10 -- # set +x 00:17:26.724 02:40:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.724 [2024-07-11 02:40:51.692028] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.724 [2024-07-11 02:40:51.692093] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:26.724 02:40:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.982 [2024-07-11 02:40:51.944078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.982 [2024-07-11 02:40:51.944154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.982 [2024-07-11 02:40:51.944183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.982 [2024-07-11 02:40:51.944206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.982 [2024-07-11 02:40:51.944214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.982 [2024-07-11 02:40:51.944229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.982 [2024-07-11 02:40:51.944235] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.982 [2024-07-11 02:40:51.944257] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.982 02:40:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.242 [2024-07-11 02:40:52.138153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.242 BaseBdev1 00:17:27.242 02:40:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:27.242 02:40:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:27.242 02:40:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.242 02:40:52 -- common/autotest_common.sh@889 -- # local i 00:17:27.242 02:40:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.242 02:40:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:27.242 02:40:52 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.500 02:40:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.500 [ 00:17:27.500 { 00:17:27.500 "name": "BaseBdev1", 00:17:27.500 "aliases": [ 00:17:27.500 "d24125bb-4250-4276-9032-babab9aae6cd" 00:17:27.500 ], 00:17:27.500 "product_name": "Malloc disk", 00:17:27.500 "block_size": 512, 00:17:27.500 "num_blocks": 65536, 00:17:27.501 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:27.501 "assigned_rate_limits": { 00:17:27.501 "rw_ios_per_sec": 0, 00:17:27.501 "rw_mbytes_per_sec": 0, 00:17:27.501 "r_mbytes_per_sec": 0, 00:17:27.501 "w_mbytes_per_sec": 0 00:17:27.501 }, 00:17:27.501 "claimed": true, 00:17:27.501 "claim_type": "exclusive_write", 00:17:27.501 "zoned": false, 00:17:27.501 "supported_io_types": { 00:17:27.501 "read": true, 00:17:27.501 "write": true, 00:17:27.501 "unmap": true, 00:17:27.501 "write_zeroes": true, 00:17:27.501 "flush": true, 00:17:27.501 "reset": true, 00:17:27.501 "compare": false, 00:17:27.501 "compare_and_write": false, 00:17:27.501 "abort": true, 00:17:27.501 "nvme_admin": false, 00:17:27.501 "nvme_io": false 00:17:27.501 }, 00:17:27.501 "memory_domains": [ 00:17:27.501 { 00:17:27.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.501 "dma_device_type": 2 00:17:27.501 } 00:17:27.501 ], 00:17:27.501 "driver_specific": {} 00:17:27.501 } 00:17:27.501 ] 00:17:27.501 02:40:52 -- common/autotest_common.sh@895 -- # return 0 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.501 02:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.759 02:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.759 "name": "Existed_Raid", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.759 "strip_size_kb": 64, 00:17:27.759 "state": "configuring", 00:17:27.759 "raid_level": "concat", 00:17:27.759 "superblock": false, 00:17:27.759 "num_base_bdevs": 4, 00:17:27.759 "num_base_bdevs_discovered": 1, 00:17:27.759 "num_base_bdevs_operational": 4, 00:17:27.759 "base_bdevs_list": [ 00:17:27.759 { 00:17:27.759 "name": "BaseBdev1", 00:17:27.759 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:27.759 "is_configured": true, 00:17:27.759 "data_offset": 0, 00:17:27.759 "data_size": 65536 00:17:27.759 }, 00:17:27.759 { 00:17:27.759 "name": "BaseBdev2", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.759 "is_configured": false, 00:17:27.759 "data_offset": 0, 00:17:27.759 "data_size": 0 00:17:27.759 }, 
00:17:27.759 { 00:17:27.759 "name": "BaseBdev3", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.759 "is_configured": false, 00:17:27.759 "data_offset": 0, 00:17:27.759 "data_size": 0 00:17:27.759 }, 00:17:27.759 { 00:17:27.759 "name": "BaseBdev4", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.759 "is_configured": false, 00:17:27.759 "data_offset": 0, 00:17:27.759 "data_size": 0 00:17:27.759 } 00:17:27.759 ] 00:17:27.759 }' 00:17:27.759 02:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.759 02:40:52 -- common/autotest_common.sh@10 -- # set +x 00:17:28.721 02:40:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:28.721 [2024-07-11 02:40:53.702516] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.721 [2024-07-11 02:40:53.702603] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:17:28.721 02:40:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:28.721 02:40:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:28.979 [2024-07-11 02:40:53.950611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.979 [2024-07-11 02:40:53.952287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.979 [2024-07-11 02:40:53.952351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.979 [2024-07-11 02:40:53.952380] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.979 [2024-07-11 02:40:53.952401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.979 [2024-07-11 02:40:53.952409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:28.979 [2024-07-11 02:40:53.952422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.979 02:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.243 02:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.243 "name": "Existed_Raid", 00:17:29.243 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.243 "strip_size_kb": 64, 00:17:29.243 "state": "configuring", 00:17:29.243 "raid_level": "concat", 00:17:29.243 "superblock": false, 00:17:29.243 "num_base_bdevs": 4, 00:17:29.243 "num_base_bdevs_discovered": 1, 00:17:29.243 "num_base_bdevs_operational": 4, 00:17:29.243 "base_bdevs_list": [ 00:17:29.243 { 00:17:29.243 "name": "BaseBdev1", 00:17:29.243 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:29.243 "is_configured": true, 00:17:29.243 "data_offset": 0, 00:17:29.243 "data_size": 65536 00:17:29.243 }, 00:17:29.243 { 00:17:29.243 "name": "BaseBdev2", 00:17:29.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.243 "is_configured": false, 00:17:29.243 "data_offset": 0, 00:17:29.243 "data_size": 0 00:17:29.243 }, 00:17:29.243 { 00:17:29.243 "name": "BaseBdev3", 00:17:29.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.243 "is_configured": false, 00:17:29.243 "data_offset": 0, 00:17:29.243 "data_size": 0 00:17:29.243 }, 00:17:29.243 { 00:17:29.243 "name": "BaseBdev4", 00:17:29.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.243 "is_configured": false, 00:17:29.243 "data_offset": 0, 00:17:29.243 "data_size": 0 00:17:29.243 } 00:17:29.243 ] 00:17:29.243 }' 00:17:29.243 02:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.243 02:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:29.810 02:40:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:30.068 [2024-07-11 02:40:55.033557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.068 BaseBdev2 00:17:30.068 02:40:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:30.068 02:40:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:30.068 02:40:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:30.068 02:40:55 -- common/autotest_common.sh@889 -- # local i 00:17:30.068 02:40:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:30.068 02:40:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:30.068 02:40:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.327 02:40:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.585 [ 00:17:30.586 { 00:17:30.586 "name": "BaseBdev2", 00:17:30.586 "aliases": [ 00:17:30.586 "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4" 00:17:30.586 ], 00:17:30.586 "product_name": "Malloc disk", 00:17:30.586 "block_size": 512, 00:17:30.586 "num_blocks": 65536, 00:17:30.586 "uuid": "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4", 00:17:30.586 "assigned_rate_limits": { 00:17:30.586 "rw_ios_per_sec": 0, 00:17:30.586 "rw_mbytes_per_sec": 0, 00:17:30.586 "r_mbytes_per_sec": 0, 00:17:30.586 "w_mbytes_per_sec": 0 00:17:30.586 }, 00:17:30.586 "claimed": true, 00:17:30.586 "claim_type": "exclusive_write", 00:17:30.586 "zoned": false, 00:17:30.586 "supported_io_types": { 00:17:30.586 "read": true, 00:17:30.586 "write": true, 00:17:30.586 "unmap": true, 00:17:30.586 "write_zeroes": true, 00:17:30.586 "flush": true, 00:17:30.586 "reset": true, 00:17:30.586 "compare": false, 00:17:30.586 "compare_and_write": false, 00:17:30.586 "abort": true, 00:17:30.586 "nvme_admin": false, 00:17:30.586 "nvme_io": false 00:17:30.586 }, 00:17:30.586 "memory_domains": [ 
00:17:30.586 { 00:17:30.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.586 "dma_device_type": 2 00:17:30.586 } 00:17:30.586 ], 00:17:30.586 "driver_specific": {} 00:17:30.586 } 00:17:30.586 ] 00:17:30.586 02:40:55 -- common/autotest_common.sh@895 -- # return 0 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.586 02:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.844 02:40:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.844 "name": "Existed_Raid", 00:17:30.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.844 "strip_size_kb": 64, 00:17:30.844 "state": "configuring", 00:17:30.844 "raid_level": "concat", 00:17:30.844 "superblock": false, 00:17:30.844 "num_base_bdevs": 4, 00:17:30.844 "num_base_bdevs_discovered": 2, 00:17:30.844 "num_base_bdevs_operational": 4, 00:17:30.844 "base_bdevs_list": [ 00:17:30.844 { 00:17:30.844 "name": "BaseBdev1", 00:17:30.844 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:30.844 "is_configured": true, 00:17:30.844 "data_offset": 0, 00:17:30.844 "data_size": 65536 00:17:30.844 }, 00:17:30.844 { 00:17:30.844 "name": "BaseBdev2", 00:17:30.844 "uuid": "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4", 00:17:30.844 "is_configured": true, 00:17:30.844 "data_offset": 0, 00:17:30.844 "data_size": 65536 00:17:30.844 }, 00:17:30.844 { 00:17:30.844 "name": "BaseBdev3", 00:17:30.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.844 "is_configured": false, 00:17:30.844 "data_offset": 0, 00:17:30.844 "data_size": 0 00:17:30.844 }, 00:17:30.844 { 00:17:30.844 "name": "BaseBdev4", 00:17:30.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.845 "is_configured": false, 00:17:30.845 "data_offset": 0, 00:17:30.845 "data_size": 0 00:17:30.845 } 00:17:30.845 ] 00:17:30.845 }' 00:17:30.845 02:40:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.845 02:40:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.421 02:40:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.685 [2024-07-11 02:40:56.630353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.685 BaseBdev3 00:17:31.685 02:40:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:31.685 02:40:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:31.685 02:40:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.685 
02:40:56 -- common/autotest_common.sh@889 -- # local i 00:17:31.685 02:40:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.685 02:40:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.685 02:40:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.944 02:40:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:32.201 [ 00:17:32.201 { 00:17:32.201 "name": "BaseBdev3", 00:17:32.201 "aliases": [ 00:17:32.201 "4234a18c-6f60-4f0a-be4e-fb5f284baf70" 00:17:32.201 ], 00:17:32.201 "product_name": "Malloc disk", 00:17:32.201 "block_size": 512, 00:17:32.201 "num_blocks": 65536, 00:17:32.201 "uuid": "4234a18c-6f60-4f0a-be4e-fb5f284baf70", 00:17:32.201 "assigned_rate_limits": { 00:17:32.201 "rw_ios_per_sec": 0, 00:17:32.201 "rw_mbytes_per_sec": 0, 00:17:32.202 "r_mbytes_per_sec": 0, 00:17:32.202 "w_mbytes_per_sec": 0 00:17:32.202 }, 00:17:32.202 "claimed": true, 00:17:32.202 "claim_type": "exclusive_write", 00:17:32.202 "zoned": false, 00:17:32.202 "supported_io_types": { 00:17:32.202 "read": true, 00:17:32.202 "write": true, 00:17:32.202 "unmap": true, 00:17:32.202 "write_zeroes": true, 00:17:32.202 "flush": true, 00:17:32.202 "reset": true, 00:17:32.202 "compare": false, 00:17:32.202 "compare_and_write": false, 00:17:32.202 "abort": true, 00:17:32.202 "nvme_admin": false, 00:17:32.202 "nvme_io": false 00:17:32.202 }, 00:17:32.202 "memory_domains": [ 00:17:32.202 { 00:17:32.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.202 "dma_device_type": 2 00:17:32.202 } 00:17:32.202 ], 00:17:32.202 "driver_specific": {} 00:17:32.202 } 00:17:32.202 ] 00:17:32.202 02:40:57 -- common/autotest_common.sh@895 -- # return 0 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.202 02:40:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.459 02:40:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.459 "name": "Existed_Raid", 00:17:32.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.459 "strip_size_kb": 64, 00:17:32.459 "state": "configuring", 00:17:32.459 "raid_level": "concat", 00:17:32.459 "superblock": false, 00:17:32.459 "num_base_bdevs": 4, 00:17:32.459 "num_base_bdevs_discovered": 3, 00:17:32.459 "num_base_bdevs_operational": 4, 00:17:32.459 "base_bdevs_list": [ 00:17:32.459 { 00:17:32.459 "name": 
"BaseBdev1", 00:17:32.459 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:32.459 "is_configured": true, 00:17:32.459 "data_offset": 0, 00:17:32.459 "data_size": 65536 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "name": "BaseBdev2", 00:17:32.459 "uuid": "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4", 00:17:32.459 "is_configured": true, 00:17:32.459 "data_offset": 0, 00:17:32.459 "data_size": 65536 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "name": "BaseBdev3", 00:17:32.460 "uuid": "4234a18c-6f60-4f0a-be4e-fb5f284baf70", 00:17:32.460 "is_configured": true, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 65536 00:17:32.460 }, 00:17:32.460 { 00:17:32.460 "name": "BaseBdev4", 00:17:32.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.460 "is_configured": false, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 0 00:17:32.460 } 00:17:32.460 ] 00:17:32.460 }' 00:17:32.460 02:40:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.460 02:40:57 -- common/autotest_common.sh@10 -- # set +x 00:17:33.025 02:40:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:33.283 [2024-07-11 02:40:58.191197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:33.283 [2024-07-11 02:40:58.191270] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:17:33.283 [2024-07-11 02:40:58.191281] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:33.283 [2024-07-11 02:40:58.191428] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:33.283 [2024-07-11 02:40:58.191915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:17:33.283 [2024-07-11 02:40:58.191938] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:17:33.283 [2024-07-11 02:40:58.192228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.283 BaseBdev4 00:17:33.283 02:40:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:33.283 02:40:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:33.283 02:40:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.283 02:40:58 -- common/autotest_common.sh@889 -- # local i 00:17:33.283 02:40:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.283 02:40:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.283 02:40:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.543 02:40:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:33.543 [ 00:17:33.543 { 00:17:33.543 "name": "BaseBdev4", 00:17:33.543 "aliases": [ 00:17:33.543 "4bbc11eb-d10e-420b-95b9-5f7430c787f2" 00:17:33.543 ], 00:17:33.543 "product_name": "Malloc disk", 00:17:33.543 "block_size": 512, 00:17:33.543 "num_blocks": 65536, 00:17:33.543 "uuid": "4bbc11eb-d10e-420b-95b9-5f7430c787f2", 00:17:33.543 "assigned_rate_limits": { 00:17:33.543 "rw_ios_per_sec": 0, 00:17:33.543 "rw_mbytes_per_sec": 0, 00:17:33.543 "r_mbytes_per_sec": 0, 00:17:33.543 "w_mbytes_per_sec": 0 00:17:33.543 }, 00:17:33.543 "claimed": true, 00:17:33.543 "claim_type": "exclusive_write", 00:17:33.543 "zoned": false, 00:17:33.543 
"supported_io_types": { 00:17:33.543 "read": true, 00:17:33.543 "write": true, 00:17:33.543 "unmap": true, 00:17:33.543 "write_zeroes": true, 00:17:33.543 "flush": true, 00:17:33.543 "reset": true, 00:17:33.543 "compare": false, 00:17:33.543 "compare_and_write": false, 00:17:33.543 "abort": true, 00:17:33.543 "nvme_admin": false, 00:17:33.543 "nvme_io": false 00:17:33.543 }, 00:17:33.543 "memory_domains": [ 00:17:33.543 { 00:17:33.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.543 "dma_device_type": 2 00:17:33.543 } 00:17:33.543 ], 00:17:33.543 "driver_specific": {} 00:17:33.543 } 00:17:33.543 ] 00:17:33.543 02:40:58 -- common/autotest_common.sh@895 -- # return 0 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.543 02:40:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.802 02:40:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.802 "name": "Existed_Raid", 00:17:33.802 "uuid": "1d25c3d8-2816-4768-8f9e-8179f83cb373", 00:17:33.802 "strip_size_kb": 64, 00:17:33.802 "state": "online", 00:17:33.802 "raid_level": "concat", 00:17:33.802 "superblock": false, 00:17:33.802 "num_base_bdevs": 4, 00:17:33.802 "num_base_bdevs_discovered": 4, 00:17:33.802 "num_base_bdevs_operational": 4, 00:17:33.802 "base_bdevs_list": [ 00:17:33.802 { 00:17:33.802 "name": "BaseBdev1", 00:17:33.802 "uuid": "d24125bb-4250-4276-9032-babab9aae6cd", 00:17:33.802 "is_configured": true, 00:17:33.802 "data_offset": 0, 00:17:33.802 "data_size": 65536 00:17:33.802 }, 00:17:33.802 { 00:17:33.802 "name": "BaseBdev2", 00:17:33.802 "uuid": "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4", 00:17:33.802 "is_configured": true, 00:17:33.802 "data_offset": 0, 00:17:33.802 "data_size": 65536 00:17:33.802 }, 00:17:33.802 { 00:17:33.802 "name": "BaseBdev3", 00:17:33.802 "uuid": "4234a18c-6f60-4f0a-be4e-fb5f284baf70", 00:17:33.802 "is_configured": true, 00:17:33.802 "data_offset": 0, 00:17:33.802 "data_size": 65536 00:17:33.802 }, 00:17:33.802 { 00:17:33.802 "name": "BaseBdev4", 00:17:33.802 "uuid": "4bbc11eb-d10e-420b-95b9-5f7430c787f2", 00:17:33.802 "is_configured": true, 00:17:33.802 "data_offset": 0, 00:17:33.802 "data_size": 65536 00:17:33.802 } 00:17:33.802 ] 00:17:33.802 }' 00:17:33.802 02:40:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.802 02:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:34.369 02:40:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:34.627 [2024-07-11 02:40:59.662148] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.627 [2024-07-11 02:40:59.662184] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.627 [2024-07-11 02:40:59.662289] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.627 02:40:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.886 02:40:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.886 "name": "Existed_Raid", 00:17:34.886 "uuid": "1d25c3d8-2816-4768-8f9e-8179f83cb373", 00:17:34.886 "strip_size_kb": 64, 00:17:34.886 "state": "offline", 00:17:34.886 "raid_level": "concat", 00:17:34.886 "superblock": false, 00:17:34.886 "num_base_bdevs": 4, 00:17:34.886 "num_base_bdevs_discovered": 3, 00:17:34.886 "num_base_bdevs_operational": 3, 00:17:34.886 "base_bdevs_list": [ 00:17:34.886 { 00:17:34.886 "name": null, 00:17:34.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.886 "is_configured": false, 00:17:34.886 "data_offset": 0, 00:17:34.886 "data_size": 65536 00:17:34.886 }, 00:17:34.886 { 00:17:34.886 "name": "BaseBdev2", 00:17:34.886 "uuid": "a94b2bc6-e40b-4b04-b6fa-2e12737e7af4", 00:17:34.886 "is_configured": true, 00:17:34.886 "data_offset": 0, 00:17:34.886 "data_size": 65536 00:17:34.886 }, 00:17:34.886 { 00:17:34.886 "name": "BaseBdev3", 00:17:34.886 "uuid": "4234a18c-6f60-4f0a-be4e-fb5f284baf70", 00:17:34.886 "is_configured": true, 00:17:34.886 "data_offset": 0, 00:17:34.886 "data_size": 65536 00:17:34.886 }, 00:17:34.886 { 00:17:34.886 "name": "BaseBdev4", 00:17:34.886 "uuid": "4bbc11eb-d10e-420b-95b9-5f7430c787f2", 00:17:34.886 "is_configured": true, 00:17:34.886 "data_offset": 0, 00:17:34.886 "data_size": 65536 00:17:34.886 } 00:17:34.886 ] 00:17:34.886 }' 00:17:34.886 02:40:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.886 02:40:59 -- common/autotest_common.sh@10 -- # set +x 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:35.821 02:41:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:36.080 [2024-07-11 02:41:01.028088] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:36.080 02:41:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:36.080 02:41:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:36.080 02:41:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:36.080 02:41:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.339 02:41:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:36.339 02:41:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.339 02:41:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:36.598 [2024-07-11 02:41:01.533476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.598 02:41:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:36.598 02:41:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:36.598 02:41:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.598 02:41:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:36.857 02:41:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:36.857 02:41:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.857 02:41:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:37.116 [2024-07-11 02:41:02.022040] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:37.116 [2024-07-11 02:41:02.022125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:17:37.116 02:41:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:37.116 02:41:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:37.116 02:41:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.116 02:41:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:37.375 02:41:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:37.375 02:41:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:37.375 02:41:02 -- bdev/bdev_raid.sh@287 -- # killprocess 132248 00:17:37.375 02:41:02 -- common/autotest_common.sh@926 -- # '[' -z 132248 ']' 00:17:37.375 02:41:02 -- common/autotest_common.sh@930 -- # kill -0 132248 00:17:37.375 02:41:02 -- common/autotest_common.sh@931 -- # uname 00:17:37.375 02:41:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:37.375 02:41:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132248 00:17:37.375 killing process with pid 132248 00:17:37.375 02:41:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:37.375 02:41:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:37.375 02:41:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132248' 00:17:37.375 02:41:02 -- common/autotest_common.sh@945 
-- # kill 132248 00:17:37.375 02:41:02 -- common/autotest_common.sh@950 -- # wait 132248 00:17:37.375 [2024-07-11 02:41:02.313617] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.375 [2024-07-11 02:41:02.313759] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.634 ************************************ 00:17:37.634 END TEST raid_state_function_test 00:17:37.634 ************************************ 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:37.634 00:17:37.634 real 0m13.243s 00:17:37.634 user 0m24.835s 00:17:37.634 sys 0m1.498s 00:17:37.634 02:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.634 02:41:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:37.634 02:41:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:37.634 02:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:37.634 02:41:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.634 ************************************ 00:17:37.634 START TEST raid_state_function_test_sb 00:17:37.634 ************************************ 00:17:37.634 02:41:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=132714 00:17:37.634 Process raid pid: 132714 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132714' 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132714 /var/tmp/spdk-raid.sock 00:17:37.634 02:41:02 -- common/autotest_common.sh@819 -- # '[' -z 132714 ']' 00:17:37.634 02:41:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.634 02:41:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.634 02:41:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.634 02:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.634 02:41:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.634 02:41:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:37.634 [2024-07-11 02:41:02.649990] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:37.634 [2024-07-11 02:41:02.651014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.893 [2024-07-11 02:41:02.797313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.893 [2024-07-11 02:41:02.865797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.893 [2024-07-11 02:41:02.922842] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.460 02:41:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.460 02:41:03 -- common/autotest_common.sh@852 -- # return 0 00:17:38.460 02:41:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:38.718 [2024-07-11 02:41:03.706512] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.718 [2024-07-11 02:41:03.707077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.718 [2024-07-11 02:41:03.707232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.718 [2024-07-11 02:41:03.707378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.718 [2024-07-11 02:41:03.707573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.718 [2024-07-11 02:41:03.707745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.718 [2024-07-11 02:41:03.707847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.718 [2024-07-11 02:41:03.707995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:38.718 
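The bdev_raid_create above differs from the earlier non-superblock run only in the -s flag, which asks the raid module to store a superblock on each base bdev. Since none of BaseBdev1..BaseBdev4 exist yet, the raid bdev is registered but held in the "configuring" state that the surrounding verify_raid_bdev_state trace checks for; it only goes online later, once all four members have been added. A hand-run sketch of the same sequence (commands as issued in this run, rpc.py path shortened):

$ scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$ scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect: configuring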
02:41:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.718 02:41:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.977 02:41:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.977 "name": "Existed_Raid", 00:17:38.977 "uuid": "4221a54f-b5de-4612-a432-9b018c899d06", 00:17:38.977 "strip_size_kb": 64, 00:17:38.977 "state": "configuring", 00:17:38.977 "raid_level": "concat", 00:17:38.977 "superblock": true, 00:17:38.977 "num_base_bdevs": 4, 00:17:38.977 "num_base_bdevs_discovered": 0, 00:17:38.977 "num_base_bdevs_operational": 4, 00:17:38.977 "base_bdevs_list": [ 00:17:38.977 { 00:17:38.977 "name": "BaseBdev1", 00:17:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.977 "is_configured": false, 00:17:38.977 "data_offset": 0, 00:17:38.977 "data_size": 0 00:17:38.977 }, 00:17:38.977 { 00:17:38.977 "name": "BaseBdev2", 00:17:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.977 "is_configured": false, 00:17:38.977 "data_offset": 0, 00:17:38.977 "data_size": 0 00:17:38.977 }, 00:17:38.977 { 00:17:38.977 "name": "BaseBdev3", 00:17:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.977 "is_configured": false, 00:17:38.977 "data_offset": 0, 00:17:38.977 "data_size": 0 00:17:38.977 }, 00:17:38.977 { 00:17:38.977 "name": "BaseBdev4", 00:17:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.977 "is_configured": false, 00:17:38.977 "data_offset": 0, 00:17:38.977 "data_size": 0 00:17:38.977 } 00:17:38.977 ] 00:17:38.977 }' 00:17:38.977 02:41:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.977 02:41:03 -- common/autotest_common.sh@10 -- # set +x 00:17:39.546 02:41:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.805 [2024-07-11 02:41:04.744098] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.805 [2024-07-11 02:41:04.744294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:39.805 02:41:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:40.063 [2024-07-11 02:41:05.012150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.063 [2024-07-11 02:41:05.012682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.063 [2024-07-11 02:41:05.012819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.063 [2024-07-11 02:41:05.012973] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.063 [2024-07-11 02:41:05.013123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.063 [2024-07-11 02:41:05.013264] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.063 [2024-07-11 02:41:05.013364] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.063 [2024-07-11 02:41:05.013509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.063 02:41:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.321 [2024-07-11 02:41:05.258938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.321 BaseBdev1 00:17:40.321 02:41:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:40.321 02:41:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:40.321 02:41:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:40.321 02:41:05 -- common/autotest_common.sh@889 -- # local i 00:17:40.321 02:41:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:40.321 02:41:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:40.321 02:41:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.580 02:41:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.838 [ 00:17:40.838 { 00:17:40.838 "name": "BaseBdev1", 00:17:40.838 "aliases": [ 00:17:40.838 "612a7344-bb1f-4402-b17b-cae65b0b3cae" 00:17:40.838 ], 00:17:40.838 "product_name": "Malloc disk", 00:17:40.838 "block_size": 512, 00:17:40.838 "num_blocks": 65536, 00:17:40.838 "uuid": "612a7344-bb1f-4402-b17b-cae65b0b3cae", 00:17:40.838 "assigned_rate_limits": { 00:17:40.838 "rw_ios_per_sec": 0, 00:17:40.838 "rw_mbytes_per_sec": 0, 00:17:40.838 "r_mbytes_per_sec": 0, 00:17:40.838 "w_mbytes_per_sec": 0 00:17:40.838 }, 00:17:40.838 "claimed": true, 00:17:40.838 "claim_type": "exclusive_write", 00:17:40.838 "zoned": false, 00:17:40.838 "supported_io_types": { 00:17:40.838 "read": true, 00:17:40.838 "write": true, 00:17:40.838 "unmap": true, 00:17:40.838 "write_zeroes": true, 00:17:40.838 "flush": true, 00:17:40.838 "reset": true, 00:17:40.838 "compare": false, 00:17:40.838 "compare_and_write": false, 00:17:40.838 "abort": true, 00:17:40.838 "nvme_admin": false, 00:17:40.838 "nvme_io": false 00:17:40.838 }, 00:17:40.838 "memory_domains": [ 00:17:40.838 { 00:17:40.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.838 "dma_device_type": 2 00:17:40.838 } 00:17:40.838 ], 00:17:40.838 "driver_specific": {} 00:17:40.838 } 00:17:40.838 ] 00:17:40.838 02:41:05 -- common/autotest_common.sh@895 -- # return 0 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.838 02:41:05 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.838 02:41:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.838 "name": "Existed_Raid", 00:17:40.838 "uuid": "d8e93bad-ec37-4a8e-979a-68d741d23020", 00:17:40.838 "strip_size_kb": 64, 00:17:40.838 "state": "configuring", 00:17:40.838 "raid_level": "concat", 00:17:40.838 "superblock": true, 00:17:40.838 "num_base_bdevs": 4, 00:17:40.838 "num_base_bdevs_discovered": 1, 00:17:40.838 "num_base_bdevs_operational": 4, 00:17:40.838 "base_bdevs_list": [ 00:17:40.838 { 00:17:40.838 "name": "BaseBdev1", 00:17:40.838 "uuid": "612a7344-bb1f-4402-b17b-cae65b0b3cae", 00:17:40.838 "is_configured": true, 00:17:40.838 "data_offset": 2048, 00:17:40.838 "data_size": 63488 00:17:40.838 }, 00:17:40.838 { 00:17:40.838 "name": "BaseBdev2", 00:17:40.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.838 "is_configured": false, 00:17:40.838 "data_offset": 0, 00:17:40.838 "data_size": 0 00:17:40.838 }, 00:17:40.838 { 00:17:40.838 "name": "BaseBdev3", 00:17:40.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.839 "is_configured": false, 00:17:40.839 "data_offset": 0, 00:17:40.839 "data_size": 0 00:17:40.839 }, 00:17:40.839 { 00:17:40.839 "name": "BaseBdev4", 00:17:40.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.839 "is_configured": false, 00:17:40.839 "data_offset": 0, 00:17:40.839 "data_size": 0 00:17:40.839 } 00:17:40.839 ] 00:17:40.839 }' 00:17:40.839 02:41:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.839 02:41:05 -- common/autotest_common.sh@10 -- # set +x 00:17:41.773 02:41:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.773 [2024-07-11 02:41:06.775314] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.773 [2024-07-11 02:41:06.775518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:17:41.773 02:41:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:41.773 02:41:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:42.030 02:41:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.288 BaseBdev1 00:17:42.288 02:41:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:42.288 02:41:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:42.288 02:41:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:42.288 02:41:07 -- common/autotest_common.sh@889 -- # local i 00:17:42.288 02:41:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:42.288 02:41:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:42.288 02:41:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.545 02:41:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.545 [ 00:17:42.545 { 00:17:42.545 "name": "BaseBdev1", 00:17:42.545 "aliases": [ 00:17:42.545 
"fc474984-138a-46e2-90a9-1a6a16d08050" 00:17:42.545 ], 00:17:42.545 "product_name": "Malloc disk", 00:17:42.545 "block_size": 512, 00:17:42.545 "num_blocks": 65536, 00:17:42.545 "uuid": "fc474984-138a-46e2-90a9-1a6a16d08050", 00:17:42.545 "assigned_rate_limits": { 00:17:42.545 "rw_ios_per_sec": 0, 00:17:42.545 "rw_mbytes_per_sec": 0, 00:17:42.545 "r_mbytes_per_sec": 0, 00:17:42.545 "w_mbytes_per_sec": 0 00:17:42.545 }, 00:17:42.545 "claimed": false, 00:17:42.545 "zoned": false, 00:17:42.545 "supported_io_types": { 00:17:42.545 "read": true, 00:17:42.545 "write": true, 00:17:42.545 "unmap": true, 00:17:42.545 "write_zeroes": true, 00:17:42.545 "flush": true, 00:17:42.545 "reset": true, 00:17:42.545 "compare": false, 00:17:42.545 "compare_and_write": false, 00:17:42.545 "abort": true, 00:17:42.545 "nvme_admin": false, 00:17:42.545 "nvme_io": false 00:17:42.545 }, 00:17:42.545 "memory_domains": [ 00:17:42.545 { 00:17:42.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.545 "dma_device_type": 2 00:17:42.545 } 00:17:42.545 ], 00:17:42.545 "driver_specific": {} 00:17:42.545 } 00:17:42.545 ] 00:17:42.545 02:41:07 -- common/autotest_common.sh@895 -- # return 0 00:17:42.545 02:41:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.803 [2024-07-11 02:41:07.830508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.803 [2024-07-11 02:41:07.832437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.803 [2024-07-11 02:41:07.833070] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.803 [2024-07-11 02:41:07.833241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.803 [2024-07-11 02:41:07.833401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.803 [2024-07-11 02:41:07.833572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.803 [2024-07-11 02:41:07.833760] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.803 02:41:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.060 02:41:08 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:43.060 "name": "Existed_Raid", 00:17:43.060 "uuid": "fa800e18-82ff-4ab7-ace4-74b832917708", 00:17:43.060 "strip_size_kb": 64, 00:17:43.060 "state": "configuring", 00:17:43.060 "raid_level": "concat", 00:17:43.060 "superblock": true, 00:17:43.060 "num_base_bdevs": 4, 00:17:43.060 "num_base_bdevs_discovered": 1, 00:17:43.060 "num_base_bdevs_operational": 4, 00:17:43.060 "base_bdevs_list": [ 00:17:43.060 { 00:17:43.060 "name": "BaseBdev1", 00:17:43.060 "uuid": "fc474984-138a-46e2-90a9-1a6a16d08050", 00:17:43.060 "is_configured": true, 00:17:43.060 "data_offset": 2048, 00:17:43.060 "data_size": 63488 00:17:43.060 }, 00:17:43.060 { 00:17:43.060 "name": "BaseBdev2", 00:17:43.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.060 "is_configured": false, 00:17:43.060 "data_offset": 0, 00:17:43.060 "data_size": 0 00:17:43.060 }, 00:17:43.060 { 00:17:43.060 "name": "BaseBdev3", 00:17:43.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.060 "is_configured": false, 00:17:43.060 "data_offset": 0, 00:17:43.060 "data_size": 0 00:17:43.060 }, 00:17:43.060 { 00:17:43.060 "name": "BaseBdev4", 00:17:43.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.061 "is_configured": false, 00:17:43.061 "data_offset": 0, 00:17:43.061 "data_size": 0 00:17:43.061 } 00:17:43.061 ] 00:17:43.061 }' 00:17:43.061 02:41:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.061 02:41:08 -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 02:41:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.883 [2024-07-11 02:41:08.906130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.883 BaseBdev2 00:17:43.883 02:41:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:43.883 02:41:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:43.883 02:41:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.883 02:41:08 -- common/autotest_common.sh@889 -- # local i 00:17:43.883 02:41:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.883 02:41:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.883 02:41:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.139 02:41:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.397 [ 00:17:44.397 { 00:17:44.397 "name": "BaseBdev2", 00:17:44.397 "aliases": [ 00:17:44.397 "5134ff31-58c2-4dff-92f2-77d35cc0c498" 00:17:44.397 ], 00:17:44.397 "product_name": "Malloc disk", 00:17:44.397 "block_size": 512, 00:17:44.397 "num_blocks": 65536, 00:17:44.397 "uuid": "5134ff31-58c2-4dff-92f2-77d35cc0c498", 00:17:44.397 "assigned_rate_limits": { 00:17:44.397 "rw_ios_per_sec": 0, 00:17:44.397 "rw_mbytes_per_sec": 0, 00:17:44.397 "r_mbytes_per_sec": 0, 00:17:44.397 "w_mbytes_per_sec": 0 00:17:44.397 }, 00:17:44.397 "claimed": true, 00:17:44.397 "claim_type": "exclusive_write", 00:17:44.397 "zoned": false, 00:17:44.397 "supported_io_types": { 00:17:44.397 "read": true, 00:17:44.397 "write": true, 00:17:44.397 "unmap": true, 00:17:44.397 "write_zeroes": true, 00:17:44.397 "flush": true, 00:17:44.397 "reset": true, 00:17:44.397 "compare": false, 00:17:44.397 "compare_and_write": false, 00:17:44.397 "abort": true, 00:17:44.397 "nvme_admin": false, 00:17:44.397 
"nvme_io": false 00:17:44.397 }, 00:17:44.397 "memory_domains": [ 00:17:44.397 { 00:17:44.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.397 "dma_device_type": 2 00:17:44.397 } 00:17:44.397 ], 00:17:44.397 "driver_specific": {} 00:17:44.397 } 00:17:44.397 ] 00:17:44.397 02:41:09 -- common/autotest_common.sh@895 -- # return 0 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.397 02:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.656 02:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.656 "name": "Existed_Raid", 00:17:44.656 "uuid": "fa800e18-82ff-4ab7-ace4-74b832917708", 00:17:44.656 "strip_size_kb": 64, 00:17:44.656 "state": "configuring", 00:17:44.656 "raid_level": "concat", 00:17:44.656 "superblock": true, 00:17:44.656 "num_base_bdevs": 4, 00:17:44.656 "num_base_bdevs_discovered": 2, 00:17:44.656 "num_base_bdevs_operational": 4, 00:17:44.656 "base_bdevs_list": [ 00:17:44.656 { 00:17:44.656 "name": "BaseBdev1", 00:17:44.656 "uuid": "fc474984-138a-46e2-90a9-1a6a16d08050", 00:17:44.656 "is_configured": true, 00:17:44.656 "data_offset": 2048, 00:17:44.656 "data_size": 63488 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev2", 00:17:44.656 "uuid": "5134ff31-58c2-4dff-92f2-77d35cc0c498", 00:17:44.656 "is_configured": true, 00:17:44.656 "data_offset": 2048, 00:17:44.656 "data_size": 63488 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev3", 00:17:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.656 "is_configured": false, 00:17:44.656 "data_offset": 0, 00:17:44.656 "data_size": 0 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev4", 00:17:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.656 "is_configured": false, 00:17:44.656 "data_offset": 0, 00:17:44.656 "data_size": 0 00:17:44.656 } 00:17:44.656 ] 00:17:44.656 }' 00:17:44.656 02:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.656 02:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:45.222 02:41:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.481 [2024-07-11 02:41:10.406766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.481 BaseBdev3 00:17:45.481 02:41:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:45.481 02:41:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:45.481 02:41:10 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.481 02:41:10 -- common/autotest_common.sh@889 -- # local i 00:17:45.481 02:41:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.481 02:41:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.481 02:41:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.740 02:41:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.999 [ 00:17:45.999 { 00:17:45.999 "name": "BaseBdev3", 00:17:45.999 "aliases": [ 00:17:45.999 "09b17dc1-f980-4dbd-860c-54a61a662425" 00:17:45.999 ], 00:17:45.999 "product_name": "Malloc disk", 00:17:45.999 "block_size": 512, 00:17:45.999 "num_blocks": 65536, 00:17:45.999 "uuid": "09b17dc1-f980-4dbd-860c-54a61a662425", 00:17:45.999 "assigned_rate_limits": { 00:17:45.999 "rw_ios_per_sec": 0, 00:17:45.999 "rw_mbytes_per_sec": 0, 00:17:45.999 "r_mbytes_per_sec": 0, 00:17:45.999 "w_mbytes_per_sec": 0 00:17:45.999 }, 00:17:45.999 "claimed": true, 00:17:45.999 "claim_type": "exclusive_write", 00:17:45.999 "zoned": false, 00:17:45.999 "supported_io_types": { 00:17:45.999 "read": true, 00:17:45.999 "write": true, 00:17:45.999 "unmap": true, 00:17:45.999 "write_zeroes": true, 00:17:45.999 "flush": true, 00:17:45.999 "reset": true, 00:17:45.999 "compare": false, 00:17:45.999 "compare_and_write": false, 00:17:45.999 "abort": true, 00:17:45.999 "nvme_admin": false, 00:17:45.999 "nvme_io": false 00:17:45.999 }, 00:17:45.999 "memory_domains": [ 00:17:45.999 { 00:17:45.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.999 "dma_device_type": 2 00:17:45.999 } 00:17:45.999 ], 00:17:45.999 "driver_specific": {} 00:17:45.999 } 00:17:45.999 ] 00:17:45.999 02:41:10 -- common/autotest_common.sh@895 -- # return 0 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.999 02:41:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.258 02:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.258 "name": "Existed_Raid", 00:17:46.258 "uuid": "fa800e18-82ff-4ab7-ace4-74b832917708", 00:17:46.258 "strip_size_kb": 64, 00:17:46.258 "state": "configuring", 00:17:46.258 "raid_level": "concat", 00:17:46.258 "superblock": true, 00:17:46.258 "num_base_bdevs": 4, 00:17:46.258 "num_base_bdevs_discovered": 3, 00:17:46.258 "num_base_bdevs_operational": 4, 
00:17:46.258 "base_bdevs_list": [ 00:17:46.258 { 00:17:46.258 "name": "BaseBdev1", 00:17:46.258 "uuid": "fc474984-138a-46e2-90a9-1a6a16d08050", 00:17:46.258 "is_configured": true, 00:17:46.258 "data_offset": 2048, 00:17:46.258 "data_size": 63488 00:17:46.258 }, 00:17:46.258 { 00:17:46.258 "name": "BaseBdev2", 00:17:46.258 "uuid": "5134ff31-58c2-4dff-92f2-77d35cc0c498", 00:17:46.258 "is_configured": true, 00:17:46.258 "data_offset": 2048, 00:17:46.258 "data_size": 63488 00:17:46.258 }, 00:17:46.258 { 00:17:46.258 "name": "BaseBdev3", 00:17:46.258 "uuid": "09b17dc1-f980-4dbd-860c-54a61a662425", 00:17:46.258 "is_configured": true, 00:17:46.258 "data_offset": 2048, 00:17:46.258 "data_size": 63488 00:17:46.258 }, 00:17:46.258 { 00:17:46.258 "name": "BaseBdev4", 00:17:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.258 "is_configured": false, 00:17:46.258 "data_offset": 0, 00:17:46.258 "data_size": 0 00:17:46.258 } 00:17:46.258 ] 00:17:46.258 }' 00:17:46.258 02:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.258 02:41:11 -- common/autotest_common.sh@10 -- # set +x 00:17:46.825 02:41:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.084 [2024-07-11 02:41:12.098829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.084 [2024-07-11 02:41:12.099078] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:17:47.084 [2024-07-11 02:41:12.099093] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:47.084 [2024-07-11 02:41:12.099300] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:47.084 BaseBdev4 00:17:47.084 [2024-07-11 02:41:12.099693] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:17:47.084 [2024-07-11 02:41:12.099718] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:17:47.084 [2024-07-11 02:41:12.099904] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.084 02:41:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:47.084 02:41:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:47.084 02:41:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:47.084 02:41:12 -- common/autotest_common.sh@889 -- # local i 00:17:47.084 02:41:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:47.084 02:41:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:47.084 02:41:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.343 02:41:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.602 [ 00:17:47.602 { 00:17:47.602 "name": "BaseBdev4", 00:17:47.602 "aliases": [ 00:17:47.602 "ad6d3096-a2bd-4be7-8bba-d3140cb59c16" 00:17:47.602 ], 00:17:47.602 "product_name": "Malloc disk", 00:17:47.602 "block_size": 512, 00:17:47.602 "num_blocks": 65536, 00:17:47.602 "uuid": "ad6d3096-a2bd-4be7-8bba-d3140cb59c16", 00:17:47.602 "assigned_rate_limits": { 00:17:47.602 "rw_ios_per_sec": 0, 00:17:47.602 "rw_mbytes_per_sec": 0, 00:17:47.602 "r_mbytes_per_sec": 0, 00:17:47.602 "w_mbytes_per_sec": 0 00:17:47.602 }, 00:17:47.602 "claimed": true, 00:17:47.602 "claim_type": 
"exclusive_write", 00:17:47.602 "zoned": false, 00:17:47.602 "supported_io_types": { 00:17:47.602 "read": true, 00:17:47.602 "write": true, 00:17:47.602 "unmap": true, 00:17:47.602 "write_zeroes": true, 00:17:47.602 "flush": true, 00:17:47.602 "reset": true, 00:17:47.602 "compare": false, 00:17:47.602 "compare_and_write": false, 00:17:47.602 "abort": true, 00:17:47.602 "nvme_admin": false, 00:17:47.602 "nvme_io": false 00:17:47.602 }, 00:17:47.602 "memory_domains": [ 00:17:47.602 { 00:17:47.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.602 "dma_device_type": 2 00:17:47.602 } 00:17:47.602 ], 00:17:47.602 "driver_specific": {} 00:17:47.602 } 00:17:47.602 ] 00:17:47.602 02:41:12 -- common/autotest_common.sh@895 -- # return 0 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.602 02:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.861 02:41:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.861 "name": "Existed_Raid", 00:17:47.861 "uuid": "fa800e18-82ff-4ab7-ace4-74b832917708", 00:17:47.861 "strip_size_kb": 64, 00:17:47.861 "state": "online", 00:17:47.861 "raid_level": "concat", 00:17:47.861 "superblock": true, 00:17:47.861 "num_base_bdevs": 4, 00:17:47.861 "num_base_bdevs_discovered": 4, 00:17:47.861 "num_base_bdevs_operational": 4, 00:17:47.861 "base_bdevs_list": [ 00:17:47.861 { 00:17:47.861 "name": "BaseBdev1", 00:17:47.861 "uuid": "fc474984-138a-46e2-90a9-1a6a16d08050", 00:17:47.861 "is_configured": true, 00:17:47.861 "data_offset": 2048, 00:17:47.861 "data_size": 63488 00:17:47.861 }, 00:17:47.861 { 00:17:47.861 "name": "BaseBdev2", 00:17:47.861 "uuid": "5134ff31-58c2-4dff-92f2-77d35cc0c498", 00:17:47.861 "is_configured": true, 00:17:47.861 "data_offset": 2048, 00:17:47.861 "data_size": 63488 00:17:47.861 }, 00:17:47.861 { 00:17:47.861 "name": "BaseBdev3", 00:17:47.861 "uuid": "09b17dc1-f980-4dbd-860c-54a61a662425", 00:17:47.861 "is_configured": true, 00:17:47.861 "data_offset": 2048, 00:17:47.861 "data_size": 63488 00:17:47.861 }, 00:17:47.861 { 00:17:47.861 "name": "BaseBdev4", 00:17:47.861 "uuid": "ad6d3096-a2bd-4be7-8bba-d3140cb59c16", 00:17:47.861 "is_configured": true, 00:17:47.861 "data_offset": 2048, 00:17:47.861 "data_size": 63488 00:17:47.861 } 00:17:47.861 ] 00:17:47.861 }' 00:17:47.861 02:41:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.861 02:41:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.430 02:41:13 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.689 [2024-07-11 02:41:13.626174] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.689 [2024-07-11 02:41:13.626209] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.689 [2024-07-11 02:41:13.626295] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.689 02:41:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.948 02:41:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.948 "name": "Existed_Raid", 00:17:48.948 "uuid": "fa800e18-82ff-4ab7-ace4-74b832917708", 00:17:48.948 "strip_size_kb": 64, 00:17:48.948 "state": "offline", 00:17:48.948 "raid_level": "concat", 00:17:48.948 "superblock": true, 00:17:48.948 "num_base_bdevs": 4, 00:17:48.948 "num_base_bdevs_discovered": 3, 00:17:48.948 "num_base_bdevs_operational": 3, 00:17:48.948 "base_bdevs_list": [ 00:17:48.948 { 00:17:48.948 "name": null, 00:17:48.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.948 "is_configured": false, 00:17:48.948 "data_offset": 2048, 00:17:48.948 "data_size": 63488 00:17:48.948 }, 00:17:48.948 { 00:17:48.948 "name": "BaseBdev2", 00:17:48.948 "uuid": "5134ff31-58c2-4dff-92f2-77d35cc0c498", 00:17:48.948 "is_configured": true, 00:17:48.948 "data_offset": 2048, 00:17:48.948 "data_size": 63488 00:17:48.948 }, 00:17:48.948 { 00:17:48.948 "name": "BaseBdev3", 00:17:48.948 "uuid": "09b17dc1-f980-4dbd-860c-54a61a662425", 00:17:48.948 "is_configured": true, 00:17:48.948 "data_offset": 2048, 00:17:48.948 "data_size": 63488 00:17:48.948 }, 00:17:48.948 { 00:17:48.948 "name": "BaseBdev4", 00:17:48.948 "uuid": "ad6d3096-a2bd-4be7-8bba-d3140cb59c16", 00:17:48.948 "is_configured": true, 00:17:48.948 "data_offset": 2048, 00:17:48.948 "data_size": 63488 00:17:48.948 } 00:17:48.948 ] 00:17:48.948 }' 00:17:48.948 02:41:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.948 02:41:13 -- common/autotest_common.sh@10 -- # set +x 00:17:49.515 02:41:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.515 02:41:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.515 02:41:14 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.515 02:41:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.774 02:41:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.774 02:41:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.774 02:41:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.034 [2024-07-11 02:41:15.048774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.034 02:41:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.034 02:41:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.034 02:41:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.034 02:41:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.295 02:41:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.295 02:41:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.295 02:41:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.553 [2024-07-11 02:41:15.486263] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.553 02:41:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.553 02:41:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.553 02:41:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.553 02:41:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.812 02:41:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.812 02:41:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.812 02:41:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:51.071 [2024-07-11 02:41:16.000463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.071 [2024-07-11 02:41:16.000520] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:17:51.071 02:41:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:51.071 02:41:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:51.071 02:41:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.071 02:41:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.330 02:41:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:51.330 02:41:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:51.330 02:41:16 -- bdev/bdev_raid.sh@287 -- # killprocess 132714 00:17:51.330 02:41:16 -- common/autotest_common.sh@926 -- # '[' -z 132714 ']' 00:17:51.330 02:41:16 -- common/autotest_common.sh@930 -- # kill -0 132714 00:17:51.330 02:41:16 -- common/autotest_common.sh@931 -- # uname 00:17:51.330 02:41:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.330 02:41:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132714 00:17:51.330 killing process with pid 132714 00:17:51.330 02:41:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:51.330 02:41:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:51.330 02:41:16 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 132714' 00:17:51.330 02:41:16 -- common/autotest_common.sh@945 -- # kill 132714 00:17:51.330 02:41:16 -- common/autotest_common.sh@950 -- # wait 132714 00:17:51.330 [2024-07-11 02:41:16.283231] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.330 [2024-07-11 02:41:16.283344] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.589 ************************************ 00:17:51.589 END TEST raid_state_function_test_sb 00:17:51.589 ************************************ 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:51.589 00:17:51.589 real 0m13.934s 00:17:51.589 user 0m25.959s 00:17:51.589 sys 0m1.639s 00:17:51.589 02:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.589 02:41:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:51.589 02:41:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:51.589 02:41:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.589 02:41:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.589 ************************************ 00:17:51.589 START TEST raid_superblock_test 00:17:51.589 ************************************ 00:17:51.589 02:41:16 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@357 -- # raid_pid=133173 00:17:51.589 02:41:16 -- bdev/bdev_raid.sh@358 -- # waitforlisten 133173 /var/tmp/spdk-raid.sock 00:17:51.589 02:41:16 -- common/autotest_common.sh@819 -- # '[' -z 133173 ']' 00:17:51.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.589 02:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.589 02:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.589 02:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
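Each test in this file gets a fresh SPDK app: the bdev_svc that served raid_state_function_test_sb (pid 132714) is killed above, and raid_superblock_test immediately launches its own instance (pid 133173) on the same UNIX socket, with waitforlisten blocking until the RPC socket accepts connections. The lifecycle, reduced to a sketch (flags as used in this run; backgrounding and pid bookkeeping belong to the harness and are simplified here):

$ test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
$ raid_pid=$!
$ # ... drive the test through scripts/rpc.py -s /var/tmp/spdk-raid.sock ...
$ kill "$raid_pid" && wait "$raid_pid"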
00:17:51.589 02:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.589 02:41:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.589 [2024-07-11 02:41:16.622222] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:51.589 [2024-07-11 02:41:16.622948] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133173 ] 00:17:51.857 [2024-07-11 02:41:16.765009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.857 [2024-07-11 02:41:16.838788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.858 [2024-07-11 02:41:16.896528] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.430 02:41:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.430 02:41:17 -- common/autotest_common.sh@852 -- # return 0 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.430 02:41:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:52.688 malloc1 00:17:52.688 02:41:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.946 [2024-07-11 02:41:17.882021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.946 [2024-07-11 02:41:17.882247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.946 [2024-07-11 02:41:17.882382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:17:52.946 [2024-07-11 02:41:17.882508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.946 [2024-07-11 02:41:17.884701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.946 [2024-07-11 02:41:17.884866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.946 pt1 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.946 02:41:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:53.205 malloc2 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.205 [2024-07-11 02:41:18.275934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.205 [2024-07-11 02:41:18.276188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.205 [2024-07-11 02:41:18.276258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:53.205 [2024-07-11 02:41:18.276485] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.205 [2024-07-11 02:41:18.278710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.205 [2024-07-11 02:41:18.278868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.205 pt2 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.205 02:41:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:53.463 malloc3 00:17:53.463 02:41:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.720 [2024-07-11 02:41:18.671175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.721 [2024-07-11 02:41:18.671405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.721 [2024-07-11 02:41:18.671567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:53.721 [2024-07-11 02:41:18.671700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.721 [2024-07-11 02:41:18.673814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.721 [2024-07-11 02:41:18.673996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.721 pt3 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.721 02:41:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:53.979 malloc4 00:17:53.979 02:41:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:54.236 [2024-07-11 02:41:19.105770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:54.236 [2024-07-11 02:41:19.105992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.236 [2024-07-11 02:41:19.106156] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:54.237 [2024-07-11 02:41:19.106304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.237 [2024-07-11 02:41:19.108347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.237 [2024-07-11 02:41:19.108512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:54.237 pt4 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:54.237 [2024-07-11 02:41:19.285891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.237 [2024-07-11 02:41:19.287639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.237 [2024-07-11 02:41:19.287831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.237 [2024-07-11 02:41:19.287975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:54.237 [2024-07-11 02:41:19.288214] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:54.237 [2024-07-11 02:41:19.288323] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:54.237 [2024-07-11 02:41:19.288473] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:54.237 [2024-07-11 02:41:19.288836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:54.237 [2024-07-11 02:41:19.288934] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:54.237 [2024-07-11 02:41:19.289146] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:54.237 02:41:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.495 02:41:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.495 "name": "raid_bdev1", 00:17:54.495 "uuid": "fb4a1289-84c5-48fe-8468-167b7789edf6", 00:17:54.495 "strip_size_kb": 64, 00:17:54.495 "state": "online", 00:17:54.495 "raid_level": "concat", 00:17:54.495 "superblock": true, 00:17:54.495 "num_base_bdevs": 4, 00:17:54.495 "num_base_bdevs_discovered": 4, 00:17:54.495 "num_base_bdevs_operational": 4, 00:17:54.495 "base_bdevs_list": [ 00:17:54.495 { 00:17:54.495 "name": "pt1", 00:17:54.495 "uuid": "50577bc1-03e5-5021-b3fb-03a32c7c49a7", 00:17:54.495 "is_configured": true, 00:17:54.495 "data_offset": 2048, 00:17:54.495 "data_size": 63488 00:17:54.495 }, 00:17:54.495 { 00:17:54.495 "name": "pt2", 00:17:54.495 "uuid": "f3200937-4124-5a29-aac8-f496ad39efb3", 00:17:54.495 "is_configured": true, 00:17:54.495 "data_offset": 2048, 00:17:54.495 "data_size": 63488 00:17:54.495 }, 00:17:54.495 { 00:17:54.495 "name": "pt3", 00:17:54.495 "uuid": "52aba57d-f47d-5a9d-9084-329233a20aa2", 00:17:54.495 "is_configured": true, 00:17:54.495 "data_offset": 2048, 00:17:54.495 "data_size": 63488 00:17:54.495 }, 00:17:54.495 { 00:17:54.495 "name": "pt4", 00:17:54.495 "uuid": "fc10d036-13d0-50fd-ae1c-17600917df8d", 00:17:54.495 "is_configured": true, 00:17:54.495 "data_offset": 2048, 00:17:54.495 "data_size": 63488 00:17:54.495 } 00:17:54.495 ] 00:17:54.495 }' 00:17:54.495 02:41:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.495 02:41:19 -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 02:41:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.429 02:41:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:55.429 [2024-07-11 02:41:20.362449] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.429 02:41:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fb4a1289-84c5-48fe-8468-167b7789edf6 00:17:55.429 02:41:20 -- bdev/bdev_raid.sh@380 -- # '[' -z fb4a1289-84c5-48fe-8468-167b7789edf6 ']' 00:17:55.429 02:41:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:55.688 [2024-07-11 02:41:20.594241] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.688 [2024-07-11 02:41:20.594392] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.688 [2024-07-11 02:41:20.594597] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.688 [2024-07-11 02:41:20.594776] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.688 [2024-07-11 02:41:20.594882] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:55.688 02:41:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.688 02:41:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.946 02:41:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:56.210 02:41:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.210 02:41:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:56.472 02:41:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.472 02:41:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:56.729 02:41:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:56.729 02:41:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:56.986 02:41:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:56.986 02:41:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:56.986 02:41:21 -- common/autotest_common.sh@640 -- # local es=0 00:17:56.986 02:41:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:56.986 02:41:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.986 02:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.986 02:41:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.986 02:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.986 02:41:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.986 02:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.986 02:41:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.986 02:41:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:56.986 02:41:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:56.986 [2024-07-11 02:41:22.034475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:56.986 [2024-07-11 02:41:22.036276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:56.986 [2024-07-11 02:41:22.036471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:56.986 [2024-07-11 02:41:22.036540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:56.986 [2024-07-11 02:41:22.036724] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:56.986 [2024-07-11 02:41:22.036914] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:56.986 [2024-07-11 02:41:22.037050] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:56.986 
[2024-07-11 02:41:22.037220] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:56.986 [2024-07-11 02:41:22.037356] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.986 [2024-07-11 02:41:22.037487] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:56.986 request: 00:17:56.986 { 00:17:56.986 "name": "raid_bdev1", 00:17:56.986 "raid_level": "concat", 00:17:56.986 "base_bdevs": [ 00:17:56.986 "malloc1", 00:17:56.986 "malloc2", 00:17:56.986 "malloc3", 00:17:56.986 "malloc4" 00:17:56.987 ], 00:17:56.987 "superblock": false, 00:17:56.987 "strip_size_kb": 64, 00:17:56.987 "method": "bdev_raid_create", 00:17:56.987 "req_id": 1 00:17:56.987 } 00:17:56.987 Got JSON-RPC error response 00:17:56.987 response: 00:17:56.987 { 00:17:56.987 "code": -17, 00:17:56.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:56.987 } 00:17:56.987 02:41:22 -- common/autotest_common.sh@643 -- # es=1 00:17:56.987 02:41:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:56.987 02:41:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:56.987 02:41:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:56.987 02:41:22 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.987 02:41:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:57.244 02:41:22 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:57.244 02:41:22 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:57.244 02:41:22 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.608 [2024-07-11 02:41:22.458502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.608 [2024-07-11 02:41:22.458720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.608 [2024-07-11 02:41:22.458852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:57.608 [2024-07-11 02:41:22.458960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.608 [2024-07-11 02:41:22.461076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.608 [2024-07-11 02:41:22.461247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.608 [2024-07-11 02:41:22.461442] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:57.608 [2024-07-11 02:41:22.461600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.608 pt1 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.608 02:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.880 02:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.880 "name": "raid_bdev1", 00:17:57.880 "uuid": "fb4a1289-84c5-48fe-8468-167b7789edf6", 00:17:57.880 "strip_size_kb": 64, 00:17:57.880 "state": "configuring", 00:17:57.880 "raid_level": "concat", 00:17:57.880 "superblock": true, 00:17:57.880 "num_base_bdevs": 4, 00:17:57.880 "num_base_bdevs_discovered": 1, 00:17:57.880 "num_base_bdevs_operational": 4, 00:17:57.880 "base_bdevs_list": [ 00:17:57.880 { 00:17:57.880 "name": "pt1", 00:17:57.880 "uuid": "50577bc1-03e5-5021-b3fb-03a32c7c49a7", 00:17:57.880 "is_configured": true, 00:17:57.880 "data_offset": 2048, 00:17:57.880 "data_size": 63488 00:17:57.880 }, 00:17:57.880 { 00:17:57.880 "name": null, 00:17:57.880 "uuid": "f3200937-4124-5a29-aac8-f496ad39efb3", 00:17:57.880 "is_configured": false, 00:17:57.880 "data_offset": 2048, 00:17:57.880 "data_size": 63488 00:17:57.880 }, 00:17:57.880 { 00:17:57.880 "name": null, 00:17:57.880 "uuid": "52aba57d-f47d-5a9d-9084-329233a20aa2", 00:17:57.880 "is_configured": false, 00:17:57.880 "data_offset": 2048, 00:17:57.880 "data_size": 63488 00:17:57.880 }, 00:17:57.880 { 00:17:57.880 "name": null, 00:17:57.880 "uuid": "fc10d036-13d0-50fd-ae1c-17600917df8d", 00:17:57.880 "is_configured": false, 00:17:57.880 "data_offset": 2048, 00:17:57.880 "data_size": 63488 00:17:57.880 } 00:17:57.880 ] 00:17:57.880 }' 00:17:57.880 02:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.880 02:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.446 02:41:23 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:58.446 02:41:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.703 [2024-07-11 02:41:23.626712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.703 [2024-07-11 02:41:23.626807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.703 [2024-07-11 02:41:23.626866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:58.703 [2024-07-11 02:41:23.626903] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.703 [2024-07-11 02:41:23.627384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.703 [2024-07-11 02:41:23.627433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.703 [2024-07-11 02:41:23.627547] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:58.703 [2024-07-11 02:41:23.627575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.703 pt2 00:17:58.703 02:41:23 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:58.961 [2024-07-11 02:41:23.841944] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.961 02:41:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.219 02:41:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.219 "name": "raid_bdev1", 00:17:59.219 "uuid": "fb4a1289-84c5-48fe-8468-167b7789edf6", 00:17:59.219 "strip_size_kb": 64, 00:17:59.219 "state": "configuring", 00:17:59.219 "raid_level": "concat", 00:17:59.219 "superblock": true, 00:17:59.219 "num_base_bdevs": 4, 00:17:59.219 "num_base_bdevs_discovered": 1, 00:17:59.219 "num_base_bdevs_operational": 4, 00:17:59.219 "base_bdevs_list": [ 00:17:59.219 { 00:17:59.219 "name": "pt1", 00:17:59.219 "uuid": "50577bc1-03e5-5021-b3fb-03a32c7c49a7", 00:17:59.219 "is_configured": true, 00:17:59.219 "data_offset": 2048, 00:17:59.219 "data_size": 63488 00:17:59.219 }, 00:17:59.219 { 00:17:59.219 "name": null, 00:17:59.219 "uuid": "f3200937-4124-5a29-aac8-f496ad39efb3", 00:17:59.219 "is_configured": false, 00:17:59.219 "data_offset": 2048, 00:17:59.219 "data_size": 63488 00:17:59.219 }, 00:17:59.219 { 00:17:59.219 "name": null, 00:17:59.219 "uuid": "52aba57d-f47d-5a9d-9084-329233a20aa2", 00:17:59.219 "is_configured": false, 00:17:59.219 "data_offset": 2048, 00:17:59.219 "data_size": 63488 00:17:59.219 }, 00:17:59.219 { 00:17:59.219 "name": null, 00:17:59.219 "uuid": "fc10d036-13d0-50fd-ae1c-17600917df8d", 00:17:59.219 "is_configured": false, 00:17:59.219 "data_offset": 2048, 00:17:59.219 "data_size": 63488 00:17:59.219 } 00:17:59.219 ] 00:17:59.219 }' 00:17:59.219 02:41:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.219 02:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:59.784 02:41:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:59.785 02:41:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:59.785 02:41:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.042 [2024-07-11 02:41:24.950276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.042 [2024-07-11 02:41:24.950796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.042 [2024-07-11 02:41:24.950983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:00.042 [2024-07-11 02:41:24.951115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.042 [2024-07-11 02:41:24.951767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.042 [2024-07-11 02:41:24.951951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.042 [2024-07-11 02:41:24.952169] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:00.042 [2024-07-11 02:41:24.952213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.042 pt2 00:18:00.042 02:41:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:00.042 02:41:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:00.042 02:41:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.299 [2024-07-11 02:41:25.198320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.299 [2024-07-11 02:41:25.198531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.299 [2024-07-11 02:41:25.198704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:00.299 [2024-07-11 02:41:25.198849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.299 [2024-07-11 02:41:25.199381] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.299 [2024-07-11 02:41:25.199576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.299 [2024-07-11 02:41:25.199765] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:00.299 [2024-07-11 02:41:25.199807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.299 pt3 00:18:00.299 02:41:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:00.299 02:41:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:00.299 02:41:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:00.299 [2024-07-11 02:41:25.386345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:00.299 [2024-07-11 02:41:25.386601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.300 [2024-07-11 02:41:25.386761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:00.300 [2024-07-11 02:41:25.386898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.300 [2024-07-11 02:41:25.387438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.300 [2024-07-11 02:41:25.387615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:00.300 [2024-07-11 02:41:25.387838] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:00.300 [2024-07-11 02:41:25.387880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:00.300 [2024-07-11 02:41:25.388038] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:00.300 [2024-07-11 02:41:25.388062] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:00.300 [2024-07-11 02:41:25.388150] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:18:00.300 [2024-07-11 02:41:25.388466] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:00.300 [2024-07-11 02:41:25.388489] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:00.300 [2024-07-11 02:41:25.388596] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:00.300 pt4 00:18:00.557 02:41:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:00.557 02:41:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.558 "name": "raid_bdev1", 00:18:00.558 "uuid": "fb4a1289-84c5-48fe-8468-167b7789edf6", 00:18:00.558 "strip_size_kb": 64, 00:18:00.558 "state": "online", 00:18:00.558 "raid_level": "concat", 00:18:00.558 "superblock": true, 00:18:00.558 "num_base_bdevs": 4, 00:18:00.558 "num_base_bdevs_discovered": 4, 00:18:00.558 "num_base_bdevs_operational": 4, 00:18:00.558 "base_bdevs_list": [ 00:18:00.558 { 00:18:00.558 "name": "pt1", 00:18:00.558 "uuid": "50577bc1-03e5-5021-b3fb-03a32c7c49a7", 00:18:00.558 "is_configured": true, 00:18:00.558 "data_offset": 2048, 00:18:00.558 "data_size": 63488 00:18:00.558 }, 00:18:00.558 { 00:18:00.558 "name": "pt2", 00:18:00.558 "uuid": "f3200937-4124-5a29-aac8-f496ad39efb3", 00:18:00.558 "is_configured": true, 00:18:00.558 "data_offset": 2048, 00:18:00.558 "data_size": 63488 00:18:00.558 }, 00:18:00.558 { 00:18:00.558 "name": "pt3", 00:18:00.558 "uuid": "52aba57d-f47d-5a9d-9084-329233a20aa2", 00:18:00.558 "is_configured": true, 00:18:00.558 "data_offset": 2048, 00:18:00.558 "data_size": 63488 00:18:00.558 }, 00:18:00.558 { 00:18:00.558 "name": "pt4", 00:18:00.558 "uuid": "fc10d036-13d0-50fd-ae1c-17600917df8d", 00:18:00.558 "is_configured": true, 00:18:00.558 "data_offset": 2048, 00:18:00.558 "data_size": 63488 00:18:00.558 } 00:18:00.558 ] 00:18:00.558 }' 00:18:00.558 02:41:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.558 02:41:25 -- common/autotest_common.sh@10 -- # set +x 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:01.491 [2024-07-11 02:41:26.451458] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@430 -- # '[' fb4a1289-84c5-48fe-8468-167b7789edf6 '!=' fb4a1289-84c5-48fe-8468-167b7789edf6 ']' 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:01.491 02:41:26 -- bdev/bdev_raid.sh@511 -- # killprocess 133173 00:18:01.491 02:41:26 -- common/autotest_common.sh@926 -- # '[' 
-z 133173 ']' 00:18:01.491 02:41:26 -- common/autotest_common.sh@930 -- # kill -0 133173 00:18:01.491 02:41:26 -- common/autotest_common.sh@931 -- # uname 00:18:01.491 02:41:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:01.491 02:41:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133173 00:18:01.491 killing process with pid 133173 00:18:01.491 02:41:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:01.491 02:41:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:01.491 02:41:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133173' 00:18:01.491 02:41:26 -- common/autotest_common.sh@945 -- # kill 133173 00:18:01.491 02:41:26 -- common/autotest_common.sh@950 -- # wait 133173 00:18:01.491 [2024-07-11 02:41:26.484134] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.491 [2024-07-11 02:41:26.484236] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.491 [2024-07-11 02:41:26.484324] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.491 [2024-07-11 02:41:26.484344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:01.491 [2024-07-11 02:41:26.522345] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.749 ************************************ 00:18:01.749 END TEST raid_superblock_test 00:18:01.749 ************************************ 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:01.749 00:18:01.749 real 0m10.177s 00:18:01.749 user 0m18.776s 00:18:01.749 sys 0m1.181s 00:18:01.749 02:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.749 02:41:26 -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:01.749 02:41:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:01.749 02:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.749 02:41:26 -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 ************************************ 00:18:01.749 START TEST raid_state_function_test 00:18:01.749 ************************************ 00:18:01.749 02:41:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:01.749 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:01.750 02:41:26 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=133492 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133492' 00:18:01.750 Process raid pid: 133492 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133492 /var/tmp/spdk-raid.sock 00:18:01.750 02:41:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:01.750 02:41:26 -- common/autotest_common.sh@819 -- # '[' -z 133492 ']' 00:18:01.750 02:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:01.750 02:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.750 02:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:01.750 02:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.750 02:41:26 -- common/autotest_common.sh@10 -- # set +x 00:18:02.007 [2024-07-11 02:41:26.879241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:18:02.007 [2024-07-11 02:41:26.879463] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.007 [2024-07-11 02:41:27.028288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.265 [2024-07-11 02:41:27.104810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.265 [2024-07-11 02:41:27.161012] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.832 02:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.832 02:41:27 -- common/autotest_common.sh@852 -- # return 0 00:18:02.832 02:41:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:03.090 [2024-07-11 02:41:27.986311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.090 [2024-07-11 02:41:27.986374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.090 [2024-07-11 02:41:27.986402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.090 [2024-07-11 02:41:27.986417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.090 [2024-07-11 02:41:27.986424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.090 [2024-07-11 02:41:27.986455] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.090 [2024-07-11 02:41:27.986464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:03.090 [2024-07-11 02:41:27.986484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.090 02:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.348 02:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.348 "name": "Existed_Raid", 00:18:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.348 "strip_size_kb": 0, 00:18:03.348 "state": "configuring", 00:18:03.348 "raid_level": "raid1", 00:18:03.348 "superblock": false, 00:18:03.348 "num_base_bdevs": 4, 00:18:03.348 "num_base_bdevs_discovered": 0, 00:18:03.348 "num_base_bdevs_operational": 4, 00:18:03.348 "base_bdevs_list": [ 00:18:03.348 { 00:18:03.348 "name": 
"BaseBdev1", 00:18:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.348 "is_configured": false, 00:18:03.348 "data_offset": 0, 00:18:03.348 "data_size": 0 00:18:03.348 }, 00:18:03.348 { 00:18:03.348 "name": "BaseBdev2", 00:18:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.348 "is_configured": false, 00:18:03.348 "data_offset": 0, 00:18:03.348 "data_size": 0 00:18:03.348 }, 00:18:03.348 { 00:18:03.348 "name": "BaseBdev3", 00:18:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.348 "is_configured": false, 00:18:03.348 "data_offset": 0, 00:18:03.348 "data_size": 0 00:18:03.348 }, 00:18:03.348 { 00:18:03.348 "name": "BaseBdev4", 00:18:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.348 "is_configured": false, 00:18:03.348 "data_offset": 0, 00:18:03.348 "data_size": 0 00:18:03.348 } 00:18:03.348 ] 00:18:03.348 }' 00:18:03.348 02:41:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.348 02:41:28 -- common/autotest_common.sh@10 -- # set +x 00:18:03.913 02:41:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.171 [2024-07-11 02:41:29.154426] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.171 [2024-07-11 02:41:29.154484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:04.171 02:41:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:04.429 [2024-07-11 02:41:29.406514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.429 [2024-07-11 02:41:29.406592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.429 [2024-07-11 02:41:29.406619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.429 [2024-07-11 02:41:29.406643] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.429 [2024-07-11 02:41:29.406652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.429 [2024-07-11 02:41:29.406668] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.429 [2024-07-11 02:41:29.406674] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.429 [2024-07-11 02:41:29.406698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.429 02:41:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.687 [2024-07-11 02:41:29.612812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.687 BaseBdev1 00:18:04.687 02:41:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:04.687 02:41:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:04.687 02:41:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:04.687 02:41:29 -- common/autotest_common.sh@889 -- # local i 00:18:04.687 02:41:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:04.687 02:41:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:04.687 02:41:29 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.945 02:41:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.204 [ 00:18:05.204 { 00:18:05.204 "name": "BaseBdev1", 00:18:05.204 "aliases": [ 00:18:05.204 "20d32c69-3441-4284-8de6-5e2d3b905a1d" 00:18:05.204 ], 00:18:05.204 "product_name": "Malloc disk", 00:18:05.204 "block_size": 512, 00:18:05.204 "num_blocks": 65536, 00:18:05.204 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:05.204 "assigned_rate_limits": { 00:18:05.204 "rw_ios_per_sec": 0, 00:18:05.204 "rw_mbytes_per_sec": 0, 00:18:05.204 "r_mbytes_per_sec": 0, 00:18:05.204 "w_mbytes_per_sec": 0 00:18:05.204 }, 00:18:05.204 "claimed": true, 00:18:05.204 "claim_type": "exclusive_write", 00:18:05.204 "zoned": false, 00:18:05.204 "supported_io_types": { 00:18:05.204 "read": true, 00:18:05.204 "write": true, 00:18:05.204 "unmap": true, 00:18:05.204 "write_zeroes": true, 00:18:05.204 "flush": true, 00:18:05.204 "reset": true, 00:18:05.204 "compare": false, 00:18:05.204 "compare_and_write": false, 00:18:05.204 "abort": true, 00:18:05.204 "nvme_admin": false, 00:18:05.204 "nvme_io": false 00:18:05.204 }, 00:18:05.204 "memory_domains": [ 00:18:05.204 { 00:18:05.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.204 "dma_device_type": 2 00:18:05.204 } 00:18:05.204 ], 00:18:05.204 "driver_specific": {} 00:18:05.204 } 00:18:05.204 ] 00:18:05.204 02:41:30 -- common/autotest_common.sh@895 -- # return 0 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.204 02:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.462 02:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.462 "name": "Existed_Raid", 00:18:05.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.462 "strip_size_kb": 0, 00:18:05.462 "state": "configuring", 00:18:05.462 "raid_level": "raid1", 00:18:05.463 "superblock": false, 00:18:05.463 "num_base_bdevs": 4, 00:18:05.463 "num_base_bdevs_discovered": 1, 00:18:05.463 "num_base_bdevs_operational": 4, 00:18:05.463 "base_bdevs_list": [ 00:18:05.463 { 00:18:05.463 "name": "BaseBdev1", 00:18:05.463 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:05.463 "is_configured": true, 00:18:05.463 "data_offset": 0, 00:18:05.463 "data_size": 65536 00:18:05.463 }, 00:18:05.463 { 00:18:05.463 "name": "BaseBdev2", 00:18:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.463 "is_configured": false, 00:18:05.463 "data_offset": 0, 00:18:05.463 "data_size": 0 00:18:05.463 }, 
00:18:05.463 { 00:18:05.463 "name": "BaseBdev3", 00:18:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.463 "is_configured": false, 00:18:05.463 "data_offset": 0, 00:18:05.463 "data_size": 0 00:18:05.463 }, 00:18:05.463 { 00:18:05.463 "name": "BaseBdev4", 00:18:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.463 "is_configured": false, 00:18:05.463 "data_offset": 0, 00:18:05.463 "data_size": 0 00:18:05.463 } 00:18:05.463 ] 00:18:05.463 }' 00:18:05.463 02:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.463 02:41:30 -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 02:41:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:06.286 [2024-07-11 02:41:31.133105] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.286 [2024-07-11 02:41:31.133183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:18:06.286 02:41:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:06.286 02:41:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:06.544 [2024-07-11 02:41:31.389192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.544 [2024-07-11 02:41:31.390877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.544 [2024-07-11 02:41:31.390942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.544 [2024-07-11 02:41:31.390969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:06.544 [2024-07-11 02:41:31.390989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.544 [2024-07-11 02:41:31.390997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.544 [2024-07-11 02:41:31.391010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.544 02:41:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.802 02:41:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.802 "name": "Existed_Raid", 00:18:06.802 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:06.802 "strip_size_kb": 0, 00:18:06.802 "state": "configuring", 00:18:06.802 "raid_level": "raid1", 00:18:06.802 "superblock": false, 00:18:06.802 "num_base_bdevs": 4, 00:18:06.802 "num_base_bdevs_discovered": 1, 00:18:06.802 "num_base_bdevs_operational": 4, 00:18:06.802 "base_bdevs_list": [ 00:18:06.802 { 00:18:06.802 "name": "BaseBdev1", 00:18:06.802 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:06.802 "is_configured": true, 00:18:06.802 "data_offset": 0, 00:18:06.802 "data_size": 65536 00:18:06.802 }, 00:18:06.802 { 00:18:06.802 "name": "BaseBdev2", 00:18:06.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.802 "is_configured": false, 00:18:06.802 "data_offset": 0, 00:18:06.802 "data_size": 0 00:18:06.802 }, 00:18:06.802 { 00:18:06.802 "name": "BaseBdev3", 00:18:06.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.802 "is_configured": false, 00:18:06.802 "data_offset": 0, 00:18:06.802 "data_size": 0 00:18:06.802 }, 00:18:06.802 { 00:18:06.802 "name": "BaseBdev4", 00:18:06.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.802 "is_configured": false, 00:18:06.802 "data_offset": 0, 00:18:06.802 "data_size": 0 00:18:06.802 } 00:18:06.802 ] 00:18:06.802 }' 00:18:06.802 02:41:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.802 02:41:31 -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 02:41:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.626 [2024-07-11 02:41:32.521918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.626 BaseBdev2 00:18:07.626 02:41:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:07.626 02:41:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:07.626 02:41:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:07.626 02:41:32 -- common/autotest_common.sh@889 -- # local i 00:18:07.626 02:41:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:07.626 02:41:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:07.626 02:41:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.884 02:41:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:08.142 [ 00:18:08.142 { 00:18:08.142 "name": "BaseBdev2", 00:18:08.142 "aliases": [ 00:18:08.142 "ba2778d8-06be-4239-b7b3-5608db268eb9" 00:18:08.142 ], 00:18:08.142 "product_name": "Malloc disk", 00:18:08.142 "block_size": 512, 00:18:08.142 "num_blocks": 65536, 00:18:08.142 "uuid": "ba2778d8-06be-4239-b7b3-5608db268eb9", 00:18:08.142 "assigned_rate_limits": { 00:18:08.142 "rw_ios_per_sec": 0, 00:18:08.142 "rw_mbytes_per_sec": 0, 00:18:08.142 "r_mbytes_per_sec": 0, 00:18:08.142 "w_mbytes_per_sec": 0 00:18:08.142 }, 00:18:08.142 "claimed": true, 00:18:08.142 "claim_type": "exclusive_write", 00:18:08.142 "zoned": false, 00:18:08.142 "supported_io_types": { 00:18:08.142 "read": true, 00:18:08.142 "write": true, 00:18:08.142 "unmap": true, 00:18:08.142 "write_zeroes": true, 00:18:08.142 "flush": true, 00:18:08.142 "reset": true, 00:18:08.142 "compare": false, 00:18:08.142 "compare_and_write": false, 00:18:08.142 "abort": true, 00:18:08.142 "nvme_admin": false, 00:18:08.142 "nvme_io": false 00:18:08.142 }, 00:18:08.142 "memory_domains": [ 00:18:08.142 { 
00:18:08.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.142 "dma_device_type": 2 00:18:08.142 } 00:18:08.142 ], 00:18:08.142 "driver_specific": {} 00:18:08.142 } 00:18:08.142 ] 00:18:08.142 02:41:33 -- common/autotest_common.sh@895 -- # return 0 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.142 02:41:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.401 02:41:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.401 "name": "Existed_Raid", 00:18:08.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.401 "strip_size_kb": 0, 00:18:08.401 "state": "configuring", 00:18:08.401 "raid_level": "raid1", 00:18:08.401 "superblock": false, 00:18:08.401 "num_base_bdevs": 4, 00:18:08.401 "num_base_bdevs_discovered": 2, 00:18:08.401 "num_base_bdevs_operational": 4, 00:18:08.401 "base_bdevs_list": [ 00:18:08.401 { 00:18:08.401 "name": "BaseBdev1", 00:18:08.401 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:08.401 "is_configured": true, 00:18:08.401 "data_offset": 0, 00:18:08.401 "data_size": 65536 00:18:08.401 }, 00:18:08.401 { 00:18:08.401 "name": "BaseBdev2", 00:18:08.401 "uuid": "ba2778d8-06be-4239-b7b3-5608db268eb9", 00:18:08.401 "is_configured": true, 00:18:08.401 "data_offset": 0, 00:18:08.401 "data_size": 65536 00:18:08.401 }, 00:18:08.401 { 00:18:08.401 "name": "BaseBdev3", 00:18:08.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.401 "is_configured": false, 00:18:08.401 "data_offset": 0, 00:18:08.401 "data_size": 0 00:18:08.401 }, 00:18:08.401 { 00:18:08.401 "name": "BaseBdev4", 00:18:08.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.401 "is_configured": false, 00:18:08.401 "data_offset": 0, 00:18:08.401 "data_size": 0 00:18:08.401 } 00:18:08.401 ] 00:18:08.401 }' 00:18:08.401 02:41:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.401 02:41:33 -- common/autotest_common.sh@10 -- # set +x 00:18:08.968 02:41:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:09.226 [2024-07-11 02:41:34.182252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.226 BaseBdev3 00:18:09.226 02:41:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:09.226 02:41:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:09.226 02:41:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:09.226 02:41:34 -- 
common/autotest_common.sh@889 -- # local i 00:18:09.226 02:41:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:09.226 02:41:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:09.226 02:41:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.483 02:41:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:09.483 [ 00:18:09.483 { 00:18:09.483 "name": "BaseBdev3", 00:18:09.483 "aliases": [ 00:18:09.483 "b4ffe70b-8338-4055-a752-bbd1d7c5ac0c" 00:18:09.483 ], 00:18:09.483 "product_name": "Malloc disk", 00:18:09.483 "block_size": 512, 00:18:09.483 "num_blocks": 65536, 00:18:09.483 "uuid": "b4ffe70b-8338-4055-a752-bbd1d7c5ac0c", 00:18:09.483 "assigned_rate_limits": { 00:18:09.483 "rw_ios_per_sec": 0, 00:18:09.483 "rw_mbytes_per_sec": 0, 00:18:09.483 "r_mbytes_per_sec": 0, 00:18:09.483 "w_mbytes_per_sec": 0 00:18:09.483 }, 00:18:09.483 "claimed": true, 00:18:09.483 "claim_type": "exclusive_write", 00:18:09.483 "zoned": false, 00:18:09.483 "supported_io_types": { 00:18:09.483 "read": true, 00:18:09.483 "write": true, 00:18:09.483 "unmap": true, 00:18:09.483 "write_zeroes": true, 00:18:09.483 "flush": true, 00:18:09.483 "reset": true, 00:18:09.483 "compare": false, 00:18:09.483 "compare_and_write": false, 00:18:09.483 "abort": true, 00:18:09.483 "nvme_admin": false, 00:18:09.483 "nvme_io": false 00:18:09.483 }, 00:18:09.483 "memory_domains": [ 00:18:09.483 { 00:18:09.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.483 "dma_device_type": 2 00:18:09.483 } 00:18:09.483 ], 00:18:09.483 "driver_specific": {} 00:18:09.483 } 00:18:09.483 ] 00:18:09.483 02:41:34 -- common/autotest_common.sh@895 -- # return 0 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.483 02:41:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.741 02:41:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.741 "name": "Existed_Raid", 00:18:09.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.741 "strip_size_kb": 0, 00:18:09.741 "state": "configuring", 00:18:09.741 "raid_level": "raid1", 00:18:09.741 "superblock": false, 00:18:09.741 "num_base_bdevs": 4, 00:18:09.741 "num_base_bdevs_discovered": 3, 00:18:09.741 "num_base_bdevs_operational": 4, 00:18:09.741 "base_bdevs_list": [ 00:18:09.741 { 00:18:09.741 "name": "BaseBdev1", 
00:18:09.741 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:09.741 "is_configured": true, 00:18:09.741 "data_offset": 0, 00:18:09.741 "data_size": 65536 00:18:09.741 }, 00:18:09.741 { 00:18:09.741 "name": "BaseBdev2", 00:18:09.741 "uuid": "ba2778d8-06be-4239-b7b3-5608db268eb9", 00:18:09.741 "is_configured": true, 00:18:09.741 "data_offset": 0, 00:18:09.741 "data_size": 65536 00:18:09.741 }, 00:18:09.741 { 00:18:09.741 "name": "BaseBdev3", 00:18:09.741 "uuid": "b4ffe70b-8338-4055-a752-bbd1d7c5ac0c", 00:18:09.741 "is_configured": true, 00:18:09.741 "data_offset": 0, 00:18:09.741 "data_size": 65536 00:18:09.741 }, 00:18:09.741 { 00:18:09.741 "name": "BaseBdev4", 00:18:09.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.741 "is_configured": false, 00:18:09.741 "data_offset": 0, 00:18:09.741 "data_size": 0 00:18:09.741 } 00:18:09.741 ] 00:18:09.741 }' 00:18:09.741 02:41:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.741 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:18:10.675 02:41:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:10.675 [2024-07-11 02:41:35.673936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:10.675 [2024-07-11 02:41:35.674012] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:18:10.675 [2024-07-11 02:41:35.674024] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:10.675 [2024-07-11 02:41:35.674182] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:10.675 [2024-07-11 02:41:35.674589] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:18:10.675 [2024-07-11 02:41:35.674610] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:18:10.675 [2024-07-11 02:41:35.674883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.675 BaseBdev4 00:18:10.675 02:41:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:10.675 02:41:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:10.675 02:41:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:10.675 02:41:35 -- common/autotest_common.sh@889 -- # local i 00:18:10.675 02:41:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:10.675 02:41:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:10.676 02:41:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.934 02:41:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:11.192 [ 00:18:11.192 { 00:18:11.192 "name": "BaseBdev4", 00:18:11.192 "aliases": [ 00:18:11.192 "c064b374-31f6-41da-9aa8-c70f7e453085" 00:18:11.192 ], 00:18:11.192 "product_name": "Malloc disk", 00:18:11.192 "block_size": 512, 00:18:11.192 "num_blocks": 65536, 00:18:11.192 "uuid": "c064b374-31f6-41da-9aa8-c70f7e453085", 00:18:11.192 "assigned_rate_limits": { 00:18:11.192 "rw_ios_per_sec": 0, 00:18:11.192 "rw_mbytes_per_sec": 0, 00:18:11.192 "r_mbytes_per_sec": 0, 00:18:11.192 "w_mbytes_per_sec": 0 00:18:11.192 }, 00:18:11.192 "claimed": true, 00:18:11.192 "claim_type": "exclusive_write", 00:18:11.192 "zoned": false, 00:18:11.192 "supported_io_types": { 
00:18:11.192 "read": true, 00:18:11.192 "write": true, 00:18:11.192 "unmap": true, 00:18:11.192 "write_zeroes": true, 00:18:11.192 "flush": true, 00:18:11.192 "reset": true, 00:18:11.192 "compare": false, 00:18:11.192 "compare_and_write": false, 00:18:11.192 "abort": true, 00:18:11.192 "nvme_admin": false, 00:18:11.192 "nvme_io": false 00:18:11.192 }, 00:18:11.192 "memory_domains": [ 00:18:11.192 { 00:18:11.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.192 "dma_device_type": 2 00:18:11.192 } 00:18:11.192 ], 00:18:11.192 "driver_specific": {} 00:18:11.192 } 00:18:11.192 ] 00:18:11.192 02:41:36 -- common/autotest_common.sh@895 -- # return 0 00:18:11.192 02:41:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:11.192 02:41:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.193 02:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.451 02:41:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.451 "name": "Existed_Raid", 00:18:11.451 "uuid": "0f88bbae-0c3e-4d29-8565-1c2b1f5e97a9", 00:18:11.451 "strip_size_kb": 0, 00:18:11.451 "state": "online", 00:18:11.451 "raid_level": "raid1", 00:18:11.451 "superblock": false, 00:18:11.451 "num_base_bdevs": 4, 00:18:11.451 "num_base_bdevs_discovered": 4, 00:18:11.451 "num_base_bdevs_operational": 4, 00:18:11.451 "base_bdevs_list": [ 00:18:11.451 { 00:18:11.451 "name": "BaseBdev1", 00:18:11.451 "uuid": "20d32c69-3441-4284-8de6-5e2d3b905a1d", 00:18:11.451 "is_configured": true, 00:18:11.451 "data_offset": 0, 00:18:11.451 "data_size": 65536 00:18:11.451 }, 00:18:11.451 { 00:18:11.451 "name": "BaseBdev2", 00:18:11.451 "uuid": "ba2778d8-06be-4239-b7b3-5608db268eb9", 00:18:11.451 "is_configured": true, 00:18:11.451 "data_offset": 0, 00:18:11.451 "data_size": 65536 00:18:11.451 }, 00:18:11.451 { 00:18:11.451 "name": "BaseBdev3", 00:18:11.451 "uuid": "b4ffe70b-8338-4055-a752-bbd1d7c5ac0c", 00:18:11.451 "is_configured": true, 00:18:11.451 "data_offset": 0, 00:18:11.451 "data_size": 65536 00:18:11.451 }, 00:18:11.451 { 00:18:11.451 "name": "BaseBdev4", 00:18:11.451 "uuid": "c064b374-31f6-41da-9aa8-c70f7e453085", 00:18:11.451 "is_configured": true, 00:18:11.451 "data_offset": 0, 00:18:11.451 "data_size": 65536 00:18:11.451 } 00:18:11.451 ] 00:18:11.451 }' 00:18:11.451 02:41:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.451 02:41:36 -- common/autotest_common.sh@10 -- # set +x 00:18:12.019 02:41:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:12.279 [2024-07-11 02:41:37.174463] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.279 02:41:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.538 02:41:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.538 "name": "Existed_Raid", 00:18:12.538 "uuid": "0f88bbae-0c3e-4d29-8565-1c2b1f5e97a9", 00:18:12.538 "strip_size_kb": 0, 00:18:12.538 "state": "online", 00:18:12.538 "raid_level": "raid1", 00:18:12.538 "superblock": false, 00:18:12.538 "num_base_bdevs": 4, 00:18:12.538 "num_base_bdevs_discovered": 3, 00:18:12.538 "num_base_bdevs_operational": 3, 00:18:12.538 "base_bdevs_list": [ 00:18:12.538 { 00:18:12.538 "name": null, 00:18:12.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.538 "is_configured": false, 00:18:12.538 "data_offset": 0, 00:18:12.538 "data_size": 65536 00:18:12.538 }, 00:18:12.538 { 00:18:12.538 "name": "BaseBdev2", 00:18:12.538 "uuid": "ba2778d8-06be-4239-b7b3-5608db268eb9", 00:18:12.538 "is_configured": true, 00:18:12.538 "data_offset": 0, 00:18:12.538 "data_size": 65536 00:18:12.538 }, 00:18:12.538 { 00:18:12.538 "name": "BaseBdev3", 00:18:12.538 "uuid": "b4ffe70b-8338-4055-a752-bbd1d7c5ac0c", 00:18:12.538 "is_configured": true, 00:18:12.538 "data_offset": 0, 00:18:12.538 "data_size": 65536 00:18:12.538 }, 00:18:12.538 { 00:18:12.538 "name": "BaseBdev4", 00:18:12.538 "uuid": "c064b374-31f6-41da-9aa8-c70f7e453085", 00:18:12.538 "is_configured": true, 00:18:12.538 "data_offset": 0, 00:18:12.538 "data_size": 65536 00:18:12.538 } 00:18:12.538 ] 00:18:12.538 }' 00:18:12.538 02:41:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.538 02:41:37 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 02:41:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:13.106 02:41:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:13.106 02:41:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:13.106 02:41:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.365 02:41:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:13.365 02:41:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.365 02:41:38 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:13.623 [2024-07-11 02:41:38.559729] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:13.623 02:41:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:13.623 02:41:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:13.623 02:41:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.623 02:41:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:13.882 02:41:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:13.882 02:41:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.882 02:41:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:14.141 [2024-07-11 02:41:39.014324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.141 02:41:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:14.400 [2024-07-11 02:41:39.452216] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:14.400 [2024-07-11 02:41:39.452252] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.400 [2024-07-11 02:41:39.452342] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.400 [2024-07-11 02:41:39.462307] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.400 [2024-07-11 02:41:39.462343] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:18:14.400 02:41:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:14.400 02:41:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:14.400 02:41:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.400 02:41:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.668 02:41:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:14.668 02:41:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:14.668 02:41:39 -- bdev/bdev_raid.sh@287 -- # killprocess 133492 00:18:14.668 02:41:39 -- common/autotest_common.sh@926 -- # '[' -z 133492 ']' 00:18:14.668 02:41:39 -- common/autotest_common.sh@930 -- # kill -0 133492 00:18:14.668 02:41:39 -- common/autotest_common.sh@931 -- # uname 00:18:14.668 02:41:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:14.668 02:41:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133492 00:18:14.668 killing process with pid 133492 00:18:14.668 02:41:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:14.668 02:41:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:14.668 02:41:39 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 133492' 00:18:14.668 02:41:39 -- common/autotest_common.sh@945 -- # kill 133492 00:18:14.668 02:41:39 -- common/autotest_common.sh@950 -- # wait 133492 00:18:14.668 [2024-07-11 02:41:39.724086] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.668 [2024-07-11 02:41:39.724159] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.940 ************************************ 00:18:14.940 END TEST raid_state_function_test 00:18:14.940 ************************************ 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:14.940 00:18:14.940 real 0m13.121s 00:18:14.940 user 0m24.599s 00:18:14.940 sys 0m1.427s 00:18:14.940 02:41:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.940 02:41:39 -- common/autotest_common.sh@10 -- # set +x 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:14.940 02:41:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:14.940 02:41:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.940 02:41:39 -- common/autotest_common.sh@10 -- # set +x 00:18:14.940 ************************************ 00:18:14.940 START TEST raid_state_function_test_sb 00:18:14.940 ************************************ 00:18:14.940 02:41:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:14.940 
02:41:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=133936 00:18:14.940 Process raid pid: 133936 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133936' 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133936 /var/tmp/spdk-raid.sock 00:18:14.940 02:41:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:14.940 02:41:39 -- common/autotest_common.sh@819 -- # '[' -z 133936 ']' 00:18:14.941 02:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:14.941 02:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:14.941 02:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:14.941 02:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:14.941 02:41:39 -- common/autotest_common.sh@10 -- # set +x 00:18:15.199 [2024-07-11 02:41:40.038845] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:15.199 [2024-07-11 02:41:40.039042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.199 [2024-07-11 02:41:40.177627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.199 [2024-07-11 02:41:40.236326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.200 [2024-07-11 02:41:40.287721] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.135 02:41:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:16.135 02:41:40 -- common/autotest_common.sh@852 -- # return 0 00:18:16.135 02:41:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:16.135 [2024-07-11 02:41:41.152459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.135 [2024-07-11 02:41:41.152542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.135 [2024-07-11 02:41:41.152554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.135 [2024-07-11 02:41:41.152571] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.135 [2024-07-11 02:41:41.152578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.135 [2024-07-11 02:41:41.152615] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.135 [2024-07-11 02:41:41.152623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:16.135 [2024-07-11 02:41:41.152645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.135 02:41:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.393 02:41:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.394 "name": "Existed_Raid", 00:18:16.394 "uuid": "07da2af5-03b4-4832-af89-b043ec77982e", 00:18:16.394 "strip_size_kb": 0, 00:18:16.394 "state": "configuring", 00:18:16.394 "raid_level": "raid1", 00:18:16.394 "superblock": true, 00:18:16.394 "num_base_bdevs": 4, 00:18:16.394 "num_base_bdevs_discovered": 0, 00:18:16.394 "num_base_bdevs_operational": 4, 00:18:16.394 "base_bdevs_list": [ 00:18:16.394 { 00:18:16.394 "name": "BaseBdev1", 00:18:16.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.394 "is_configured": false, 00:18:16.394 "data_offset": 0, 00:18:16.394 "data_size": 0 00:18:16.394 }, 00:18:16.394 { 00:18:16.394 "name": "BaseBdev2", 00:18:16.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.394 "is_configured": false, 00:18:16.394 "data_offset": 0, 00:18:16.394 "data_size": 0 00:18:16.394 }, 00:18:16.394 { 00:18:16.394 "name": "BaseBdev3", 00:18:16.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.394 "is_configured": false, 00:18:16.394 "data_offset": 0, 00:18:16.394 "data_size": 0 00:18:16.394 }, 00:18:16.394 { 00:18:16.394 "name": "BaseBdev4", 00:18:16.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.394 "is_configured": false, 00:18:16.394 "data_offset": 0, 00:18:16.394 "data_size": 0 00:18:16.394 } 00:18:16.394 ] 00:18:16.394 }' 00:18:16.394 02:41:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.394 02:41:41 -- common/autotest_common.sh@10 -- # set +x 00:18:16.960 02:41:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:17.219 [2024-07-11 02:41:42.288569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.219 [2024-07-11 02:41:42.288614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:17.219 02:41:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:17.478 [2024-07-11 02:41:42.552632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.478 [2024-07-11 02:41:42.552684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.478 [2024-07-11 02:41:42.552709] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.478 [2024-07-11 02:41:42.552732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.478 [2024-07-11 02:41:42.552739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.478 [2024-07-11 
02:41:42.552754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.478 [2024-07-11 02:41:42.552760] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:17.478 [2024-07-11 02:41:42.552782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:17.478 02:41:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:18.045 [2024-07-11 02:41:42.834805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.045 BaseBdev1 00:18:18.045 02:41:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:18.045 02:41:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:18.045 02:41:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:18.045 02:41:42 -- common/autotest_common.sh@889 -- # local i 00:18:18.045 02:41:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:18.045 02:41:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:18.045 02:41:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.045 02:41:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:18.304 [ 00:18:18.304 { 00:18:18.304 "name": "BaseBdev1", 00:18:18.304 "aliases": [ 00:18:18.304 "6c4efe48-8934-43ed-811e-fd8e82259bb3" 00:18:18.304 ], 00:18:18.304 "product_name": "Malloc disk", 00:18:18.304 "block_size": 512, 00:18:18.304 "num_blocks": 65536, 00:18:18.304 "uuid": "6c4efe48-8934-43ed-811e-fd8e82259bb3", 00:18:18.304 "assigned_rate_limits": { 00:18:18.304 "rw_ios_per_sec": 0, 00:18:18.304 "rw_mbytes_per_sec": 0, 00:18:18.304 "r_mbytes_per_sec": 0, 00:18:18.304 "w_mbytes_per_sec": 0 00:18:18.304 }, 00:18:18.304 "claimed": true, 00:18:18.304 "claim_type": "exclusive_write", 00:18:18.304 "zoned": false, 00:18:18.304 "supported_io_types": { 00:18:18.304 "read": true, 00:18:18.304 "write": true, 00:18:18.304 "unmap": true, 00:18:18.304 "write_zeroes": true, 00:18:18.304 "flush": true, 00:18:18.304 "reset": true, 00:18:18.304 "compare": false, 00:18:18.304 "compare_and_write": false, 00:18:18.304 "abort": true, 00:18:18.304 "nvme_admin": false, 00:18:18.304 "nvme_io": false 00:18:18.304 }, 00:18:18.304 "memory_domains": [ 00:18:18.304 { 00:18:18.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.304 "dma_device_type": 2 00:18:18.304 } 00:18:18.304 ], 00:18:18.304 "driver_specific": {} 00:18:18.304 } 00:18:18.304 ] 00:18:18.304 02:41:43 -- common/autotest_common.sh@895 -- # return 0 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.304 02:41:43 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.304 02:41:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.562 02:41:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.562 "name": "Existed_Raid", 00:18:18.562 "uuid": "8c2d6168-c0fc-4cb7-8dce-3578217c3753", 00:18:18.563 "strip_size_kb": 0, 00:18:18.563 "state": "configuring", 00:18:18.563 "raid_level": "raid1", 00:18:18.563 "superblock": true, 00:18:18.563 "num_base_bdevs": 4, 00:18:18.563 "num_base_bdevs_discovered": 1, 00:18:18.563 "num_base_bdevs_operational": 4, 00:18:18.563 "base_bdevs_list": [ 00:18:18.563 { 00:18:18.563 "name": "BaseBdev1", 00:18:18.563 "uuid": "6c4efe48-8934-43ed-811e-fd8e82259bb3", 00:18:18.563 "is_configured": true, 00:18:18.563 "data_offset": 2048, 00:18:18.563 "data_size": 63488 00:18:18.563 }, 00:18:18.563 { 00:18:18.563 "name": "BaseBdev2", 00:18:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.563 "is_configured": false, 00:18:18.563 "data_offset": 0, 00:18:18.563 "data_size": 0 00:18:18.563 }, 00:18:18.563 { 00:18:18.563 "name": "BaseBdev3", 00:18:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.563 "is_configured": false, 00:18:18.563 "data_offset": 0, 00:18:18.563 "data_size": 0 00:18:18.563 }, 00:18:18.563 { 00:18:18.563 "name": "BaseBdev4", 00:18:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.563 "is_configured": false, 00:18:18.563 "data_offset": 0, 00:18:18.563 "data_size": 0 00:18:18.563 } 00:18:18.563 ] 00:18:18.563 }' 00:18:18.563 02:41:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.563 02:41:43 -- common/autotest_common.sh@10 -- # set +x 00:18:19.128 02:41:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:19.386 [2024-07-11 02:41:44.291095] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.386 [2024-07-11 02:41:44.291185] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:18:19.386 02:41:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:19.386 02:41:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:19.644 02:41:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.644 BaseBdev1 00:18:19.644 02:41:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:19.644 02:41:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:19.644 02:41:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:19.644 02:41:44 -- common/autotest_common.sh@889 -- # local i 00:18:19.644 02:41:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:19.644 02:41:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:19.644 02:41:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.903 02:41:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.162 [ 00:18:20.162 { 00:18:20.162 "name": "BaseBdev1", 00:18:20.162 "aliases": [ 00:18:20.162 
"40e33fd2-a206-48eb-8fd0-6a97fb83ee20" 00:18:20.162 ], 00:18:20.162 "product_name": "Malloc disk", 00:18:20.162 "block_size": 512, 00:18:20.162 "num_blocks": 65536, 00:18:20.162 "uuid": "40e33fd2-a206-48eb-8fd0-6a97fb83ee20", 00:18:20.162 "assigned_rate_limits": { 00:18:20.162 "rw_ios_per_sec": 0, 00:18:20.162 "rw_mbytes_per_sec": 0, 00:18:20.162 "r_mbytes_per_sec": 0, 00:18:20.162 "w_mbytes_per_sec": 0 00:18:20.162 }, 00:18:20.162 "claimed": false, 00:18:20.162 "zoned": false, 00:18:20.162 "supported_io_types": { 00:18:20.162 "read": true, 00:18:20.162 "write": true, 00:18:20.162 "unmap": true, 00:18:20.162 "write_zeroes": true, 00:18:20.162 "flush": true, 00:18:20.162 "reset": true, 00:18:20.162 "compare": false, 00:18:20.162 "compare_and_write": false, 00:18:20.162 "abort": true, 00:18:20.162 "nvme_admin": false, 00:18:20.162 "nvme_io": false 00:18:20.162 }, 00:18:20.162 "memory_domains": [ 00:18:20.162 { 00:18:20.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.162 "dma_device_type": 2 00:18:20.162 } 00:18:20.162 ], 00:18:20.162 "driver_specific": {} 00:18:20.162 } 00:18:20.162 ] 00:18:20.162 02:41:45 -- common/autotest_common.sh@895 -- # return 0 00:18:20.162 02:41:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.420 [2024-07-11 02:41:45.267437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.420 [2024-07-11 02:41:45.269101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.420 [2024-07-11 02:41:45.269167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.420 [2024-07-11 02:41:45.269193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.421 [2024-07-11 02:41:45.269213] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.421 [2024-07-11 02:41:45.269220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.421 [2024-07-11 02:41:45.269233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:20.421 "name": "Existed_Raid", 00:18:20.421 "uuid": "0d722e3e-195e-4f95-b209-e48cf93e994f", 00:18:20.421 "strip_size_kb": 0, 00:18:20.421 "state": "configuring", 00:18:20.421 "raid_level": "raid1", 00:18:20.421 "superblock": true, 00:18:20.421 "num_base_bdevs": 4, 00:18:20.421 "num_base_bdevs_discovered": 1, 00:18:20.421 "num_base_bdevs_operational": 4, 00:18:20.421 "base_bdevs_list": [ 00:18:20.421 { 00:18:20.421 "name": "BaseBdev1", 00:18:20.421 "uuid": "40e33fd2-a206-48eb-8fd0-6a97fb83ee20", 00:18:20.421 "is_configured": true, 00:18:20.421 "data_offset": 2048, 00:18:20.421 "data_size": 63488 00:18:20.421 }, 00:18:20.421 { 00:18:20.421 "name": "BaseBdev2", 00:18:20.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.421 "is_configured": false, 00:18:20.421 "data_offset": 0, 00:18:20.421 "data_size": 0 00:18:20.421 }, 00:18:20.421 { 00:18:20.421 "name": "BaseBdev3", 00:18:20.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.421 "is_configured": false, 00:18:20.421 "data_offset": 0, 00:18:20.421 "data_size": 0 00:18:20.421 }, 00:18:20.421 { 00:18:20.421 "name": "BaseBdev4", 00:18:20.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.421 "is_configured": false, 00:18:20.421 "data_offset": 0, 00:18:20.421 "data_size": 0 00:18:20.421 } 00:18:20.421 ] 00:18:20.421 }' 00:18:20.421 02:41:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.421 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:18:21.356 02:41:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.356 [2024-07-11 02:41:46.353777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.356 BaseBdev2 00:18:21.356 02:41:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:21.356 02:41:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:21.356 02:41:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:21.356 02:41:46 -- common/autotest_common.sh@889 -- # local i 00:18:21.356 02:41:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:21.356 02:41:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:21.356 02:41:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.614 02:41:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.872 [ 00:18:21.872 { 00:18:21.872 "name": "BaseBdev2", 00:18:21.872 "aliases": [ 00:18:21.872 "774e590f-7b24-453e-bc3f-962f601c1535" 00:18:21.872 ], 00:18:21.872 "product_name": "Malloc disk", 00:18:21.872 "block_size": 512, 00:18:21.872 "num_blocks": 65536, 00:18:21.872 "uuid": "774e590f-7b24-453e-bc3f-962f601c1535", 00:18:21.872 "assigned_rate_limits": { 00:18:21.872 "rw_ios_per_sec": 0, 00:18:21.872 "rw_mbytes_per_sec": 0, 00:18:21.872 "r_mbytes_per_sec": 0, 00:18:21.872 "w_mbytes_per_sec": 0 00:18:21.872 }, 00:18:21.872 "claimed": true, 00:18:21.872 "claim_type": "exclusive_write", 00:18:21.872 "zoned": false, 00:18:21.872 "supported_io_types": { 00:18:21.872 "read": true, 00:18:21.872 "write": true, 00:18:21.872 "unmap": true, 00:18:21.872 "write_zeroes": true, 00:18:21.872 "flush": true, 00:18:21.872 "reset": true, 00:18:21.872 "compare": false, 00:18:21.872 "compare_and_write": false, 00:18:21.872 "abort": true, 00:18:21.872 "nvme_admin": false, 00:18:21.872 
"nvme_io": false 00:18:21.872 }, 00:18:21.872 "memory_domains": [ 00:18:21.872 { 00:18:21.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.872 "dma_device_type": 2 00:18:21.872 } 00:18:21.872 ], 00:18:21.872 "driver_specific": {} 00:18:21.872 } 00:18:21.872 ] 00:18:21.872 02:41:46 -- common/autotest_common.sh@895 -- # return 0 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.872 02:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.130 02:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.130 "name": "Existed_Raid", 00:18:22.130 "uuid": "0d722e3e-195e-4f95-b209-e48cf93e994f", 00:18:22.130 "strip_size_kb": 0, 00:18:22.130 "state": "configuring", 00:18:22.130 "raid_level": "raid1", 00:18:22.130 "superblock": true, 00:18:22.130 "num_base_bdevs": 4, 00:18:22.130 "num_base_bdevs_discovered": 2, 00:18:22.130 "num_base_bdevs_operational": 4, 00:18:22.130 "base_bdevs_list": [ 00:18:22.130 { 00:18:22.130 "name": "BaseBdev1", 00:18:22.130 "uuid": "40e33fd2-a206-48eb-8fd0-6a97fb83ee20", 00:18:22.130 "is_configured": true, 00:18:22.130 "data_offset": 2048, 00:18:22.130 "data_size": 63488 00:18:22.130 }, 00:18:22.130 { 00:18:22.130 "name": "BaseBdev2", 00:18:22.130 "uuid": "774e590f-7b24-453e-bc3f-962f601c1535", 00:18:22.130 "is_configured": true, 00:18:22.131 "data_offset": 2048, 00:18:22.131 "data_size": 63488 00:18:22.131 }, 00:18:22.131 { 00:18:22.131 "name": "BaseBdev3", 00:18:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.131 "is_configured": false, 00:18:22.131 "data_offset": 0, 00:18:22.131 "data_size": 0 00:18:22.131 }, 00:18:22.131 { 00:18:22.131 "name": "BaseBdev4", 00:18:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.131 "is_configured": false, 00:18:22.131 "data_offset": 0, 00:18:22.131 "data_size": 0 00:18:22.131 } 00:18:22.131 ] 00:18:22.131 }' 00:18:22.131 02:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.131 02:41:46 -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 02:41:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.955 [2024-07-11 02:41:47.946654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.955 BaseBdev3 00:18:22.955 02:41:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:22.955 02:41:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:22.955 02:41:47 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:22.955 02:41:47 -- common/autotest_common.sh@889 -- # local i 00:18:22.955 02:41:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:22.955 02:41:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:22.955 02:41:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.214 02:41:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:23.471 [ 00:18:23.471 { 00:18:23.471 "name": "BaseBdev3", 00:18:23.471 "aliases": [ 00:18:23.471 "66a34969-94ff-4679-8058-3a1b63800c40" 00:18:23.471 ], 00:18:23.471 "product_name": "Malloc disk", 00:18:23.471 "block_size": 512, 00:18:23.471 "num_blocks": 65536, 00:18:23.471 "uuid": "66a34969-94ff-4679-8058-3a1b63800c40", 00:18:23.471 "assigned_rate_limits": { 00:18:23.471 "rw_ios_per_sec": 0, 00:18:23.471 "rw_mbytes_per_sec": 0, 00:18:23.471 "r_mbytes_per_sec": 0, 00:18:23.471 "w_mbytes_per_sec": 0 00:18:23.471 }, 00:18:23.471 "claimed": true, 00:18:23.471 "claim_type": "exclusive_write", 00:18:23.471 "zoned": false, 00:18:23.471 "supported_io_types": { 00:18:23.471 "read": true, 00:18:23.471 "write": true, 00:18:23.471 "unmap": true, 00:18:23.471 "write_zeroes": true, 00:18:23.471 "flush": true, 00:18:23.471 "reset": true, 00:18:23.471 "compare": false, 00:18:23.471 "compare_and_write": false, 00:18:23.471 "abort": true, 00:18:23.471 "nvme_admin": false, 00:18:23.471 "nvme_io": false 00:18:23.471 }, 00:18:23.471 "memory_domains": [ 00:18:23.471 { 00:18:23.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.471 "dma_device_type": 2 00:18:23.471 } 00:18:23.471 ], 00:18:23.471 "driver_specific": {} 00:18:23.471 } 00:18:23.471 ] 00:18:23.471 02:41:48 -- common/autotest_common.sh@895 -- # return 0 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.471 02:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.472 02:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.472 02:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.729 02:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.729 "name": "Existed_Raid", 00:18:23.729 "uuid": "0d722e3e-195e-4f95-b209-e48cf93e994f", 00:18:23.729 "strip_size_kb": 0, 00:18:23.729 "state": "configuring", 00:18:23.729 "raid_level": "raid1", 00:18:23.729 "superblock": true, 00:18:23.729 "num_base_bdevs": 4, 00:18:23.729 "num_base_bdevs_discovered": 3, 00:18:23.729 "num_base_bdevs_operational": 4, 00:18:23.729 
"base_bdevs_list": [ 00:18:23.729 { 00:18:23.729 "name": "BaseBdev1", 00:18:23.729 "uuid": "40e33fd2-a206-48eb-8fd0-6a97fb83ee20", 00:18:23.729 "is_configured": true, 00:18:23.729 "data_offset": 2048, 00:18:23.729 "data_size": 63488 00:18:23.729 }, 00:18:23.729 { 00:18:23.729 "name": "BaseBdev2", 00:18:23.729 "uuid": "774e590f-7b24-453e-bc3f-962f601c1535", 00:18:23.729 "is_configured": true, 00:18:23.729 "data_offset": 2048, 00:18:23.729 "data_size": 63488 00:18:23.729 }, 00:18:23.729 { 00:18:23.729 "name": "BaseBdev3", 00:18:23.729 "uuid": "66a34969-94ff-4679-8058-3a1b63800c40", 00:18:23.729 "is_configured": true, 00:18:23.729 "data_offset": 2048, 00:18:23.729 "data_size": 63488 00:18:23.729 }, 00:18:23.729 { 00:18:23.729 "name": "BaseBdev4", 00:18:23.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.729 "is_configured": false, 00:18:23.729 "data_offset": 0, 00:18:23.729 "data_size": 0 00:18:23.729 } 00:18:23.729 ] 00:18:23.729 }' 00:18:23.729 02:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.729 02:41:48 -- common/autotest_common.sh@10 -- # set +x 00:18:24.295 02:41:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:24.553 [2024-07-11 02:41:49.535356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:24.553 [2024-07-11 02:41:49.535645] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:18:24.553 [2024-07-11 02:41:49.535665] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:24.553 [2024-07-11 02:41:49.535785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:24.553 BaseBdev4 00:18:24.553 [2024-07-11 02:41:49.536172] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:18:24.553 [2024-07-11 02:41:49.536191] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:18:24.553 [2024-07-11 02:41:49.536345] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.553 02:41:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:24.553 02:41:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:24.553 02:41:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:24.553 02:41:49 -- common/autotest_common.sh@889 -- # local i 00:18:24.553 02:41:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:24.553 02:41:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:24.553 02:41:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.811 02:41:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.070 [ 00:18:25.070 { 00:18:25.070 "name": "BaseBdev4", 00:18:25.070 "aliases": [ 00:18:25.070 "1a89c704-6eac-4a39-8b3b-01b31311216e" 00:18:25.070 ], 00:18:25.070 "product_name": "Malloc disk", 00:18:25.070 "block_size": 512, 00:18:25.070 "num_blocks": 65536, 00:18:25.070 "uuid": "1a89c704-6eac-4a39-8b3b-01b31311216e", 00:18:25.070 "assigned_rate_limits": { 00:18:25.070 "rw_ios_per_sec": 0, 00:18:25.070 "rw_mbytes_per_sec": 0, 00:18:25.070 "r_mbytes_per_sec": 0, 00:18:25.070 "w_mbytes_per_sec": 0 00:18:25.070 }, 00:18:25.070 "claimed": true, 00:18:25.070 "claim_type": 
"exclusive_write", 00:18:25.070 "zoned": false, 00:18:25.070 "supported_io_types": { 00:18:25.070 "read": true, 00:18:25.070 "write": true, 00:18:25.070 "unmap": true, 00:18:25.070 "write_zeroes": true, 00:18:25.070 "flush": true, 00:18:25.070 "reset": true, 00:18:25.070 "compare": false, 00:18:25.070 "compare_and_write": false, 00:18:25.070 "abort": true, 00:18:25.070 "nvme_admin": false, 00:18:25.070 "nvme_io": false 00:18:25.070 }, 00:18:25.070 "memory_domains": [ 00:18:25.070 { 00:18:25.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.070 "dma_device_type": 2 00:18:25.070 } 00:18:25.070 ], 00:18:25.070 "driver_specific": {} 00:18:25.070 } 00:18:25.070 ] 00:18:25.070 02:41:49 -- common/autotest_common.sh@895 -- # return 0 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.070 02:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.328 02:41:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.328 "name": "Existed_Raid", 00:18:25.328 "uuid": "0d722e3e-195e-4f95-b209-e48cf93e994f", 00:18:25.328 "strip_size_kb": 0, 00:18:25.328 "state": "online", 00:18:25.328 "raid_level": "raid1", 00:18:25.328 "superblock": true, 00:18:25.328 "num_base_bdevs": 4, 00:18:25.328 "num_base_bdevs_discovered": 4, 00:18:25.328 "num_base_bdevs_operational": 4, 00:18:25.328 "base_bdevs_list": [ 00:18:25.328 { 00:18:25.328 "name": "BaseBdev1", 00:18:25.328 "uuid": "40e33fd2-a206-48eb-8fd0-6a97fb83ee20", 00:18:25.328 "is_configured": true, 00:18:25.328 "data_offset": 2048, 00:18:25.328 "data_size": 63488 00:18:25.328 }, 00:18:25.328 { 00:18:25.328 "name": "BaseBdev2", 00:18:25.328 "uuid": "774e590f-7b24-453e-bc3f-962f601c1535", 00:18:25.328 "is_configured": true, 00:18:25.328 "data_offset": 2048, 00:18:25.328 "data_size": 63488 00:18:25.328 }, 00:18:25.328 { 00:18:25.328 "name": "BaseBdev3", 00:18:25.328 "uuid": "66a34969-94ff-4679-8058-3a1b63800c40", 00:18:25.328 "is_configured": true, 00:18:25.328 "data_offset": 2048, 00:18:25.328 "data_size": 63488 00:18:25.328 }, 00:18:25.328 { 00:18:25.328 "name": "BaseBdev4", 00:18:25.328 "uuid": "1a89c704-6eac-4a39-8b3b-01b31311216e", 00:18:25.328 "is_configured": true, 00:18:25.328 "data_offset": 2048, 00:18:25.328 "data_size": 63488 00:18:25.328 } 00:18:25.328 ] 00:18:25.328 }' 00:18:25.328 02:41:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.328 02:41:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.894 02:41:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:26.162 [2024-07-11 02:41:51.051843] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.162 02:41:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.445 02:41:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.445 "name": "Existed_Raid", 00:18:26.445 "uuid": "0d722e3e-195e-4f95-b209-e48cf93e994f", 00:18:26.445 "strip_size_kb": 0, 00:18:26.445 "state": "online", 00:18:26.445 "raid_level": "raid1", 00:18:26.445 "superblock": true, 00:18:26.445 "num_base_bdevs": 4, 00:18:26.445 "num_base_bdevs_discovered": 3, 00:18:26.445 "num_base_bdevs_operational": 3, 00:18:26.445 "base_bdevs_list": [ 00:18:26.445 { 00:18:26.445 "name": null, 00:18:26.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.445 "is_configured": false, 00:18:26.445 "data_offset": 2048, 00:18:26.445 "data_size": 63488 00:18:26.445 }, 00:18:26.445 { 00:18:26.445 "name": "BaseBdev2", 00:18:26.445 "uuid": "774e590f-7b24-453e-bc3f-962f601c1535", 00:18:26.445 "is_configured": true, 00:18:26.445 "data_offset": 2048, 00:18:26.445 "data_size": 63488 00:18:26.445 }, 00:18:26.445 { 00:18:26.445 "name": "BaseBdev3", 00:18:26.445 "uuid": "66a34969-94ff-4679-8058-3a1b63800c40", 00:18:26.445 "is_configured": true, 00:18:26.445 "data_offset": 2048, 00:18:26.445 "data_size": 63488 00:18:26.445 }, 00:18:26.445 { 00:18:26.445 "name": "BaseBdev4", 00:18:26.445 "uuid": "1a89c704-6eac-4a39-8b3b-01b31311216e", 00:18:26.446 "is_configured": true, 00:18:26.446 "data_offset": 2048, 00:18:26.446 "data_size": 63488 00:18:26.446 } 00:18:26.446 ] 00:18:26.446 }' 00:18:26.446 02:41:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.446 02:41:51 -- common/autotest_common.sh@10 -- # set +x 00:18:27.011 02:41:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:27.011 02:41:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.011 02:41:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.011 02:41:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.268 02:41:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.268 02:41:52 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.268 02:41:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:27.526 [2024-07-11 02:41:52.396968] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.526 02:41:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:27.526 02:41:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.526 02:41:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.526 02:41:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.784 02:41:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.784 02:41:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.784 02:41:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:28.042 [2024-07-11 02:41:52.928362] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:28.042 02:41:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:28.042 02:41:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:28.042 02:41:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.042 02:41:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:28.300 02:41:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:28.300 02:41:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.300 02:41:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:28.300 [2024-07-11 02:41:53.377324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:28.300 [2024-07-11 02:41:53.377378] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.300 [2024-07-11 02:41:53.377460] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.300 [2024-07-11 02:41:53.387741] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.301 [2024-07-11 02:41:53.387779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:28.559 02:41:53 -- bdev/bdev_raid.sh@287 -- # killprocess 133936 00:18:28.559 02:41:53 -- common/autotest_common.sh@926 -- # '[' -z 133936 ']' 00:18:28.559 02:41:53 -- common/autotest_common.sh@930 -- # kill -0 133936 00:18:28.559 02:41:53 -- common/autotest_common.sh@931 -- # uname 00:18:28.559 02:41:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:28.559 02:41:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133936 00:18:28.559 killing process with pid 133936 00:18:28.559 02:41:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
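The teardown traced above is the suite's standard hot-removal check: delete one base bdev over the RPC socket, re-query the raid bdev, and confirm the state matches what has_redundancy predicts (raid1 stays online while members remain, and drops to offline once the last one is gone). A minimal sketch of that loop, assuming the same socket path, script location, and bdev names as this run:

    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        # hot-remove one member of the raid1 set
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_delete "$bdev"
        # re-read the raid bdev and print its current state
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    done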
00:18:28.559 02:41:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:28.559 02:41:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133936' 00:18:28.559 02:41:53 -- common/autotest_common.sh@945 -- # kill 133936 00:18:28.559 02:41:53 -- common/autotest_common.sh@950 -- # wait 133936 00:18:28.559 [2024-07-11 02:41:53.617818] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.559 [2024-07-11 02:41:53.617922] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.817 ************************************ 00:18:28.817 END TEST raid_state_function_test_sb 00:18:28.817 ************************************ 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:28.817 00:18:28.817 real 0m13.835s 00:18:28.817 user 0m25.937s 00:18:28.817 sys 0m1.499s 00:18:28.817 02:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.817 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:28.817 02:41:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:28.817 02:41:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:28.817 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.817 ************************************ 00:18:28.817 START TEST raid_superblock_test 00:18:28.817 ************************************ 00:18:28.817 02:41:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=134414 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:28.817 02:41:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134414 /var/tmp/spdk-raid.sock 00:18:28.817 02:41:53 -- common/autotest_common.sh@819 -- # '[' -z 134414 ']' 00:18:28.817 02:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.817 02:41:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:28.817 02:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
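raid_superblock_test starts its own bdev_svc application on a dedicated RPC socket with raid-bdev debug logging enabled, then blocks until the socket is listening. Reconstructed from the trace (the backgrounding and pid capture via $! are assumptions; waitforlisten is the helper from common/autotest_common.sh, and the pid was 134414 in this run):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!                                  # 134414 here
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock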
00:18:28.817 02:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.817 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:18:29.075 [2024-07-11 02:41:53.931175] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:29.075 [2024-07-11 02:41:53.931451] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134414 ] 00:18:29.075 [2024-07-11 02:41:54.072850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.075 [2024-07-11 02:41:54.139060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.332 [2024-07-11 02:41:54.195436] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.895 02:41:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:29.895 02:41:54 -- common/autotest_common.sh@852 -- # return 0 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.895 02:41:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:30.152 malloc1 00:18:30.152 02:41:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.410 [2024-07-11 02:41:55.331755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.410 [2024-07-11 02:41:55.331857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.410 [2024-07-11 02:41:55.331893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:18:30.410 [2024-07-11 02:41:55.331933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.410 [2024-07-11 02:41:55.334098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.410 [2024-07-11 02:41:55.334158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.410 pt1 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.410 02:41:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:30.667 malloc2 00:18:30.667 02:41:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.924 [2024-07-11 02:41:55.789674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.924 [2024-07-11 02:41:55.789787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.924 [2024-07-11 02:41:55.789847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:30.924 [2024-07-11 02:41:55.789904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.924 [2024-07-11 02:41:55.793120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.924 [2024-07-11 02:41:55.793223] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.924 pt2 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.924 02:41:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:31.180 malloc3 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.180 [2024-07-11 02:41:56.204023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.180 [2024-07-11 02:41:56.204121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.180 [2024-07-11 02:41:56.204165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:31.180 [2024-07-11 02:41:56.204210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.180 [2024-07-11 02:41:56.206255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.180 [2024-07-11 02:41:56.206326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.180 pt3 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.180 02:41:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:31.437 malloc4 00:18:31.437 02:41:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:31.695 [2024-07-11 02:41:56.585939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:31.695 [2024-07-11 02:41:56.586041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.695 [2024-07-11 02:41:56.586078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:31.695 [2024-07-11 02:41:56.586122] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.695 [2024-07-11 02:41:56.588106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.695 [2024-07-11 02:41:56.588165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:31.695 pt4 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:31.695 [2024-07-11 02:41:56.770052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.695 [2024-07-11 02:41:56.771806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.695 [2024-07-11 02:41:56.771906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.695 [2024-07-11 02:41:56.771963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:31.695 [2024-07-11 02:41:56.772223] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:31.695 [2024-07-11 02:41:56.772251] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.695 [2024-07-11 02:41:56.772403] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:18:31.695 [2024-07-11 02:41:56.772826] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:31.695 [2024-07-11 02:41:56.772866] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:31.695 [2024-07-11 02:41:56.773027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
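Each raid member built above is a 32 MiB malloc bdev with 512-byte blocks (65536 blocks), wrapped in a passthru bdev with a fixed UUID so that members can later be deleted and recreated without losing the backing data; the four passthru bdevs are then assembled into a raid1 with an on-disk superblock (-s). The superblock reservation is consistent with the "data_offset": 2048 and "data_size": 63488 fields read back in the verification that follows (2048 + 63488 = 65536 blocks). The commands, exactly as issued in this run, abbreviated to one member:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b malloc1
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ...repeated for malloc2/pt2, malloc3/pt3, malloc4/pt4...
  $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s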
00:18:31.695 02:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.952 02:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.952 "name": "raid_bdev1", 00:18:31.952 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:31.952 "strip_size_kb": 0, 00:18:31.952 "state": "online", 00:18:31.953 "raid_level": "raid1", 00:18:31.953 "superblock": true, 00:18:31.953 "num_base_bdevs": 4, 00:18:31.953 "num_base_bdevs_discovered": 4, 00:18:31.953 "num_base_bdevs_operational": 4, 00:18:31.953 "base_bdevs_list": [ 00:18:31.953 { 00:18:31.953 "name": "pt1", 00:18:31.953 "uuid": "216d0033-6704-5cf2-acda-72798441c2a8", 00:18:31.953 "is_configured": true, 00:18:31.953 "data_offset": 2048, 00:18:31.953 "data_size": 63488 00:18:31.953 }, 00:18:31.953 { 00:18:31.953 "name": "pt2", 00:18:31.953 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:31.953 "is_configured": true, 00:18:31.953 "data_offset": 2048, 00:18:31.953 "data_size": 63488 00:18:31.953 }, 00:18:31.953 { 00:18:31.953 "name": "pt3", 00:18:31.953 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:31.953 "is_configured": true, 00:18:31.953 "data_offset": 2048, 00:18:31.953 "data_size": 63488 00:18:31.953 }, 00:18:31.953 { 00:18:31.953 "name": "pt4", 00:18:31.953 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:31.953 "is_configured": true, 00:18:31.953 "data_offset": 2048, 00:18:31.953 "data_size": 63488 00:18:31.953 } 00:18:31.953 ] 00:18:31.953 }' 00:18:31.953 02:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.953 02:41:56 -- common/autotest_common.sh@10 -- # set +x 00:18:32.520 02:41:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:32.520 02:41:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:32.778 [2024-07-11 02:41:57.826488] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.778 02:41:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=82918398-072f-4cdd-85f4-ae03243500cc 00:18:32.778 02:41:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 82918398-072f-4cdd-85f4-ae03243500cc ']' 00:18:32.778 02:41:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:33.037 [2024-07-11 02:41:58.074290] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.037 [2024-07-11 02:41:58.074322] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.037 [2024-07-11 02:41:58.074454] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.037 [2024-07-11 02:41:58.074585] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.037 [2024-07-11 02:41:58.074617] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:33.037 02:41:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.037 02:41:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:33.295 02:41:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:33.295 02:41:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:33.295 02:41:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.295 02:41:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
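Two verification idioms recur throughout this test and are both visible above: while the array exists, verify_raid_bdev_state isolates its JSON record with jq and asserts on individual fields; after bdev_raid_delete, the harness captures the now-empty list and asserts nothing remains. Equivalent standalone checks, assuming the same socket:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # full record while the array exists
  raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
  [ -z "$raid_bdev" ]                                                          # must hold once raid_bdev1 is deleted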
00:18:33.554 02:41:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.554 02:41:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:33.813 02:41:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.813 02:41:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:33.813 02:41:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.813 02:41:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:34.071 02:41:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:34.071 02:41:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.329 02:41:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:34.329 02:41:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.329 02:41:59 -- common/autotest_common.sh@640 -- # local es=0 00:18:34.329 02:41:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.329 02:41:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.329 02:41:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.329 02:41:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.329 02:41:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.329 02:41:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.329 02:41:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.329 02:41:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.329 02:41:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:34.329 02:41:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.587 [2024-07-11 02:41:59.502494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.587 [2024-07-11 02:41:59.504308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.587 [2024-07-11 02:41:59.504366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:34.587 [2024-07-11 02:41:59.504405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:34.587 [2024-07-11 02:41:59.504460] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:34.587 [2024-07-11 02:41:59.504590] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:34.587 [2024-07-11 02:41:59.504650] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:34.587 [2024-07-11 02:41:59.504708] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:34.587 [2024-07-11 02:41:59.504753] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.587 [2024-07-11 02:41:59.504766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:18:34.587 request: 00:18:34.587 { 00:18:34.587 "name": "raid_bdev1", 00:18:34.587 "raid_level": "raid1", 00:18:34.587 "base_bdevs": [ 00:18:34.587 "malloc1", 00:18:34.587 "malloc2", 00:18:34.587 "malloc3", 00:18:34.587 "malloc4" 00:18:34.587 ], 00:18:34.587 "superblock": false, 00:18:34.587 "method": "bdev_raid_create", 00:18:34.587 "req_id": 1 00:18:34.587 } 00:18:34.587 Got JSON-RPC error response 00:18:34.587 response: 00:18:34.587 { 00:18:34.587 "code": -17, 00:18:34.587 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.587 } 00:18:34.587 02:41:59 -- common/autotest_common.sh@643 -- # es=1 00:18:34.587 02:41:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:34.587 02:41:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:34.587 02:41:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:34.587 02:41:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.587 02:41:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.845 [2024-07-11 02:41:59.886522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.845 [2024-07-11 02:41:59.886622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.845 [2024-07-11 02:41:59.886660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.845 [2024-07-11 02:41:59.886689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.845 [2024-07-11 02:41:59.888759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.845 [2024-07-11 02:41:59.888846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.845 [2024-07-11 02:41:59.888954] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:34.845 [2024-07-11 02:41:59.889043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.845 pt1 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.845 02:41:59 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.845 02:41:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.103 02:42:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.103 "name": "raid_bdev1", 00:18:35.103 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:35.103 "strip_size_kb": 0, 00:18:35.103 "state": "configuring", 00:18:35.103 "raid_level": "raid1", 00:18:35.103 "superblock": true, 00:18:35.103 "num_base_bdevs": 4, 00:18:35.103 "num_base_bdevs_discovered": 1, 00:18:35.103 "num_base_bdevs_operational": 4, 00:18:35.103 "base_bdevs_list": [ 00:18:35.103 { 00:18:35.103 "name": "pt1", 00:18:35.103 "uuid": "216d0033-6704-5cf2-acda-72798441c2a8", 00:18:35.103 "is_configured": true, 00:18:35.103 "data_offset": 2048, 00:18:35.103 "data_size": 63488 00:18:35.103 }, 00:18:35.103 { 00:18:35.103 "name": null, 00:18:35.103 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:35.103 "is_configured": false, 00:18:35.103 "data_offset": 2048, 00:18:35.103 "data_size": 63488 00:18:35.103 }, 00:18:35.103 { 00:18:35.103 "name": null, 00:18:35.103 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:35.103 "is_configured": false, 00:18:35.103 "data_offset": 2048, 00:18:35.103 "data_size": 63488 00:18:35.103 }, 00:18:35.103 { 00:18:35.103 "name": null, 00:18:35.103 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:35.103 "is_configured": false, 00:18:35.103 "data_offset": 2048, 00:18:35.103 "data_size": 63488 00:18:35.103 } 00:18:35.103 ] 00:18:35.103 }' 00:18:35.103 02:42:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.103 02:42:00 -- common/autotest_common.sh@10 -- # set +x 00:18:35.670 02:42:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:35.670 02:42:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.928 [2024-07-11 02:42:00.910750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.928 [2024-07-11 02:42:00.910861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.928 [2024-07-11 02:42:00.910910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:35.928 [2024-07-11 02:42:00.910984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.928 [2024-07-11 02:42:00.911534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.928 [2024-07-11 02:42:00.911656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.928 [2024-07-11 02:42:00.911771] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:35.928 [2024-07-11 02:42:00.911804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.928 pt2 00:18:35.928 02:42:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:36.187 [2024-07-11 02:42:01.142757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.187 02:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.445 02:42:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.445 "name": "raid_bdev1", 00:18:36.445 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:36.445 "strip_size_kb": 0, 00:18:36.445 "state": "configuring", 00:18:36.445 "raid_level": "raid1", 00:18:36.445 "superblock": true, 00:18:36.445 "num_base_bdevs": 4, 00:18:36.445 "num_base_bdevs_discovered": 1, 00:18:36.445 "num_base_bdevs_operational": 4, 00:18:36.445 "base_bdevs_list": [ 00:18:36.445 { 00:18:36.445 "name": "pt1", 00:18:36.445 "uuid": "216d0033-6704-5cf2-acda-72798441c2a8", 00:18:36.445 "is_configured": true, 00:18:36.445 "data_offset": 2048, 00:18:36.445 "data_size": 63488 00:18:36.445 }, 00:18:36.445 { 00:18:36.445 "name": null, 00:18:36.445 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:36.445 "is_configured": false, 00:18:36.445 "data_offset": 2048, 00:18:36.445 "data_size": 63488 00:18:36.445 }, 00:18:36.445 { 00:18:36.445 "name": null, 00:18:36.445 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:36.445 "is_configured": false, 00:18:36.445 "data_offset": 2048, 00:18:36.445 "data_size": 63488 00:18:36.445 }, 00:18:36.445 { 00:18:36.445 "name": null, 00:18:36.445 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:36.445 "is_configured": false, 00:18:36.445 "data_offset": 2048, 00:18:36.445 "data_size": 63488 00:18:36.445 } 00:18:36.445 ] 00:18:36.445 }' 00:18:36.445 02:42:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.445 02:42:01 -- common/autotest_common.sh@10 -- # set +x 00:18:37.011 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:37.011 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.011 02:42:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.270 [2024-07-11 02:42:02.258973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.270 [2024-07-11 02:42:02.259095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.270 [2024-07-11 02:42:02.259141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:37.270 [2024-07-11 02:42:02.259168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.270 [2024-07-11 02:42:02.259675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.270 [2024-07-11 02:42:02.259728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.270 [2024-07-11 02:42:02.259814] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:37.270 [2024-07-11 
02:42:02.259844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.270 pt2 00:18:37.270 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:37.270 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.270 02:42:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:37.528 [2024-07-11 02:42:02.527135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:37.528 [2024-07-11 02:42:02.527263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.528 [2024-07-11 02:42:02.527307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:37.528 [2024-07-11 02:42:02.527342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.528 [2024-07-11 02:42:02.527828] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.528 [2024-07-11 02:42:02.527927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:37.528 [2024-07-11 02:42:02.528033] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:37.528 [2024-07-11 02:42:02.528065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.528 pt3 00:18:37.528 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:37.528 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.528 02:42:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:37.787 [2024-07-11 02:42:02.735127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:37.787 [2024-07-11 02:42:02.735245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.787 [2024-07-11 02:42:02.735287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:37.787 [2024-07-11 02:42:02.735318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.787 [2024-07-11 02:42:02.735811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.787 [2024-07-11 02:42:02.735873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:37.787 [2024-07-11 02:42:02.735960] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:37.787 [2024-07-11 02:42:02.735992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:37.787 [2024-07-11 02:42:02.736155] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:37.787 [2024-07-11 02:42:02.736171] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.787 [2024-07-11 02:42:02.736267] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:18:37.787 [2024-07-11 02:42:02.736621] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:37.787 [2024-07-11 02:42:02.736637] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:37.787 [2024-07-11 02:42:02.736743] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.787 pt4 
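Note that no bdev_raid_create is issued in this sequence: it exercises superblock-driven reassembly. Recreating pt1 lets the examine path find the raid superblock on it ("raid superblock found on bdev pt1"), which re-registers raid_bdev1 in the configuring state with 1 of 4 members; each passthru bdev recreated afterwards is re-claimed the same way (pt2 is even created and removed once mid-way to check that a configuring array survives losing a member), and when the fourth member arrives the array is configured automatically and comes back online. The reassembly loop, assuming the same names and UUIDs as this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 2 3 4; do
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done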
00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.787 02:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.045 02:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.045 "name": "raid_bdev1", 00:18:38.045 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:38.045 "strip_size_kb": 0, 00:18:38.045 "state": "online", 00:18:38.045 "raid_level": "raid1", 00:18:38.045 "superblock": true, 00:18:38.045 "num_base_bdevs": 4, 00:18:38.045 "num_base_bdevs_discovered": 4, 00:18:38.045 "num_base_bdevs_operational": 4, 00:18:38.045 "base_bdevs_list": [ 00:18:38.045 { 00:18:38.045 "name": "pt1", 00:18:38.045 "uuid": "216d0033-6704-5cf2-acda-72798441c2a8", 00:18:38.045 "is_configured": true, 00:18:38.045 "data_offset": 2048, 00:18:38.045 "data_size": 63488 00:18:38.045 }, 00:18:38.045 { 00:18:38.045 "name": "pt2", 00:18:38.045 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:38.045 "is_configured": true, 00:18:38.045 "data_offset": 2048, 00:18:38.045 "data_size": 63488 00:18:38.045 }, 00:18:38.045 { 00:18:38.045 "name": "pt3", 00:18:38.045 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:38.045 "is_configured": true, 00:18:38.045 "data_offset": 2048, 00:18:38.045 "data_size": 63488 00:18:38.045 }, 00:18:38.045 { 00:18:38.045 "name": "pt4", 00:18:38.045 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:38.045 "is_configured": true, 00:18:38.045 "data_offset": 2048, 00:18:38.045 "data_size": 63488 00:18:38.045 } 00:18:38.045 ] 00:18:38.045 }' 00:18:38.045 02:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.045 02:42:02 -- common/autotest_common.sh@10 -- # set +x 00:18:38.611 02:42:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.611 02:42:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:38.867 [2024-07-11 02:42:03.739557] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@430 -- # '[' 82918398-072f-4cdd-85f4-ae03243500cc '!=' 82918398-072f-4cdd-85f4-ae03243500cc ']' 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:38.868 [2024-07-11 02:42:03.939421] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.868 02:42:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.125 02:42:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.125 "name": "raid_bdev1", 00:18:39.125 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:39.125 "strip_size_kb": 0, 00:18:39.125 "state": "online", 00:18:39.125 "raid_level": "raid1", 00:18:39.125 "superblock": true, 00:18:39.125 "num_base_bdevs": 4, 00:18:39.125 "num_base_bdevs_discovered": 3, 00:18:39.125 "num_base_bdevs_operational": 3, 00:18:39.125 "base_bdevs_list": [ 00:18:39.125 { 00:18:39.125 "name": null, 00:18:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.125 "is_configured": false, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 }, 00:18:39.125 { 00:18:39.125 "name": "pt2", 00:18:39.125 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:39.125 "is_configured": true, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 }, 00:18:39.125 { 00:18:39.125 "name": "pt3", 00:18:39.125 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:39.125 "is_configured": true, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 }, 00:18:39.125 { 00:18:39.125 "name": "pt4", 00:18:39.125 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:39.125 "is_configured": true, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 } 00:18:39.125 ] 00:18:39.125 }' 00:18:39.125 02:42:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.125 02:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:40.060 02:42:04 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:40.060 [2024-07-11 02:42:05.087666] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.060 [2024-07-11 02:42:05.087696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.060 [2024-07-11 02:42:05.087777] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.060 [2024-07-11 02:42:05.087849] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.060 [2024-07-11 02:42:05.087859] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:40.060 02:42:05 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:40.060 02:42:05 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:40.318 02:42:05 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:40.318 02:42:05 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:40.318 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:40.318 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:40.318 02:42:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:40.576 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:40.576 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:40.576 02:42:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:40.834 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:40.834 02:42:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:40.834 02:42:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:41.093 02:42:06 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:41.093 02:42:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:41.093 02:42:06 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:41.093 02:42:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:41.093 02:42:06 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.351 [2024-07-11 02:42:06.231841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.351 [2024-07-11 02:42:06.231951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.351 [2024-07-11 02:42:06.231987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:41.351 [2024-07-11 02:42:06.232033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.351 [2024-07-11 02:42:06.234070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.351 [2024-07-11 02:42:06.234161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.351 [2024-07-11 02:42:06.234256] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.351 [2024-07-11 02:42:06.234300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.351 pt2 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.351 "name": "raid_bdev1", 00:18:41.351 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:41.351 "strip_size_kb": 0, 00:18:41.351 "state": "configuring", 00:18:41.351 "raid_level": "raid1", 00:18:41.351 "superblock": true, 00:18:41.351 "num_base_bdevs": 4, 00:18:41.351 "num_base_bdevs_discovered": 1, 00:18:41.351 "num_base_bdevs_operational": 3, 00:18:41.351 "base_bdevs_list": [ 00:18:41.351 { 00:18:41.351 "name": null, 00:18:41.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.351 "is_configured": false, 00:18:41.351 "data_offset": 2048, 00:18:41.351 "data_size": 63488 00:18:41.351 }, 00:18:41.351 { 00:18:41.351 "name": "pt2", 00:18:41.351 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:41.351 "is_configured": true, 00:18:41.351 "data_offset": 2048, 00:18:41.351 "data_size": 63488 00:18:41.351 }, 00:18:41.351 { 00:18:41.351 "name": null, 00:18:41.351 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:41.351 "is_configured": false, 00:18:41.351 "data_offset": 2048, 00:18:41.351 "data_size": 63488 00:18:41.351 }, 00:18:41.351 { 00:18:41.351 "name": null, 00:18:41.351 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:41.351 "is_configured": false, 00:18:41.351 "data_offset": 2048, 00:18:41.351 "data_size": 63488 00:18:41.351 } 00:18:41.351 ] 00:18:41.351 }' 00:18:41.351 02:42:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.351 02:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.312 [2024-07-11 02:42:07.306227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.312 [2024-07-11 02:42:07.306324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.312 [2024-07-11 02:42:07.306368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:42.312 [2024-07-11 02:42:07.306390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.312 [2024-07-11 02:42:07.306880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.312 [2024-07-11 02:42:07.306936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.312 [2024-07-11 02:42:07.307043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:42.312 [2024-07-11 02:42:07.307088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.312 pt3 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.312 02:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.570 02:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.570 "name": "raid_bdev1", 00:18:42.570 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:42.570 "strip_size_kb": 0, 00:18:42.570 "state": "configuring", 00:18:42.570 "raid_level": "raid1", 00:18:42.570 "superblock": true, 00:18:42.570 "num_base_bdevs": 4, 00:18:42.570 "num_base_bdevs_discovered": 2, 00:18:42.570 "num_base_bdevs_operational": 3, 00:18:42.570 "base_bdevs_list": [ 00:18:42.570 { 00:18:42.570 "name": null, 00:18:42.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.570 "is_configured": false, 00:18:42.570 "data_offset": 2048, 00:18:42.570 "data_size": 63488 00:18:42.570 }, 00:18:42.570 { 00:18:42.570 "name": "pt2", 00:18:42.570 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:42.570 "is_configured": true, 00:18:42.570 "data_offset": 2048, 00:18:42.570 "data_size": 63488 00:18:42.570 }, 00:18:42.570 { 00:18:42.570 "name": "pt3", 00:18:42.570 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:42.570 "is_configured": true, 00:18:42.570 "data_offset": 2048, 00:18:42.570 "data_size": 63488 00:18:42.570 }, 00:18:42.570 { 00:18:42.570 "name": null, 00:18:42.570 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:42.570 "is_configured": false, 00:18:42.570 "data_offset": 2048, 00:18:42.570 "data_size": 63488 00:18:42.570 } 00:18:42.570 ] 00:18:42.570 }' 00:18:42.570 02:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.570 02:42:07 -- common/autotest_common.sh@10 -- # set +x 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@462 -- # i=3 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:43.503 [2024-07-11 02:42:08.454496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:43.503 [2024-07-11 02:42:08.454616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.503 [2024-07-11 02:42:08.454656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:43.503 [2024-07-11 02:42:08.454678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.503 [2024-07-11 02:42:08.455174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.503 [2024-07-11 02:42:08.455254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:43.503 [2024-07-11 02:42:08.455353] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:43.503 [2024-07-11 02:42:08.455384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:43.503 [2024-07-11 02:42:08.455517] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:18:43.503 [2024-07-11 02:42:08.455537] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:18:43.503 [2024-07-11 02:42:08.455645] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:18:43.503 [2024-07-11 02:42:08.456009] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:18:43.503 [2024-07-11 02:42:08.456033] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:18:43.503 [2024-07-11 02:42:08.456140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.503 pt4 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.503 02:42:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.762 02:42:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.762 "name": "raid_bdev1", 00:18:43.762 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:43.762 "strip_size_kb": 0, 00:18:43.762 "state": "online", 00:18:43.762 "raid_level": "raid1", 00:18:43.762 "superblock": true, 00:18:43.762 "num_base_bdevs": 4, 00:18:43.762 "num_base_bdevs_discovered": 3, 00:18:43.762 "num_base_bdevs_operational": 3, 00:18:43.762 "base_bdevs_list": [ 00:18:43.762 { 00:18:43.762 "name": null, 00:18:43.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.762 "is_configured": false, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "pt2", 00:18:43.762 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "pt3", 00:18:43.762 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "pt4", 00:18:43.762 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 } 00:18:43.762 ] 00:18:43.762 }' 00:18:43.762 02:42:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.762 02:42:08 -- common/autotest_common.sh@10 -- # set +x 00:18:44.328 02:42:09 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:18:44.328 02:42:09 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:44.586 [2024-07-11 02:42:09.582654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.586 [2024-07-11 02:42:09.582687] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:18:44.586 [2024-07-11 02:42:09.582760] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.586 [2024-07-11 02:42:09.582828] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.586 [2024-07-11 02:42:09.582838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:18:44.586 02:42:09 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.586 02:42:09 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:44.844 02:42:09 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:44.844 02:42:09 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:44.844 02:42:09 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.103 [2024-07-11 02:42:10.090697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.103 [2024-07-11 02:42:10.090782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.103 [2024-07-11 02:42:10.090826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:45.103 [2024-07-11 02:42:10.090847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.103 [2024-07-11 02:42:10.092946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.103 [2024-07-11 02:42:10.093009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.103 [2024-07-11 02:42:10.093100] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:45.103 [2024-07-11 02:42:10.093144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.103 pt1 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.103 02:42:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.361 02:42:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.361 "name": "raid_bdev1", 00:18:45.361 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:45.361 "strip_size_kb": 0, 00:18:45.361 "state": "configuring", 00:18:45.361 "raid_level": "raid1", 00:18:45.361 "superblock": true, 00:18:45.361 "num_base_bdevs": 4, 00:18:45.361 "num_base_bdevs_discovered": 1, 00:18:45.361 "num_base_bdevs_operational": 4, 00:18:45.361 "base_bdevs_list": [ 00:18:45.361 { 00:18:45.361 "name": "pt1", 00:18:45.361 "uuid": 
"216d0033-6704-5cf2-acda-72798441c2a8", 00:18:45.361 "is_configured": true, 00:18:45.361 "data_offset": 2048, 00:18:45.361 "data_size": 63488 00:18:45.361 }, 00:18:45.361 { 00:18:45.361 "name": null, 00:18:45.361 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:45.361 "is_configured": false, 00:18:45.361 "data_offset": 2048, 00:18:45.361 "data_size": 63488 00:18:45.361 }, 00:18:45.361 { 00:18:45.361 "name": null, 00:18:45.361 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:45.361 "is_configured": false, 00:18:45.361 "data_offset": 2048, 00:18:45.361 "data_size": 63488 00:18:45.361 }, 00:18:45.361 { 00:18:45.361 "name": null, 00:18:45.361 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:45.361 "is_configured": false, 00:18:45.361 "data_offset": 2048, 00:18:45.361 "data_size": 63488 00:18:45.361 } 00:18:45.361 ] 00:18:45.361 }' 00:18:45.361 02:42:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.361 02:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 02:42:10 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:45.926 02:42:10 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:45.926 02:42:10 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.183 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:46.183 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:46.183 02:42:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:46.442 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:46.442 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:46.442 02:42:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:46.700 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:46.700 02:42:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:46.700 02:42:11 -- bdev/bdev_raid.sh@489 -- # i=3 00:18:46.700 02:42:11 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:46.958 [2024-07-11 02:42:11.854016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:46.958 [2024-07-11 02:42:11.854126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.958 [2024-07-11 02:42:11.854180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:46.958 [2024-07-11 02:42:11.854205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.959 [2024-07-11 02:42:11.854649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.959 [2024-07-11 02:42:11.854707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:46.959 [2024-07-11 02:42:11.854797] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:46.959 [2024-07-11 02:42:11.854826] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:46.959 [2024-07-11 02:42:11.854833] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.959 [2024-07-11 02:42:11.854861] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 
00:18:46.959 [2024-07-11 02:42:11.854923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:46.959 pt4 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.959 02:42:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.217 02:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.217 "name": "raid_bdev1", 00:18:47.217 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:47.217 "strip_size_kb": 0, 00:18:47.217 "state": "configuring", 00:18:47.217 "raid_level": "raid1", 00:18:47.217 "superblock": true, 00:18:47.217 "num_base_bdevs": 4, 00:18:47.217 "num_base_bdevs_discovered": 1, 00:18:47.217 "num_base_bdevs_operational": 3, 00:18:47.217 "base_bdevs_list": [ 00:18:47.217 { 00:18:47.217 "name": null, 00:18:47.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.217 "is_configured": false, 00:18:47.217 "data_offset": 2048, 00:18:47.217 "data_size": 63488 00:18:47.217 }, 00:18:47.217 { 00:18:47.217 "name": null, 00:18:47.217 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:47.217 "is_configured": false, 00:18:47.217 "data_offset": 2048, 00:18:47.217 "data_size": 63488 00:18:47.217 }, 00:18:47.217 { 00:18:47.217 "name": null, 00:18:47.217 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:47.217 "is_configured": false, 00:18:47.217 "data_offset": 2048, 00:18:47.217 "data_size": 63488 00:18:47.217 }, 00:18:47.217 { 00:18:47.217 "name": "pt4", 00:18:47.217 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:47.217 "is_configured": true, 00:18:47.217 "data_offset": 2048, 00:18:47.217 "data_size": 63488 00:18:47.217 } 00:18:47.217 ] 00:18:47.217 }' 00:18:47.217 02:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.217 02:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:47.783 02:42:12 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:47.783 02:42:12 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:47.783 02:42:12 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.042 [2024-07-11 02:42:12.978356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.042 [2024-07-11 02:42:12.978472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.042 [2024-07-11 02:42:12.978509] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:48.042 [2024-07-11 02:42:12.978535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.042 [2024-07-11 
02:42:12.979009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.042 [2024-07-11 02:42:12.979070] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.042 [2024-07-11 02:42:12.979179] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:48.042 [2024-07-11 02:42:12.979227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.042 pt2 00:18:48.042 02:42:12 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:48.042 02:42:12 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:48.042 02:42:12 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:48.300 [2024-07-11 02:42:13.238445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:48.300 [2024-07-11 02:42:13.238548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.300 [2024-07-11 02:42:13.238593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:48.300 [2024-07-11 02:42:13.238626] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.300 [2024-07-11 02:42:13.239145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.300 [2024-07-11 02:42:13.239248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:48.300 [2024-07-11 02:42:13.239352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:48.300 [2024-07-11 02:42:13.239386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:48.300 [2024-07-11 02:42:13.239560] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:18:48.300 [2024-07-11 02:42:13.239576] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:48.300 [2024-07-11 02:42:13.239670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002fc0 00:18:48.300 [2024-07-11 02:42:13.240080] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:18:48.300 [2024-07-11 02:42:13.240111] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:18:48.300 [2024-07-11 02:42:13.240252] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.300 pt3 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.300 02:42:13 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.300 02:42:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.558 02:42:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.558 "name": "raid_bdev1", 00:18:48.558 "uuid": "82918398-072f-4cdd-85f4-ae03243500cc", 00:18:48.558 "strip_size_kb": 0, 00:18:48.558 "state": "online", 00:18:48.558 "raid_level": "raid1", 00:18:48.558 "superblock": true, 00:18:48.558 "num_base_bdevs": 4, 00:18:48.558 "num_base_bdevs_discovered": 3, 00:18:48.558 "num_base_bdevs_operational": 3, 00:18:48.558 "base_bdevs_list": [ 00:18:48.558 { 00:18:48.558 "name": null, 00:18:48.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.558 "is_configured": false, 00:18:48.558 "data_offset": 2048, 00:18:48.558 "data_size": 63488 00:18:48.558 }, 00:18:48.558 { 00:18:48.558 "name": "pt2", 00:18:48.558 "uuid": "d228de85-2cc9-5425-a056-c2dd28e44d9f", 00:18:48.558 "is_configured": true, 00:18:48.558 "data_offset": 2048, 00:18:48.558 "data_size": 63488 00:18:48.558 }, 00:18:48.558 { 00:18:48.558 "name": "pt3", 00:18:48.558 "uuid": "b8d9f480-3184-5748-9241-aacc627a3c3b", 00:18:48.558 "is_configured": true, 00:18:48.558 "data_offset": 2048, 00:18:48.558 "data_size": 63488 00:18:48.558 }, 00:18:48.558 { 00:18:48.558 "name": "pt4", 00:18:48.558 "uuid": "8d1ae1fb-ad38-50ad-ad43-af8a355f5c81", 00:18:48.558 "is_configured": true, 00:18:48.558 "data_offset": 2048, 00:18:48.558 "data_size": 63488 00:18:48.558 } 00:18:48.558 ] 00:18:48.558 }' 00:18:48.558 02:42:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.558 02:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.123 02:42:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:49.123 02:42:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:49.381 [2024-07-11 02:42:14.322792] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.381 02:42:14 -- bdev/bdev_raid.sh@506 -- # '[' 82918398-072f-4cdd-85f4-ae03243500cc '!=' 82918398-072f-4cdd-85f4-ae03243500cc ']' 00:18:49.381 02:42:14 -- bdev/bdev_raid.sh@511 -- # killprocess 134414 00:18:49.381 02:42:14 -- common/autotest_common.sh@926 -- # '[' -z 134414 ']' 00:18:49.381 02:42:14 -- common/autotest_common.sh@930 -- # kill -0 134414 00:18:49.381 02:42:14 -- common/autotest_common.sh@931 -- # uname 00:18:49.381 02:42:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:49.381 02:42:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134414 00:18:49.381 killing process with pid 134414 00:18:49.381 02:42:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:49.381 02:42:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:49.381 02:42:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134414' 00:18:49.381 02:42:14 -- common/autotest_common.sh@945 -- # kill 134414 00:18:49.381 02:42:14 -- common/autotest_common.sh@950 -- # wait 134414 00:18:49.381 [2024-07-11 02:42:14.360771] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.381 [2024-07-11 02:42:14.360883] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.381 [2024-07-11 02:42:14.361004] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.381 [2024-07-11 
02:42:14.361034] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:18:49.381 [2024-07-11 02:42:14.405488] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.640 ************************************ 00:18:49.640 END TEST raid_superblock_test 00:18:49.640 ************************************ 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:49.640 00:18:49.640 real 0m20.759s 00:18:49.640 user 0m39.587s 00:18:49.640 sys 0m2.174s 00:18:49.640 02:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.640 02:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:18:49.640 02:42:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:49.640 02:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:49.640 02:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.640 ************************************ 00:18:49.640 START TEST raid_rebuild_test 00:18:49.640 ************************************ 00:18:49.640 02:42:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=135119 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:49.640 02:42:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135119 /var/tmp/spdk-raid.sock 00:18:49.640 02:42:14 -- common/autotest_common.sh@819 -- # '[' -z 135119 ']' 00:18:49.640 02:42:14 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:18:49.640 02:42:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:49.640 02:42:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:49.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:49.640 02:42:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:49.640 02:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.899 [2024-07-11 02:42:14.762060] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:49.899 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.899 Zero copy mechanism will not be used. 00:18:49.899 [2024-07-11 02:42:14.762288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135119 ] 00:18:49.899 [2024-07-11 02:42:14.911763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.157 [2024-07-11 02:42:14.992991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.157 [2024-07-11 02:42:15.050709] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.724 02:42:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:50.724 02:42:15 -- common/autotest_common.sh@852 -- # return 0 00:18:50.724 02:42:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:50.724 02:42:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:50.724 02:42:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:50.983 BaseBdev1 00:18:50.983 02:42:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:50.983 02:42:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:50.983 02:42:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.243 BaseBdev2 00:18:51.243 02:42:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:51.502 spare_malloc 00:18:51.502 02:42:16 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:51.502 spare_delay 00:18:51.761 02:42:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:51.761 [2024-07-11 02:42:16.835182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.761 [2024-07-11 02:42:16.835299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.762 [2024-07-11 02:42:16.835332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:51.762 [2024-07-11 02:42:16.835419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.762 [2024-07-11 02:42:16.837542] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.762 [2024-07-11 02:42:16.837587] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.762 spare 
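The spare device for the upcoming rebuild is built as a three-layer chain: a malloc bdev wrapped by a delay bdev wrapped by a passthru bdev named spare, so related test variants can inject latency without changing the raid-facing name. A minimal sketch of the chain as traced above (32 is the malloc size in MiB, 512 the block size in bytes; the -r/-t/-w/-n values are the average and tail read/write latencies in microseconds, per bdev_delay_create):

  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000   # zero read latency, 100 ms write latency
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare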
00:18:51.762 02:42:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:52.021 [2024-07-11 02:42:17.023241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.021 [2024-07-11 02:42:17.024894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.021 [2024-07-11 02:42:17.024974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:52.021 [2024-07-11 02:42:17.024987] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:52.021 [2024-07-11 02:42:17.025133] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:52.021 [2024-07-11 02:42:17.025510] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:52.021 [2024-07-11 02:42:17.025547] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007580 00:18:52.021 [2024-07-11 02:42:17.025765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.021 02:42:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.279 02:42:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.279 "name": "raid_bdev1", 00:18:52.279 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:18:52.279 "strip_size_kb": 0, 00:18:52.279 "state": "online", 00:18:52.279 "raid_level": "raid1", 00:18:52.279 "superblock": false, 00:18:52.279 "num_base_bdevs": 2, 00:18:52.279 "num_base_bdevs_discovered": 2, 00:18:52.279 "num_base_bdevs_operational": 2, 00:18:52.279 "base_bdevs_list": [ 00:18:52.279 { 00:18:52.279 "name": "BaseBdev1", 00:18:52.279 "uuid": "3da8e497-f298-4925-bf96-25a12ac6913c", 00:18:52.279 "is_configured": true, 00:18:52.279 "data_offset": 0, 00:18:52.279 "data_size": 65536 00:18:52.279 }, 00:18:52.279 { 00:18:52.279 "name": "BaseBdev2", 00:18:52.279 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:18:52.279 "is_configured": true, 00:18:52.279 "data_offset": 0, 00:18:52.279 "data_size": 65536 00:18:52.279 } 00:18:52.279 ] 00:18:52.279 }' 00:18:52.279 02:42:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.279 02:42:17 -- common/autotest_common.sh@10 -- # set +x 00:18:52.844 02:42:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:52.844 02:42:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 
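With raid_bdev1 online over BaseBdev1 and BaseBdev2, the two queries traced here read the array's block count and the base bdevs' data_offset, so that the I/O pass that follows covers exactly the whole device. As a standalone sketch of the same lookups:

  blocks=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
      | jq -r '.[].num_blocks')
  offset=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[].base_bdevs_list[0].data_offset')
  echo "raid_bdev1: ${blocks} blocks, base data_offset ${offset}"  # 65536 blocks, offset 0 here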
00:18:53.103 [2024-07-11 02:42:18.083601] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.103 02:42:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:18:53.103 02:42:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.103 02:42:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:53.361 02:42:18 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:18:53.361 02:42:18 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:18:53.361 02:42:18 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:18:53.361 02:42:18 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@12 -- # local i 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.361 02:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:53.620 [2024-07-11 02:42:18.587591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:53.620 /dev/nbd0 00:18:53.620 02:42:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.620 02:42:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.620 02:42:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:18:53.620 02:42:18 -- common/autotest_common.sh@857 -- # local i 00:18:53.620 02:42:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:18:53.620 02:42:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:18:53.620 02:42:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:18:53.620 02:42:18 -- common/autotest_common.sh@861 -- # break 00:18:53.620 02:42:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:53.620 02:42:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:53.620 02:42:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.620 1+0 records in 00:18:53.620 1+0 records out 00:18:53.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266824 s, 15.4 MB/s 00:18:53.620 02:42:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.620 02:42:18 -- common/autotest_common.sh@874 -- # size=4096 00:18:53.620 02:42:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.620 02:42:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:18:53.620 02:42:18 -- common/autotest_common.sh@877 -- # return 0 00:18:53.620 02:42:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.620 02:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.620 02:42:18 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:18:53.620 02:42:18 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:18:53.620 02:42:18 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:57.825 65536+0 records in 00:18:57.825 65536+0 records out 
00:18:57.825 33554432 bytes (34 MB, 32 MiB) copied, 4.23012 s, 7.9 MB/s 00:18:57.825 02:42:22 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@51 -- # local i 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:57.825 02:42:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.084 02:42:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:18:58.084 [2024-07-11 02:42:23.136043] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.343 02:42:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:18:58.343 02:42:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.343 02:42:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.343 02:42:23 -- bdev/nbd_common.sh@41 -- # break 00:18:58.343 02:42:23 -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:58.343 [2024-07-11 02:42:23.407691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.343 02:42:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.602 02:42:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.602 "name": "raid_bdev1", 00:18:58.602 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:18:58.602 "strip_size_kb": 0, 00:18:58.602 "state": "online", 00:18:58.602 "raid_level": "raid1", 00:18:58.602 "superblock": false, 00:18:58.602 "num_base_bdevs": 2, 00:18:58.602 "num_base_bdevs_discovered": 1, 00:18:58.602 "num_base_bdevs_operational": 1, 00:18:58.602 "base_bdevs_list": [ 00:18:58.602 { 00:18:58.602 "name": null, 00:18:58.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.602 "is_configured": false, 00:18:58.602 "data_offset": 0, 
00:18:58.602 "data_size": 65536 00:18:58.602 }, 00:18:58.602 { 00:18:58.602 "name": "BaseBdev2", 00:18:58.602 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:18:58.602 "is_configured": true, 00:18:58.602 "data_offset": 0, 00:18:58.602 "data_size": 65536 00:18:58.602 } 00:18:58.602 ] 00:18:58.602 }' 00:18:58.602 02:42:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.602 02:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:59.535 02:42:24 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:59.535 [2024-07-11 02:42:24.559896] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:59.535 [2024-07-11 02:42:24.559964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.535 [2024-07-11 02:42:24.565003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d07e90 00:18:59.535 [2024-07-11 02:42:24.566866] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.535 02:42:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:00.910 "name": "raid_bdev1", 00:19:00.910 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:00.910 "strip_size_kb": 0, 00:19:00.910 "state": "online", 00:19:00.910 "raid_level": "raid1", 00:19:00.910 "superblock": false, 00:19:00.910 "num_base_bdevs": 2, 00:19:00.910 "num_base_bdevs_discovered": 2, 00:19:00.910 "num_base_bdevs_operational": 2, 00:19:00.910 "process": { 00:19:00.910 "type": "rebuild", 00:19:00.910 "target": "spare", 00:19:00.910 "progress": { 00:19:00.910 "blocks": 24576, 00:19:00.910 "percent": 37 00:19:00.910 } 00:19:00.910 }, 00:19:00.910 "base_bdevs_list": [ 00:19:00.910 { 00:19:00.910 "name": "spare", 00:19:00.910 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:00.910 "is_configured": true, 00:19:00.910 "data_offset": 0, 00:19:00.910 "data_size": 65536 00:19:00.910 }, 00:19:00.910 { 00:19:00.910 "name": "BaseBdev2", 00:19:00.910 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:00.910 "is_configured": true, 00:19:00.910 "data_offset": 0, 00:19:00.910 "data_size": 65536 00:19:00.910 } 00:19:00.910 ] 00:19:00.910 }' 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.910 02:42:25 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:01.169 [2024-07-11 02:42:26.145621] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:01.169 [2024-07-11 02:42:26.175645] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.169 [2024-07-11 02:42:26.175739] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.169 02:42:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.427 02:42:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.427 "name": "raid_bdev1", 00:19:01.427 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:01.427 "strip_size_kb": 0, 00:19:01.427 "state": "online", 00:19:01.427 "raid_level": "raid1", 00:19:01.427 "superblock": false, 00:19:01.427 "num_base_bdevs": 2, 00:19:01.427 "num_base_bdevs_discovered": 1, 00:19:01.427 "num_base_bdevs_operational": 1, 00:19:01.427 "base_bdevs_list": [ 00:19:01.427 { 00:19:01.427 "name": null, 00:19:01.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.428 "is_configured": false, 00:19:01.428 "data_offset": 0, 00:19:01.428 "data_size": 65536 00:19:01.428 }, 00:19:01.428 { 00:19:01.428 "name": "BaseBdev2", 00:19:01.428 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:01.428 "is_configured": true, 00:19:01.428 "data_offset": 0, 00:19:01.428 "data_size": 65536 00:19:01.428 } 00:19:01.428 ] 00:19:01.428 }' 00:19:01.428 02:42:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.428 02:42:26 -- common/autotest_common.sh@10 -- # set +x 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.994 02:42:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.253 02:42:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:02.253 "name": "raid_bdev1", 00:19:02.253 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:02.253 "strip_size_kb": 0, 00:19:02.253 "state": "online", 00:19:02.253 "raid_level": "raid1", 00:19:02.253 "superblock": false, 00:19:02.253 "num_base_bdevs": 2, 00:19:02.253 "num_base_bdevs_discovered": 1, 00:19:02.253 "num_base_bdevs_operational": 1, 00:19:02.253 "base_bdevs_list": [ 00:19:02.253 { 00:19:02.253 "name": null, 00:19:02.253 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:02.253 "is_configured": false, 00:19:02.253 "data_offset": 0, 00:19:02.253 "data_size": 65536 00:19:02.253 }, 00:19:02.253 { 00:19:02.253 "name": "BaseBdev2", 00:19:02.253 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:02.253 "is_configured": true, 00:19:02.253 "data_offset": 0, 00:19:02.253 "data_size": 65536 00:19:02.253 } 00:19:02.253 ] 00:19:02.253 }' 00:19:02.253 02:42:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:02.253 02:42:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:02.253 02:42:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:02.512 02:42:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:02.512 02:42:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.770 [2024-07-11 02:42:27.612687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:02.770 [2024-07-11 02:42:27.612729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.770 [2024-07-11 02:42:27.617601] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d08030 00:19:02.770 [2024-07-11 02:42:27.619462] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.770 02:42:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.705 02:42:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:03.964 "name": "raid_bdev1", 00:19:03.964 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:03.964 "strip_size_kb": 0, 00:19:03.964 "state": "online", 00:19:03.964 "raid_level": "raid1", 00:19:03.964 "superblock": false, 00:19:03.964 "num_base_bdevs": 2, 00:19:03.964 "num_base_bdevs_discovered": 2, 00:19:03.964 "num_base_bdevs_operational": 2, 00:19:03.964 "process": { 00:19:03.964 "type": "rebuild", 00:19:03.964 "target": "spare", 00:19:03.964 "progress": { 00:19:03.964 "blocks": 24576, 00:19:03.964 "percent": 37 00:19:03.964 } 00:19:03.964 }, 00:19:03.964 "base_bdevs_list": [ 00:19:03.964 { 00:19:03.964 "name": "spare", 00:19:03.964 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:03.964 "is_configured": true, 00:19:03.964 "data_offset": 0, 00:19:03.964 "data_size": 65536 00:19:03.964 }, 00:19:03.964 { 00:19:03.964 "name": "BaseBdev2", 00:19:03.964 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:03.964 "is_configured": true, 00:19:03.964 "data_offset": 0, 00:19:03.964 "data_size": 65536 00:19:03.964 } 00:19:03.964 ] 00:19:03.964 }' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:03.964 02:42:28 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@657 -- # local timeout=362 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.964 02:42:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.222 02:42:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:04.222 "name": "raid_bdev1", 00:19:04.222 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:04.222 "strip_size_kb": 0, 00:19:04.222 "state": "online", 00:19:04.222 "raid_level": "raid1", 00:19:04.222 "superblock": false, 00:19:04.222 "num_base_bdevs": 2, 00:19:04.222 "num_base_bdevs_discovered": 2, 00:19:04.222 "num_base_bdevs_operational": 2, 00:19:04.222 "process": { 00:19:04.222 "type": "rebuild", 00:19:04.222 "target": "spare", 00:19:04.222 "progress": { 00:19:04.222 "blocks": 30720, 00:19:04.222 "percent": 46 00:19:04.222 } 00:19:04.222 }, 00:19:04.222 "base_bdevs_list": [ 00:19:04.222 { 00:19:04.222 "name": "spare", 00:19:04.222 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:04.222 "is_configured": true, 00:19:04.222 "data_offset": 0, 00:19:04.222 "data_size": 65536 00:19:04.222 }, 00:19:04.222 { 00:19:04.222 "name": "BaseBdev2", 00:19:04.222 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:04.222 "is_configured": true, 00:19:04.222 "data_offset": 0, 00:19:04.223 "data_size": 65536 00:19:04.223 } 00:19:04.223 ] 00:19:04.223 }' 00:19:04.223 02:42:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:04.223 02:42:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.223 02:42:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:04.223 02:42:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.223 02:42:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:05.596 "name": "raid_bdev1", 00:19:05.596 "uuid": 
"634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:05.596 "strip_size_kb": 0, 00:19:05.596 "state": "online", 00:19:05.596 "raid_level": "raid1", 00:19:05.596 "superblock": false, 00:19:05.596 "num_base_bdevs": 2, 00:19:05.596 "num_base_bdevs_discovered": 2, 00:19:05.596 "num_base_bdevs_operational": 2, 00:19:05.596 "process": { 00:19:05.596 "type": "rebuild", 00:19:05.596 "target": "spare", 00:19:05.596 "progress": { 00:19:05.596 "blocks": 57344, 00:19:05.596 "percent": 87 00:19:05.596 } 00:19:05.596 }, 00:19:05.596 "base_bdevs_list": [ 00:19:05.596 { 00:19:05.596 "name": "spare", 00:19:05.596 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:05.596 "is_configured": true, 00:19:05.596 "data_offset": 0, 00:19:05.596 "data_size": 65536 00:19:05.596 }, 00:19:05.596 { 00:19:05.596 "name": "BaseBdev2", 00:19:05.596 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:05.596 "is_configured": true, 00:19:05.596 "data_offset": 0, 00:19:05.596 "data_size": 65536 00:19:05.596 } 00:19:05.596 ] 00:19:05.596 }' 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.596 02:42:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:05.854 [2024-07-11 02:42:30.835336] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:05.854 [2024-07-11 02:42:30.835409] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:05.854 [2024-07-11 02:42:30.835495] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:06.788 "name": "raid_bdev1", 00:19:06.788 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:06.788 "strip_size_kb": 0, 00:19:06.788 "state": "online", 00:19:06.788 "raid_level": "raid1", 00:19:06.788 "superblock": false, 00:19:06.788 "num_base_bdevs": 2, 00:19:06.788 "num_base_bdevs_discovered": 2, 00:19:06.788 "num_base_bdevs_operational": 2, 00:19:06.788 "base_bdevs_list": [ 00:19:06.788 { 00:19:06.788 "name": "spare", 00:19:06.788 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:06.788 "is_configured": true, 00:19:06.788 "data_offset": 0, 00:19:06.788 "data_size": 65536 00:19:06.788 }, 00:19:06.788 { 00:19:06.788 "name": "BaseBdev2", 00:19:06.788 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:06.788 "is_configured": true, 00:19:06.788 "data_offset": 0, 00:19:06.788 "data_size": 65536 00:19:06.788 } 00:19:06.788 ] 00:19:06.788 }' 00:19:06.788 02:42:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:07.046 
02:42:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@660 -- # break 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.046 02:42:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:07.304 "name": "raid_bdev1", 00:19:07.304 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:07.304 "strip_size_kb": 0, 00:19:07.304 "state": "online", 00:19:07.304 "raid_level": "raid1", 00:19:07.304 "superblock": false, 00:19:07.304 "num_base_bdevs": 2, 00:19:07.304 "num_base_bdevs_discovered": 2, 00:19:07.304 "num_base_bdevs_operational": 2, 00:19:07.304 "base_bdevs_list": [ 00:19:07.304 { 00:19:07.304 "name": "spare", 00:19:07.304 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:07.304 "is_configured": true, 00:19:07.304 "data_offset": 0, 00:19:07.304 "data_size": 65536 00:19:07.304 }, 00:19:07.304 { 00:19:07.304 "name": "BaseBdev2", 00:19:07.304 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:07.304 "is_configured": true, 00:19:07.304 "data_offset": 0, 00:19:07.304 "data_size": 65536 00:19:07.304 } 00:19:07.304 ] 00:19:07.304 }' 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.304 02:42:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.563 02:42:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.563 "name": "raid_bdev1", 00:19:07.563 "uuid": "634968ae-bb52-4beb-875f-f3fbc1053591", 00:19:07.563 "strip_size_kb": 0, 00:19:07.563 "state": "online", 00:19:07.563 "raid_level": "raid1", 00:19:07.563 "superblock": false, 00:19:07.563 
"num_base_bdevs": 2, 00:19:07.563 "num_base_bdevs_discovered": 2, 00:19:07.563 "num_base_bdevs_operational": 2, 00:19:07.563 "base_bdevs_list": [ 00:19:07.563 { 00:19:07.563 "name": "spare", 00:19:07.563 "uuid": "55b70a9b-6985-52dc-8c99-bbd0924b6eaa", 00:19:07.563 "is_configured": true, 00:19:07.563 "data_offset": 0, 00:19:07.563 "data_size": 65536 00:19:07.563 }, 00:19:07.563 { 00:19:07.563 "name": "BaseBdev2", 00:19:07.563 "uuid": "e234e0b7-baa6-443c-8c17-3e53e00c0173", 00:19:07.563 "is_configured": true, 00:19:07.563 "data_offset": 0, 00:19:07.563 "data_size": 65536 00:19:07.563 } 00:19:07.563 ] 00:19:07.563 }' 00:19:07.563 02:42:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.563 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:19:08.499 02:42:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:08.499 [2024-07-11 02:42:33.572787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.499 [2024-07-11 02:42:33.572822] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.499 [2024-07-11 02:42:33.572963] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.499 [2024-07-11 02:42:33.573070] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.499 [2024-07-11 02:42:33.573083] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid_bdev1, state offline 00:19:08.499 02:42:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.756 02:42:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:08.756 02:42:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:08.756 02:42:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:08.756 02:42:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@12 -- # local i 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.756 02:42:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.757 02:42:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:09.014 /dev/nbd0 00:19:09.014 02:42:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.014 02:42:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.014 02:42:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:09.014 02:42:33 -- common/autotest_common.sh@857 -- # local i 00:19:09.014 02:42:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:09.014 02:42:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:09.014 02:42:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:09.014 02:42:33 -- common/autotest_common.sh@861 -- # break 00:19:09.014 02:42:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:09.014 02:42:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:09.014 02:42:33 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.014 1+0 records in 00:19:09.014 1+0 records out 00:19:09.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346783 s, 11.8 MB/s 00:19:09.014 02:42:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.014 02:42:34 -- common/autotest_common.sh@874 -- # size=4096 00:19:09.014 02:42:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.014 02:42:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:09.014 02:42:34 -- common/autotest_common.sh@877 -- # return 0 00:19:09.014 02:42:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.014 02:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:09.014 02:42:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:09.272 /dev/nbd1 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:09.272 02:42:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:09.272 02:42:34 -- common/autotest_common.sh@857 -- # local i 00:19:09.272 02:42:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:09.272 02:42:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:09.272 02:42:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:09.272 02:42:34 -- common/autotest_common.sh@861 -- # break 00:19:09.272 02:42:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:09.272 02:42:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:09.272 02:42:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.272 1+0 records in 00:19:09.272 1+0 records out 00:19:09.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436441 s, 9.4 MB/s 00:19:09.272 02:42:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.272 02:42:34 -- common/autotest_common.sh@874 -- # size=4096 00:19:09.272 02:42:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.272 02:42:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:09.272 02:42:34 -- common/autotest_common.sh@877 -- # return 0 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:09.272 02:42:34 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:09.272 02:42:34 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@51 -- # local i 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.272 02:42:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.530 
02:42:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.530 02:42:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@41 -- # break 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.789 02:42:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:10.048 02:42:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:10.048 02:42:35 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:10.048 02:42:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.048 02:42:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:10.048 02:42:35 -- bdev/nbd_common.sh@41 -- # break 00:19:10.048 02:42:35 -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.048 02:42:35 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:10.048 02:42:35 -- bdev/bdev_raid.sh@709 -- # killprocess 135119 00:19:10.048 02:42:35 -- common/autotest_common.sh@926 -- # '[' -z 135119 ']' 00:19:10.049 02:42:35 -- common/autotest_common.sh@930 -- # kill -0 135119 00:19:10.049 02:42:35 -- common/autotest_common.sh@931 -- # uname 00:19:10.049 02:42:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:10.049 02:42:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135119 00:19:10.049 killing process with pid 135119 00:19:10.049 Received shutdown signal, test time was about 60.000000 seconds 00:19:10.049 00:19:10.049 Latency(us) 00:19:10.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.049 =================================================================================================================== 00:19:10.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.049 02:42:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:10.049 02:42:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:10.049 02:42:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135119' 00:19:10.049 02:42:35 -- common/autotest_common.sh@945 -- # kill 135119 00:19:10.049 02:42:35 -- common/autotest_common.sh@950 -- # wait 135119 00:19:10.049 [2024-07-11 02:42:35.080828] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.049 [2024-07-11 02:42:35.106097] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.308 ************************************ 00:19:10.308 END TEST raid_rebuild_test 00:19:10.308 ************************************ 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:10.308 00:19:10.308 real 0m20.627s 00:19:10.308 user 0m29.424s 00:19:10.308 sys 0m3.610s 
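The verification at the end of raid_rebuild_test is worth spelling out: the surviving base bdev and the rebuilt spare are exported as /dev/nbd0 and /dev/nbd1 and byte-compared from offset 0 (this variant has no superblock, so the two images must match exactly). The waitfornbd helper it leans on polls /proc/partitions until the kernel registers the device, then proves it readable with a single direct 4 KiB read. A minimal sketch reconstructed from the xtrace above (the real helper lives in common/autotest_common.sh and retries the read as well; the /tmp/nbdtest scratch path here is illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # wait until the kernel has registered the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one O_DIRECT read must return data, or the export is not usable yet
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    cmp -i 0 /dev/nbd0 /dev/nbd1    # identical from byte 0: the rebuild copied everything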
00:19:10.308 02:42:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.308 02:42:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:19:10.308 02:42:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:10.308 02:42:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:10.308 02:42:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.308 ************************************ 00:19:10.308 START TEST raid_rebuild_test_sb 00:19:10.308 ************************************ 00:19:10.308 02:42:35 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=135698 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135698 /var/tmp/spdk-raid.sock 00:19:10.308 02:42:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:10.308 02:42:35 -- common/autotest_common.sh@819 -- # '[' -z 135698 ']' 00:19:10.308 02:42:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:10.308 02:42:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:10.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:10.308 02:42:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
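Everything in this test runs inside a single bdevperf process started in wait-for-RPC mode: -z keeps the app idle until the harness has assembled the bdev stack over /var/tmp/spdk-raid.sock, and only then does the 60-second randrw workload run against raid_bdev1. A condensed sketch of the startup handshake (the flag glosses are my reading of standard bdevperf usage, not something stated in the log: -t runtime in seconds, -w randrw with -M 50 a 50/50 read/write mix, -o 3M I/O size, -q 2 queue depth):

    build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # poll until the socket answers
    # from here every bdev in the test is created over that socket, e.g.:
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc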
00:19:10.308 02:42:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:10.308 02:42:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.567 [2024-07-11 02:42:35.438661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:10.567 [2024-07-11 02:42:35.438918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135698 ] 00:19:10.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:10.567 Zero copy mechanism will not be used. 00:19:10.567 [2024-07-11 02:42:35.575931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.567 [2024-07-11 02:42:35.633269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.826 [2024-07-11 02:42:35.684239] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.394 02:42:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:11.394 02:42:36 -- common/autotest_common.sh@852 -- # return 0 00:19:11.394 02:42:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:11.394 02:42:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:11.394 02:42:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:11.653 BaseBdev1_malloc 00:19:11.653 02:42:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.911 [2024-07-11 02:42:36.838689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.911 [2024-07-11 02:42:36.838807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.911 [2024-07-11 02:42:36.838841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:11.911 [2024-07-11 02:42:36.838879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.911 [2024-07-11 02:42:36.841078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.911 [2024-07-11 02:42:36.841125] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.911 BaseBdev1 00:19:11.911 02:42:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:11.911 02:42:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:11.911 02:42:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:12.170 BaseBdev2_malloc 00:19:12.170 02:42:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:12.170 [2024-07-11 02:42:37.224534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:12.170 [2024-07-11 02:42:37.224646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.170 [2024-07-11 02:42:37.224684] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:12.170 [2024-07-11 02:42:37.224720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.170 [2024-07-11 02:42:37.226806] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:12.170 [2024-07-11 02:42:37.226865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:12.170 BaseBdev2 00:19:12.170 02:42:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:12.428 spare_malloc 00:19:12.428 02:42:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:12.687 spare_delay 00:19:12.687 02:42:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:12.946 [2024-07-11 02:42:37.836927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.946 [2024-07-11 02:42:37.837041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.946 [2024-07-11 02:42:37.837086] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:12.946 [2024-07-11 02:42:37.837126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.946 [2024-07-11 02:42:37.839391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.946 [2024-07-11 02:42:37.839446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.946 spare 00:19:12.946 02:42:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:13.205 [2024-07-11 02:42:38.085060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.205 [2024-07-11 02:42:38.087251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.205 [2024-07-11 02:42:38.087500] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:13.205 [2024-07-11 02:42:38.087517] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:13.205 [2024-07-11 02:42:38.087684] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:13.205 [2024-07-11 02:42:38.088114] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:13.205 [2024-07-11 02:42:38.088151] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:13.205 [2024-07-11 02:42:38.088375] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.205 02:42:38 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.205 02:42:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.464 02:42:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.464 "name": "raid_bdev1", 00:19:13.464 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:13.464 "strip_size_kb": 0, 00:19:13.464 "state": "online", 00:19:13.464 "raid_level": "raid1", 00:19:13.464 "superblock": true, 00:19:13.464 "num_base_bdevs": 2, 00:19:13.464 "num_base_bdevs_discovered": 2, 00:19:13.464 "num_base_bdevs_operational": 2, 00:19:13.464 "base_bdevs_list": [ 00:19:13.464 { 00:19:13.464 "name": "BaseBdev1", 00:19:13.464 "uuid": "4c0c280e-9b3b-5bfc-8316-68f415170dcc", 00:19:13.464 "is_configured": true, 00:19:13.464 "data_offset": 2048, 00:19:13.464 "data_size": 63488 00:19:13.464 }, 00:19:13.464 { 00:19:13.464 "name": "BaseBdev2", 00:19:13.464 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:13.464 "is_configured": true, 00:19:13.464 "data_offset": 2048, 00:19:13.464 "data_size": 63488 00:19:13.464 } 00:19:13.464 ] 00:19:13.464 }' 00:19:13.464 02:42:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.464 02:42:38 -- common/autotest_common.sh@10 -- # set +x 00:19:14.031 02:42:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:14.031 02:42:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:14.290 [2024-07-11 02:42:39.265454] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.290 02:42:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:14.290 02:42:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.290 02:42:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:14.547 02:42:39 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:14.547 02:42:39 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:14.547 02:42:39 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:14.547 02:42:39 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@12 -- # local i 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:14.547 02:42:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:14.805 [2024-07-11 02:42:39.701301] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:19:14.805 /dev/nbd0 00:19:14.805 02:42:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:14.805 02:42:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:14.805 02:42:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:14.805 02:42:39 -- common/autotest_common.sh@857 -- # local i 00:19:14.805 02:42:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:14.805 02:42:39 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:14.805 02:42:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:14.805 02:42:39 -- common/autotest_common.sh@861 -- # break 00:19:14.805 02:42:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:14.805 02:42:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:14.805 02:42:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.805 1+0 records in 00:19:14.805 1+0 records out 00:19:14.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299036 s, 13.7 MB/s 00:19:14.805 02:42:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.805 02:42:39 -- common/autotest_common.sh@874 -- # size=4096 00:19:14.805 02:42:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.805 02:42:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:14.805 02:42:39 -- common/autotest_common.sh@877 -- # return 0 00:19:14.805 02:42:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.805 02:42:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:14.805 02:42:39 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:14.805 02:42:39 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:14.805 02:42:39 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:20.068 63488+0 records in 00:19:20.068 63488+0 records out 00:19:20.068 32505856 bytes (33 MB, 31 MiB) copied, 4.40176 s, 7.4 MB/s 00:19:20.068 02:42:44 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@51 -- # local i 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:20.068 [2024-07-11 02:42:44.368660] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.068 02:42:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:20.069 02:42:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.069 02:42:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.069 02:42:44 -- bdev/nbd_common.sh@41 -- # break 00:19:20.069 02:42:44 -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:20.069 [2024-07-11 02:42:44.716210] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.069 02:42:44 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.069 "name": "raid_bdev1", 00:19:20.069 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:20.069 "strip_size_kb": 0, 00:19:20.069 "state": "online", 00:19:20.069 "raid_level": "raid1", 00:19:20.069 "superblock": true, 00:19:20.069 "num_base_bdevs": 2, 00:19:20.069 "num_base_bdevs_discovered": 1, 00:19:20.069 "num_base_bdevs_operational": 1, 00:19:20.069 "base_bdevs_list": [ 00:19:20.069 { 00:19:20.069 "name": null, 00:19:20.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.069 "is_configured": false, 00:19:20.069 "data_offset": 2048, 00:19:20.069 "data_size": 63488 00:19:20.069 }, 00:19:20.069 { 00:19:20.069 "name": "BaseBdev2", 00:19:20.069 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:20.069 "is_configured": true, 00:19:20.069 "data_offset": 2048, 00:19:20.069 "data_size": 63488 00:19:20.069 } 00:19:20.069 ] 00:19:20.069 }' 00:19:20.069 02:42:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.069 02:42:44 -- common/autotest_common.sh@10 -- # set +x 00:19:20.634 02:42:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:20.891 [2024-07-11 02:42:45.856476] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:20.891 [2024-07-11 02:42:45.856540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.891 [2024-07-11 02:42:45.861580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca17c0 00:19:20.891 [2024-07-11 02:42:45.863630] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.891 02:42:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.822 02:42:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.080 02:42:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:22.080 "name": "raid_bdev1", 00:19:22.080 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:22.080 
"strip_size_kb": 0, 00:19:22.080 "state": "online", 00:19:22.080 "raid_level": "raid1", 00:19:22.080 "superblock": true, 00:19:22.080 "num_base_bdevs": 2, 00:19:22.080 "num_base_bdevs_discovered": 2, 00:19:22.080 "num_base_bdevs_operational": 2, 00:19:22.080 "process": { 00:19:22.080 "type": "rebuild", 00:19:22.080 "target": "spare", 00:19:22.080 "progress": { 00:19:22.080 "blocks": 22528, 00:19:22.080 "percent": 35 00:19:22.080 } 00:19:22.080 }, 00:19:22.080 "base_bdevs_list": [ 00:19:22.080 { 00:19:22.080 "name": "spare", 00:19:22.080 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:22.080 "is_configured": true, 00:19:22.080 "data_offset": 2048, 00:19:22.080 "data_size": 63488 00:19:22.080 }, 00:19:22.080 { 00:19:22.080 "name": "BaseBdev2", 00:19:22.080 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:22.080 "is_configured": true, 00:19:22.080 "data_offset": 2048, 00:19:22.080 "data_size": 63488 00:19:22.080 } 00:19:22.080 ] 00:19:22.080 }' 00:19:22.080 02:42:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:22.080 02:42:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.080 02:42:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:22.338 02:42:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.338 02:42:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:22.338 [2024-07-11 02:42:47.418361] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.596 [2024-07-11 02:42:47.472324] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.596 [2024-07-11 02:42:47.472421] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.596 02:42:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.855 02:42:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.855 "name": "raid_bdev1", 00:19:22.855 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:22.855 "strip_size_kb": 0, 00:19:22.855 "state": "online", 00:19:22.855 "raid_level": "raid1", 00:19:22.855 "superblock": true, 00:19:22.855 "num_base_bdevs": 2, 00:19:22.855 "num_base_bdevs_discovered": 1, 00:19:22.855 "num_base_bdevs_operational": 1, 00:19:22.855 "base_bdevs_list": [ 00:19:22.855 { 00:19:22.855 "name": null, 00:19:22.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.855 "is_configured": false, 00:19:22.855 "data_offset": 2048, 00:19:22.855 "data_size": 63488 00:19:22.855 }, 
00:19:22.855 { 00:19:22.855 "name": "BaseBdev2", 00:19:22.855 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:22.855 "is_configured": true, 00:19:22.855 "data_offset": 2048, 00:19:22.855 "data_size": 63488 00:19:22.855 } 00:19:22.855 ] 00:19:22.855 }' 00:19:22.855 02:42:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.855 02:42:47 -- common/autotest_common.sh@10 -- # set +x 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.424 02:42:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:23.682 "name": "raid_bdev1", 00:19:23.682 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:23.682 "strip_size_kb": 0, 00:19:23.682 "state": "online", 00:19:23.682 "raid_level": "raid1", 00:19:23.682 "superblock": true, 00:19:23.682 "num_base_bdevs": 2, 00:19:23.682 "num_base_bdevs_discovered": 1, 00:19:23.682 "num_base_bdevs_operational": 1, 00:19:23.682 "base_bdevs_list": [ 00:19:23.682 { 00:19:23.682 "name": null, 00:19:23.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.682 "is_configured": false, 00:19:23.682 "data_offset": 2048, 00:19:23.682 "data_size": 63488 00:19:23.682 }, 00:19:23.682 { 00:19:23.682 "name": "BaseBdev2", 00:19:23.682 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:23.682 "is_configured": true, 00:19:23.682 "data_offset": 2048, 00:19:23.682 "data_size": 63488 00:19:23.682 } 00:19:23.682 ] 00:19:23.682 }' 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:23.682 02:42:48 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.941 [2024-07-11 02:42:48.973490] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:23.941 [2024-07-11 02:42:48.973550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.941 [2024-07-11 02:42:48.978539] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca1960 00:19:23.941 [2024-07-11 02:42:48.980586] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.941 02:42:48 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:25.316 02:42:49 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.316 02:42:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:25.316 02:42:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:25.316 02:42:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:25.317 02:42:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:25.317 02:42:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.317 02:42:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:25.317 "name": "raid_bdev1", 00:19:25.317 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:25.317 "strip_size_kb": 0, 00:19:25.317 "state": "online", 00:19:25.317 "raid_level": "raid1", 00:19:25.317 "superblock": true, 00:19:25.317 "num_base_bdevs": 2, 00:19:25.317 "num_base_bdevs_discovered": 2, 00:19:25.317 "num_base_bdevs_operational": 2, 00:19:25.317 "process": { 00:19:25.317 "type": "rebuild", 00:19:25.317 "target": "spare", 00:19:25.317 "progress": { 00:19:25.317 "blocks": 24576, 00:19:25.317 "percent": 38 00:19:25.317 } 00:19:25.317 }, 00:19:25.317 "base_bdevs_list": [ 00:19:25.317 { 00:19:25.317 "name": "spare", 00:19:25.317 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:25.317 "is_configured": true, 00:19:25.317 "data_offset": 2048, 00:19:25.317 "data_size": 63488 00:19:25.317 }, 00:19:25.317 { 00:19:25.317 "name": "BaseBdev2", 00:19:25.317 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:25.317 "is_configured": true, 00:19:25.317 "data_offset": 2048, 00:19:25.317 "data_size": 63488 00:19:25.317 } 00:19:25.317 ] 00:19:25.317 }' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:25.317 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@657 -- # local timeout=384 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.317 02:42:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.577 02:42:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:25.577 "name": "raid_bdev1", 00:19:25.577 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:25.577 "strip_size_kb": 0, 00:19:25.577 "state": "online", 00:19:25.577 "raid_level": "raid1", 00:19:25.577 "superblock": true, 00:19:25.577 "num_base_bdevs": 2, 00:19:25.577 "num_base_bdevs_discovered": 2, 00:19:25.577 "num_base_bdevs_operational": 2, 00:19:25.577 "process": { 00:19:25.577 "type": "rebuild", 00:19:25.577 "target": "spare", 00:19:25.577 "progress": { 00:19:25.577 "blocks": 30720, 00:19:25.577 "percent": 48 00:19:25.577 } 00:19:25.577 }, 00:19:25.577 
"base_bdevs_list": [ 00:19:25.577 { 00:19:25.577 "name": "spare", 00:19:25.577 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:25.577 "is_configured": true, 00:19:25.577 "data_offset": 2048, 00:19:25.577 "data_size": 63488 00:19:25.577 }, 00:19:25.577 { 00:19:25.577 "name": "BaseBdev2", 00:19:25.577 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:25.577 "is_configured": true, 00:19:25.577 "data_offset": 2048, 00:19:25.577 "data_size": 63488 00:19:25.577 } 00:19:25.577 ] 00:19:25.577 }' 00:19:25.577 02:42:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:25.577 02:42:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.577 02:42:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:25.836 02:42:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.836 02:42:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.770 02:42:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.029 02:42:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:27.029 "name": "raid_bdev1", 00:19:27.029 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:27.029 "strip_size_kb": 0, 00:19:27.029 "state": "online", 00:19:27.029 "raid_level": "raid1", 00:19:27.029 "superblock": true, 00:19:27.029 "num_base_bdevs": 2, 00:19:27.029 "num_base_bdevs_discovered": 2, 00:19:27.029 "num_base_bdevs_operational": 2, 00:19:27.029 "process": { 00:19:27.029 "type": "rebuild", 00:19:27.029 "target": "spare", 00:19:27.029 "progress": { 00:19:27.029 "blocks": 59392, 00:19:27.029 "percent": 93 00:19:27.029 } 00:19:27.029 }, 00:19:27.029 "base_bdevs_list": [ 00:19:27.029 { 00:19:27.029 "name": "spare", 00:19:27.029 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 2048, 00:19:27.029 "data_size": 63488 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev2", 00:19:27.029 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 2048, 00:19:27.029 "data_size": 63488 00:19:27.029 } 00:19:27.029 ] 00:19:27.029 }' 00:19:27.029 02:42:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:27.029 02:42:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.029 02:42:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:27.029 02:42:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.029 02:42:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:27.029 [2024-07-11 02:42:52.096508] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:27.029 [2024-07-11 02:42:52.096575] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:27.029 [2024-07-11 02:42:52.096708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:27.964 02:42:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:28.222 02:42:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.222 02:42:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.222 02:42:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:28.222 "name": "raid_bdev1", 00:19:28.222 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:28.222 "strip_size_kb": 0, 00:19:28.222 "state": "online", 00:19:28.222 "raid_level": "raid1", 00:19:28.222 "superblock": true, 00:19:28.222 "num_base_bdevs": 2, 00:19:28.222 "num_base_bdevs_discovered": 2, 00:19:28.222 "num_base_bdevs_operational": 2, 00:19:28.222 "base_bdevs_list": [ 00:19:28.222 { 00:19:28.222 "name": "spare", 00:19:28.222 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:28.222 "is_configured": true, 00:19:28.222 "data_offset": 2048, 00:19:28.222 "data_size": 63488 00:19:28.222 }, 00:19:28.222 { 00:19:28.222 "name": "BaseBdev2", 00:19:28.222 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:28.222 "is_configured": true, 00:19:28.222 "data_offset": 2048, 00:19:28.222 "data_size": 63488 00:19:28.222 } 00:19:28.222 ] 00:19:28.222 }' 00:19:28.222 02:42:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@660 -- # break 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.480 02:42:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:28.737 "name": "raid_bdev1", 00:19:28.737 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:28.737 "strip_size_kb": 0, 00:19:28.737 "state": "online", 00:19:28.737 "raid_level": "raid1", 00:19:28.737 "superblock": true, 00:19:28.737 "num_base_bdevs": 2, 00:19:28.737 "num_base_bdevs_discovered": 2, 00:19:28.737 "num_base_bdevs_operational": 2, 00:19:28.737 "base_bdevs_list": [ 00:19:28.737 { 00:19:28.737 "name": "spare", 00:19:28.737 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:28.737 "is_configured": true, 00:19:28.737 "data_offset": 2048, 00:19:28.737 "data_size": 63488 00:19:28.737 }, 00:19:28.737 { 00:19:28.737 "name": "BaseBdev2", 00:19:28.737 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:28.737 "is_configured": true, 00:19:28.737 
"data_offset": 2048, 00:19:28.737 "data_size": 63488 00:19:28.737 } 00:19:28.737 ] 00:19:28.737 }' 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.737 02:42:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.995 02:42:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.995 "name": "raid_bdev1", 00:19:28.995 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:28.995 "strip_size_kb": 0, 00:19:28.995 "state": "online", 00:19:28.995 "raid_level": "raid1", 00:19:28.995 "superblock": true, 00:19:28.995 "num_base_bdevs": 2, 00:19:28.995 "num_base_bdevs_discovered": 2, 00:19:28.995 "num_base_bdevs_operational": 2, 00:19:28.995 "base_bdevs_list": [ 00:19:28.995 { 00:19:28.995 "name": "spare", 00:19:28.995 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:28.995 "is_configured": true, 00:19:28.995 "data_offset": 2048, 00:19:28.995 "data_size": 63488 00:19:28.995 }, 00:19:28.995 { 00:19:28.995 "name": "BaseBdev2", 00:19:28.995 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:28.995 "is_configured": true, 00:19:28.995 "data_offset": 2048, 00:19:28.995 "data_size": 63488 00:19:28.995 } 00:19:28.995 ] 00:19:28.995 }' 00:19:28.995 02:42:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.995 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:19:29.561 02:42:54 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:29.820 [2024-07-11 02:42:54.842462] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.820 [2024-07-11 02:42:54.842499] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.820 [2024-07-11 02:42:54.842623] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.820 [2024-07-11 02:42:54.842710] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.820 [2024-07-11 02:42:54.842756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:19:29.820 02:42:54 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.820 02:42:54 -- bdev/bdev_raid.sh@671 -- # jq 
length 00:19:30.078 02:42:55 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:30.078 02:42:55 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:30.078 02:42:55 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@12 -- # local i 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:30.078 02:42:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:30.337 /dev/nbd0 00:19:30.337 02:42:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.337 02:42:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.337 02:42:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:30.337 02:42:55 -- common/autotest_common.sh@857 -- # local i 00:19:30.337 02:42:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:30.337 02:42:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:30.337 02:42:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:30.337 02:42:55 -- common/autotest_common.sh@861 -- # break 00:19:30.337 02:42:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:30.337 02:42:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:30.337 02:42:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.337 1+0 records in 00:19:30.337 1+0 records out 00:19:30.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506535 s, 8.1 MB/s 00:19:30.337 02:42:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.337 02:42:55 -- common/autotest_common.sh@874 -- # size=4096 00:19:30.337 02:42:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.337 02:42:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:30.337 02:42:55 -- common/autotest_common.sh@877 -- # return 0 00:19:30.337 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.337 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:30.337 02:42:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:30.596 /dev/nbd1 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:30.596 02:42:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:30.596 02:42:55 -- common/autotest_common.sh@857 -- # local i 00:19:30.596 02:42:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:30.596 02:42:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:30.596 02:42:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:30.596 02:42:55 -- common/autotest_common.sh@861 -- # break 00:19:30.596 02:42:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:30.596 02:42:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:30.596 02:42:55 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.596 1+0 records in 00:19:30.596 1+0 records out 00:19:30.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829111 s, 4.9 MB/s 00:19:30.596 02:42:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.596 02:42:55 -- common/autotest_common.sh@874 -- # size=4096 00:19:30.596 02:42:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.596 02:42:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:30.596 02:42:55 -- common/autotest_common.sh@877 -- # return 0 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:30.596 02:42:55 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:30.596 02:42:55 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@51 -- # local i 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.596 02:42:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:30.854 02:42:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@41 -- # break 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.113 02:42:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.397 02:42:56 -- bdev/nbd_common.sh@41 -- # break 00:19:31.398 02:42:56 -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.398 02:42:56 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:31.398 02:42:56 -- bdev/bdev_raid.sh@694 -- # for 
bdev in "${base_bdevs[@]}" 00:19:31.398 02:42:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:31.398 02:42:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:31.656 02:42:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.914 [2024-07-11 02:42:56.854311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.914 [2024-07-11 02:42:56.854429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.914 [2024-07-11 02:42:56.854467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:31.914 [2024-07-11 02:42:56.854503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.914 [2024-07-11 02:42:56.856904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.914 [2024-07-11 02:42:56.856985] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.914 [2024-07-11 02:42:56.857086] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:31.914 [2024-07-11 02:42:56.857198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.914 BaseBdev1 00:19:31.914 02:42:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:31.914 02:42:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:31.914 02:42:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:32.172 02:42:57 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:32.172 [2024-07-11 02:42:57.258400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:32.172 [2024-07-11 02:42:57.258485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.172 [2024-07-11 02:42:57.258533] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:32.172 [2024-07-11 02:42:57.258559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.172 [2024-07-11 02:42:57.259005] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.172 [2024-07-11 02:42:57.259076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:32.172 [2024-07-11 02:42:57.259186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:32.172 [2024-07-11 02:42:57.259201] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:32.172 [2024-07-11 02:42:57.259209] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.172 [2024-07-11 02:42:57.259233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:19:32.172 [2024-07-11 02:42:57.259300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.172 BaseBdev2 00:19:32.431 02:42:57 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete spare 00:19:32.431 02:42:57 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:32.689 [2024-07-11 02:42:57.658460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.690 [2024-07-11 02:42:57.658558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.690 [2024-07-11 02:42:57.658598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:32.690 [2024-07-11 02:42:57.658619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.690 [2024-07-11 02:42:57.659049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.690 [2024-07-11 02:42:57.659113] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.690 [2024-07-11 02:42:57.659240] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:32.690 [2024-07-11 02:42:57.659308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.690 spare 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.690 02:42:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.690 [2024-07-11 02:42:57.759416] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:19:32.690 [2024-07-11 02:42:57.759445] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:32.690 [2024-07-11 02:42:57.759630] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc24a0 00:19:32.690 [2024-07-11 02:42:57.760023] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:19:32.690 [2024-07-11 02:42:57.760048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:19:32.690 [2024-07-11 02:42:57.760172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.948 02:42:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.948 "name": "raid_bdev1", 00:19:32.948 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:32.948 "strip_size_kb": 0, 00:19:32.948 "state": "online", 00:19:32.948 "raid_level": "raid1", 00:19:32.948 "superblock": true, 00:19:32.948 "num_base_bdevs": 2, 00:19:32.948 "num_base_bdevs_discovered": 2, 00:19:32.948 "num_base_bdevs_operational": 2, 00:19:32.948 "base_bdevs_list": [ 00:19:32.948 { 00:19:32.948 "name": "spare", 00:19:32.948 "uuid": 
"194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:32.948 "is_configured": true, 00:19:32.948 "data_offset": 2048, 00:19:32.948 "data_size": 63488 00:19:32.948 }, 00:19:32.948 { 00:19:32.948 "name": "BaseBdev2", 00:19:32.948 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:32.948 "is_configured": true, 00:19:32.948 "data_offset": 2048, 00:19:32.948 "data_size": 63488 00:19:32.948 } 00:19:32.948 ] 00:19:32.948 }' 00:19:32.948 02:42:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.948 02:42:57 -- common/autotest_common.sh@10 -- # set +x 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.514 02:42:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:33.773 "name": "raid_bdev1", 00:19:33.773 "uuid": "d39953bf-8e1a-42ba-8d4f-7197094e2fc9", 00:19:33.773 "strip_size_kb": 0, 00:19:33.773 "state": "online", 00:19:33.773 "raid_level": "raid1", 00:19:33.773 "superblock": true, 00:19:33.773 "num_base_bdevs": 2, 00:19:33.773 "num_base_bdevs_discovered": 2, 00:19:33.773 "num_base_bdevs_operational": 2, 00:19:33.773 "base_bdevs_list": [ 00:19:33.773 { 00:19:33.773 "name": "spare", 00:19:33.773 "uuid": "194f9344-cb28-5503-9407-7a800fd5f5c9", 00:19:33.773 "is_configured": true, 00:19:33.773 "data_offset": 2048, 00:19:33.773 "data_size": 63488 00:19:33.773 }, 00:19:33.773 { 00:19:33.773 "name": "BaseBdev2", 00:19:33.773 "uuid": "9a9b110b-3916-59d4-8517-7b83d2892cdc", 00:19:33.773 "is_configured": true, 00:19:33.773 "data_offset": 2048, 00:19:33.773 "data_size": 63488 00:19:33.773 } 00:19:33.773 ] 00:19:33.773 }' 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.773 02:42:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:34.031 02:42:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.031 02:42:58 -- bdev/bdev_raid.sh@709 -- # killprocess 135698 00:19:34.031 02:42:58 -- common/autotest_common.sh@926 -- # '[' -z 135698 ']' 00:19:34.031 02:42:58 -- common/autotest_common.sh@930 -- # kill -0 135698 00:19:34.031 02:42:58 -- common/autotest_common.sh@931 -- # uname 00:19:34.031 02:42:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.031 02:42:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135698 00:19:34.031 killing process with pid 135698 00:19:34.031 02:42:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.031 02:42:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.031 02:42:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135698' 
00:19:34.031 02:42:58 -- common/autotest_common.sh@945 -- # kill 135698 00:19:34.031 02:42:58 -- common/autotest_common.sh@950 -- # wait 135698 00:19:34.031 Received shutdown signal, test time was about 60.000000 seconds 00:19:34.031 00:19:34.031 Latency(us) 00:19:34.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.031 =================================================================================================================== 00:19:34.031 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.031 [2024-07-11 02:42:58.962173] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.031 [2024-07-11 02:42:58.962280] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.031 [2024-07-11 02:42:58.962343] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.031 [2024-07-11 02:42:58.962353] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:19:34.031 [2024-07-11 02:42:58.989142] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.290 ************************************ 00:19:34.290 END TEST raid_rebuild_test_sb 00:19:34.290 ************************************ 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:34.290 00:19:34.290 real 0m23.821s 00:19:34.290 user 0m35.379s 00:19:34.290 sys 0m3.740s 00:19:34.290 02:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.290 02:42:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:19:34.290 02:42:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:34.290 02:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.290 02:42:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 ************************************ 00:19:34.290 START TEST raid_rebuild_test_io 00:19:34.290 ************************************ 00:19:34.290 02:42:59 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 
00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=136349 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136349 /var/tmp/spdk-raid.sock 00:19:34.290 02:42:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.290 02:42:59 -- common/autotest_common.sh@819 -- # '[' -z 136349 ']' 00:19:34.290 02:42:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.290 02:42:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.290 02:42:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:34.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.290 02:42:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.290 02:42:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 [2024-07-11 02:42:59.325396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:34.290 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.290 Zero copy mechanism will not be used. 00:19:34.290 [2024-07-11 02:42:59.325656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136349 ] 00:19:34.550 [2024-07-11 02:42:59.471557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.550 [2024-07-11 02:42:59.534718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.550 [2024-07-11 02:42:59.587436] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.485 02:43:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:35.485 02:43:00 -- common/autotest_common.sh@852 -- # return 0 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.485 BaseBdev1 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:35.485 02:43:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:35.744 BaseBdev2 00:19:35.744 02:43:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:36.003 spare_malloc 00:19:36.003 02:43:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:36.261 spare_delay 00:19:36.262 02:43:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:19:36.262 [2024-07-11 02:43:01.280626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:36.262 [2024-07-11 02:43:01.280727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.262 [2024-07-11 02:43:01.280757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:36.262 [2024-07-11 02:43:01.280791] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.262 [2024-07-11 02:43:01.283413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.262 [2024-07-11 02:43:01.283465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:36.262 spare 00:19:36.262 02:43:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:36.520 [2024-07-11 02:43:01.464699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.520 [2024-07-11 02:43:01.466483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.520 [2024-07-11 02:43:01.466561] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:36.520 [2024-07-11 02:43:01.466573] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:36.520 [2024-07-11 02:43:01.466708] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:19:36.520 [2024-07-11 02:43:01.467076] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:36.520 [2024-07-11 02:43:01.467099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007580 00:19:36.520 [2024-07-11 02:43:01.467248] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.520 02:43:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.779 02:43:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.779 "name": "raid_bdev1", 00:19:36.779 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:36.779 "strip_size_kb": 0, 00:19:36.779 "state": "online", 00:19:36.779 "raid_level": "raid1", 00:19:36.779 "superblock": false, 00:19:36.779 "num_base_bdevs": 2, 00:19:36.779 "num_base_bdevs_discovered": 2, 00:19:36.779 "num_base_bdevs_operational": 2, 00:19:36.779 "base_bdevs_list": [ 00:19:36.779 { 00:19:36.779 "name": "BaseBdev1", 
00:19:36.779 "uuid": "28ce20d4-f109-405c-a0ad-f75f0264c711", 00:19:36.779 "is_configured": true, 00:19:36.779 "data_offset": 0, 00:19:36.779 "data_size": 65536 00:19:36.779 }, 00:19:36.779 { 00:19:36.779 "name": "BaseBdev2", 00:19:36.779 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:36.779 "is_configured": true, 00:19:36.779 "data_offset": 0, 00:19:36.779 "data_size": 65536 00:19:36.779 } 00:19:36.779 ] 00:19:36.779 }' 00:19:36.779 02:43:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.779 02:43:01 -- common/autotest_common.sh@10 -- # set +x 00:19:37.347 02:43:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.347 02:43:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:37.605 [2024-07-11 02:43:02.533070] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.605 02:43:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:37.605 02:43:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.605 02:43:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:37.864 02:43:02 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:37.864 02:43:02 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:37.864 02:43:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:37.865 02:43:02 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:37.865 [2024-07-11 02:43:02.847414] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:37.865 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:37.865 Zero copy mechanism will not be used. 00:19:37.865 Running I/O for 60 seconds... 
00:19:37.865 [2024-07-11 02:43:02.937612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.865 [2024-07-11 02:43:02.944367] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002120 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.124 02:43:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.124 02:43:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.124 "name": "raid_bdev1", 00:19:38.124 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:38.124 "strip_size_kb": 0, 00:19:38.124 "state": "online", 00:19:38.124 "raid_level": "raid1", 00:19:38.124 "superblock": false, 00:19:38.124 "num_base_bdevs": 2, 00:19:38.124 "num_base_bdevs_discovered": 1, 00:19:38.124 "num_base_bdevs_operational": 1, 00:19:38.124 "base_bdevs_list": [ 00:19:38.124 { 00:19:38.124 "name": null, 00:19:38.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.124 "is_configured": false, 00:19:38.124 "data_offset": 0, 00:19:38.124 "data_size": 65536 00:19:38.124 }, 00:19:38.124 { 00:19:38.124 "name": "BaseBdev2", 00:19:38.124 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:38.124 "is_configured": true, 00:19:38.124 "data_offset": 0, 00:19:38.124 "data_size": 65536 00:19:38.124 } 00:19:38.124 ] 00:19:38.124 }' 00:19:38.124 02:43:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.124 02:43:03 -- common/autotest_common.sh@10 -- # set +x 00:19:39.078 02:43:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.078 [2024-07-11 02:43:04.117137] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:39.078 [2024-07-11 02:43:04.117235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.078 02:43:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:39.078 [2024-07-11 02:43:04.169387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:39.335 [2024-07-11 02:43:04.172729] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:39.335 [2024-07-11 02:43:04.294124] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.335 [2024-07-11 02:43:04.294564] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.593 [2024-07-11 02:43:04.531702] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:19:39.593 [2024-07-11 02:43:04.531836] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.852 [2024-07-11 02:43:04.865611] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:40.110 [2024-07-11 02:43:05.093709] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.110 02:43:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.369 [2024-07-11 02:43:05.336497] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:40.369 [2024-07-11 02:43:05.337947] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:40.369 02:43:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:40.369 "name": "raid_bdev1", 00:19:40.369 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:40.369 "strip_size_kb": 0, 00:19:40.369 "state": "online", 00:19:40.369 "raid_level": "raid1", 00:19:40.369 "superblock": false, 00:19:40.369 "num_base_bdevs": 2, 00:19:40.369 "num_base_bdevs_discovered": 2, 00:19:40.369 "num_base_bdevs_operational": 2, 00:19:40.369 "process": { 00:19:40.369 "type": "rebuild", 00:19:40.369 "target": "spare", 00:19:40.369 "progress": { 00:19:40.369 "blocks": 14336, 00:19:40.369 "percent": 21 00:19:40.369 } 00:19:40.369 }, 00:19:40.369 "base_bdevs_list": [ 00:19:40.369 { 00:19:40.369 "name": "spare", 00:19:40.369 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:40.369 "is_configured": true, 00:19:40.369 "data_offset": 0, 00:19:40.369 "data_size": 65536 00:19:40.369 }, 00:19:40.369 { 00:19:40.369 "name": "BaseBdev2", 00:19:40.369 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:40.369 "is_configured": true, 00:19:40.369 "data_offset": 0, 00:19:40.369 "data_size": 65536 00:19:40.369 } 00:19:40.369 ] 00:19:40.369 }' 00:19:40.369 02:43:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:40.369 02:43:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.369 02:43:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:40.627 02:43:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.627 02:43:05 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:40.627 [2024-07-11 02:43:05.545462] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:40.627 [2024-07-11 02:43:05.545847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:40.886 [2024-07-11 02:43:05.735569] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.886 [2024-07-11 
02:43:05.793455] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:40.886 [2024-07-11 02:43:05.801153] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.886 [2024-07-11 02:43:05.831019] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002120 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.886 02:43:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.144 02:43:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.144 "name": "raid_bdev1", 00:19:41.144 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:41.144 "strip_size_kb": 0, 00:19:41.144 "state": "online", 00:19:41.144 "raid_level": "raid1", 00:19:41.144 "superblock": false, 00:19:41.144 "num_base_bdevs": 2, 00:19:41.144 "num_base_bdevs_discovered": 1, 00:19:41.144 "num_base_bdevs_operational": 1, 00:19:41.144 "base_bdevs_list": [ 00:19:41.144 { 00:19:41.144 "name": null, 00:19:41.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.144 "is_configured": false, 00:19:41.144 "data_offset": 0, 00:19:41.144 "data_size": 65536 00:19:41.144 }, 00:19:41.144 { 00:19:41.144 "name": "BaseBdev2", 00:19:41.144 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:41.144 "is_configured": true, 00:19:41.144 "data_offset": 0, 00:19:41.144 "data_size": 65536 00:19:41.144 } 00:19:41.144 ] 00:19:41.144 }' 00:19:41.144 02:43:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.144 02:43:06 -- common/autotest_common.sh@10 -- # set +x 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.711 02:43:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.970 02:43:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:41.970 "name": "raid_bdev1", 00:19:41.970 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:41.970 "strip_size_kb": 0, 00:19:41.970 "state": "online", 00:19:41.970 "raid_level": "raid1", 00:19:41.970 "superblock": false, 00:19:41.970 "num_base_bdevs": 2, 00:19:41.970 "num_base_bdevs_discovered": 1, 00:19:41.970 "num_base_bdevs_operational": 1, 
00:19:41.970 "base_bdevs_list": [ 00:19:41.970 { 00:19:41.970 "name": null, 00:19:41.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.970 "is_configured": false, 00:19:41.970 "data_offset": 0, 00:19:41.970 "data_size": 65536 00:19:41.970 }, 00:19:41.970 { 00:19:41.970 "name": "BaseBdev2", 00:19:41.970 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:41.970 "is_configured": true, 00:19:41.970 "data_offset": 0, 00:19:41.970 "data_size": 65536 00:19:41.970 } 00:19:41.970 ] 00:19:41.970 }' 00:19:41.970 02:43:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:41.970 02:43:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:41.970 02:43:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:41.970 02:43:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:41.970 02:43:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:42.229 [2024-07-11 02:43:07.236596] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:42.229 [2024-07-11 02:43:07.236647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.229 02:43:07 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:42.229 [2024-07-11 02:43:07.265429] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:19:42.229 [2024-07-11 02:43:07.267308] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.488 [2024-07-11 02:43:07.381987] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:42.488 [2024-07-11 02:43:07.382479] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:42.747 [2024-07-11 02:43:07.596478] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:42.747 [2024-07-11 02:43:07.596736] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:43.007 [2024-07-11 02:43:07.920826] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:43.007 [2024-07-11 02:43:07.921319] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:43.266 [2024-07-11 02:43:08.129922] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.266 02:43:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.525 02:43:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:43.525 "name": "raid_bdev1", 00:19:43.525 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:43.525 "strip_size_kb": 0, 00:19:43.525 "state": 
"online", 00:19:43.525 "raid_level": "raid1", 00:19:43.525 "superblock": false, 00:19:43.525 "num_base_bdevs": 2, 00:19:43.525 "num_base_bdevs_discovered": 2, 00:19:43.525 "num_base_bdevs_operational": 2, 00:19:43.525 "process": { 00:19:43.525 "type": "rebuild", 00:19:43.525 "target": "spare", 00:19:43.525 "progress": { 00:19:43.525 "blocks": 14336, 00:19:43.525 "percent": 21 00:19:43.525 } 00:19:43.525 }, 00:19:43.525 "base_bdevs_list": [ 00:19:43.525 { 00:19:43.525 "name": "spare", 00:19:43.525 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:43.525 "is_configured": true, 00:19:43.525 "data_offset": 0, 00:19:43.525 "data_size": 65536 00:19:43.525 }, 00:19:43.525 { 00:19:43.525 "name": "BaseBdev2", 00:19:43.525 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:43.525 "is_configured": true, 00:19:43.525 "data_offset": 0, 00:19:43.525 "data_size": 65536 00:19:43.525 } 00:19:43.525 ] 00:19:43.525 }' 00:19:43.525 02:43:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:43.525 02:43:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.525 02:43:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:43.525 [2024-07-11 02:43:08.562619] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@657 -- # local timeout=402 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.784 02:43:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.043 02:43:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:44.043 "name": "raid_bdev1", 00:19:44.043 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:44.043 "strip_size_kb": 0, 00:19:44.043 "state": "online", 00:19:44.043 "raid_level": "raid1", 00:19:44.043 "superblock": false, 00:19:44.043 "num_base_bdevs": 2, 00:19:44.043 "num_base_bdevs_discovered": 2, 00:19:44.043 "num_base_bdevs_operational": 2, 00:19:44.043 "process": { 00:19:44.043 "type": "rebuild", 00:19:44.043 "target": "spare", 00:19:44.043 "progress": { 00:19:44.043 "blocks": 20480, 00:19:44.043 "percent": 31 00:19:44.043 } 00:19:44.043 }, 00:19:44.043 "base_bdevs_list": [ 00:19:44.043 { 00:19:44.043 "name": "spare", 00:19:44.043 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:44.044 "is_configured": true, 00:19:44.044 "data_offset": 0, 00:19:44.044 "data_size": 65536 00:19:44.044 }, 00:19:44.044 { 00:19:44.044 "name": "BaseBdev2", 00:19:44.044 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:44.044 "is_configured": true, 
00:19:44.044 "data_offset": 0, 00:19:44.044 "data_size": 65536 00:19:44.044 } 00:19:44.044 ] 00:19:44.044 }' 00:19:44.044 02:43:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:44.044 [2024-07-11 02:43:08.895402] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:44.044 02:43:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.044 02:43:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:44.044 02:43:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.044 02:43:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:44.979 [2024-07-11 02:43:09.761746] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.979 02:43:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.238 02:43:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:45.238 "name": "raid_bdev1", 00:19:45.238 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:45.238 "strip_size_kb": 0, 00:19:45.238 "state": "online", 00:19:45.238 "raid_level": "raid1", 00:19:45.238 "superblock": false, 00:19:45.238 "num_base_bdevs": 2, 00:19:45.238 "num_base_bdevs_discovered": 2, 00:19:45.238 "num_base_bdevs_operational": 2, 00:19:45.238 "process": { 00:19:45.238 "type": "rebuild", 00:19:45.238 "target": "spare", 00:19:45.238 "progress": { 00:19:45.238 "blocks": 47104, 00:19:45.238 "percent": 71 00:19:45.238 } 00:19:45.238 }, 00:19:45.238 "base_bdevs_list": [ 00:19:45.238 { 00:19:45.238 "name": "spare", 00:19:45.238 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:45.238 "is_configured": true, 00:19:45.238 "data_offset": 0, 00:19:45.238 "data_size": 65536 00:19:45.238 }, 00:19:45.238 { 00:19:45.238 "name": "BaseBdev2", 00:19:45.238 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:45.238 "is_configured": true, 00:19:45.238 "data_offset": 0, 00:19:45.238 "data_size": 65536 00:19:45.238 } 00:19:45.238 ] 00:19:45.238 }' 00:19:45.238 02:43:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:45.238 02:43:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.238 02:43:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:45.496 02:43:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.496 02:43:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:45.496 [2024-07-11 02:43:10.403811] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:45.496 [2024-07-11 02:43:10.404350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:46.434 [2024-07-11 02:43:11.278462] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
00:19:46.434 02:43:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:46.434 02:43:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.434 02:43:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:46.434 02:43:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:46.434 02:43:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:46.435 02:43:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:46.435 02:43:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.435 02:43:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.435 [2024-07-11 02:43:11.384988] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.435 [2024-07-11 02:43:11.386462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:46.694 "name": "raid_bdev1", 00:19:46.694 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:46.694 "strip_size_kb": 0, 00:19:46.694 "state": "online", 00:19:46.694 "raid_level": "raid1", 00:19:46.694 "superblock": false, 00:19:46.694 "num_base_bdevs": 2, 00:19:46.694 "num_base_bdevs_discovered": 2, 00:19:46.694 "num_base_bdevs_operational": 2, 00:19:46.694 "base_bdevs_list": [ 00:19:46.694 { 00:19:46.694 "name": "spare", 00:19:46.694 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:46.694 "is_configured": true, 00:19:46.694 "data_offset": 0, 00:19:46.694 "data_size": 65536 00:19:46.694 }, 00:19:46.694 { 00:19:46.694 "name": "BaseBdev2", 00:19:46.694 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:46.694 "is_configured": true, 00:19:46.694 "data_offset": 0, 00:19:46.694 "data_size": 65536 00:19:46.694 } 00:19:46.694 ] 00:19:46.694 }' 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@660 -- # break 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.694 02:43:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.954 02:43:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:46.954 "name": "raid_bdev1", 00:19:46.954 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:46.954 "strip_size_kb": 0, 00:19:46.954 "state": "online", 00:19:46.954 "raid_level": "raid1", 00:19:46.954 "superblock": false, 00:19:46.954 "num_base_bdevs": 2, 00:19:46.954 "num_base_bdevs_discovered": 2, 00:19:46.954 "num_base_bdevs_operational": 2, 00:19:46.954 "base_bdevs_list": [ 00:19:46.954 { 00:19:46.954 "name": "spare", 00:19:46.954 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:46.954 "is_configured": true, 
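"process completed on raid_bdev1" marks the end of the rebuild whose progress the dumps above tracked (blocks 14336 -> 20480 -> 47104, percent 21 -> 31 -> 71). Those numbers come from the process object returned by bdev_raid_get_bdevs; a hedged polling sketch over the same fields, using the socket from this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # poll until the process object disappears, i.e. .process.type reverts to "none"
    while :; do
        pct=$($rpc bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        [ "$pct" = done ] && break
        echo "rebuild: ${pct}%"
        sleep 1
    done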
00:19:46.954 "data_offset": 0, 00:19:46.954 "data_size": 65536 00:19:46.954 }, 00:19:46.954 { 00:19:46.954 "name": "BaseBdev2", 00:19:46.954 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:46.954 "is_configured": true, 00:19:46.954 "data_offset": 0, 00:19:46.954 "data_size": 65536 00:19:46.954 } 00:19:46.954 ] 00:19:46.954 }' 00:19:46.954 02:43:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:46.954 02:43:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:46.954 02:43:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.213 02:43:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.214 02:43:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.214 02:43:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.214 02:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.214 "name": "raid_bdev1", 00:19:47.214 "uuid": "60121813-cfc2-404b-9356-3d5c0beb69ba", 00:19:47.214 "strip_size_kb": 0, 00:19:47.214 "state": "online", 00:19:47.214 "raid_level": "raid1", 00:19:47.214 "superblock": false, 00:19:47.214 "num_base_bdevs": 2, 00:19:47.214 "num_base_bdevs_discovered": 2, 00:19:47.214 "num_base_bdevs_operational": 2, 00:19:47.214 "base_bdevs_list": [ 00:19:47.214 { 00:19:47.214 "name": "spare", 00:19:47.214 "uuid": "ff50d641-2f21-566a-9651-61930e0b9f88", 00:19:47.214 "is_configured": true, 00:19:47.214 "data_offset": 0, 00:19:47.214 "data_size": 65536 00:19:47.214 }, 00:19:47.214 { 00:19:47.214 "name": "BaseBdev2", 00:19:47.214 "uuid": "7b48f5e9-053e-443e-9e77-0f39ef711ed9", 00:19:47.214 "is_configured": true, 00:19:47.214 "data_offset": 0, 00:19:47.214 "data_size": 65536 00:19:47.214 } 00:19:47.214 ] 00:19:47.214 }' 00:19:47.214 02:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.214 02:43:12 -- common/autotest_common.sh@10 -- # set +x 00:19:47.782 02:43:12 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:48.040 [2024-07-11 02:43:13.025387] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.040 [2024-07-11 02:43:13.025582] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.040 00:19:48.040 Latency(us) 00:19:48.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.040 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:48.040 raid_bdev1 : 10.26 118.57 355.71 0.00 0.00 11662.82 268.10 112960.23 00:19:48.040 
=================================================================================================================== 00:19:48.040 Total : 118.57 355.71 0.00 0.00 11662.82 268.10 112960.23 00:19:48.040 [2024-07-11 02:43:13.116678] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.040 [2024-07-11 02:43:13.116861] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.040 [2024-07-11 02:43:13.116969] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.040 0 00:19:48.040 [2024-07-11 02:43:13.117164] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid_bdev1, state offline 00:19:48.298 02:43:13 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.298 02:43:13 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:48.298 02:43:13 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:48.298 02:43:13 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:19:48.298 02:43:13 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@12 -- # local i 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.298 02:43:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:48.864 /dev/nbd0 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.864 02:43:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:48.864 02:43:13 -- common/autotest_common.sh@857 -- # local i 00:19:48.864 02:43:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:48.864 02:43:13 -- common/autotest_common.sh@861 -- # break 00:19:48.864 02:43:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.864 1+0 records in 00:19:48.864 1+0 records out 00:19:48.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426452 s, 9.6 MB/s 00:19:48.864 02:43:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.864 02:43:13 -- common/autotest_common.sh@874 -- # size=4096 00:19:48.864 02:43:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.864 02:43:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:48.864 02:43:13 -- common/autotest_common.sh@877 -- # return 0 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.864 02:43:13 -- 
bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:19:48.864 02:43:13 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:19:48.864 02:43:13 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@12 -- # local i 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:48.864 /dev/nbd1 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:48.864 02:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:48.864 02:43:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:48.864 02:43:13 -- common/autotest_common.sh@857 -- # local i 00:19:48.864 02:43:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:48.864 02:43:13 -- common/autotest_common.sh@861 -- # break 00:19:48.864 02:43:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:48.864 02:43:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.122 1+0 records in 00:19:49.122 1+0 records out 00:19:49.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505508 s, 8.1 MB/s 00:19:49.122 02:43:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.122 02:43:13 -- common/autotest_common.sh@874 -- # size=4096 00:19:49.122 02:43:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.122 02:43:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:49.122 02:43:13 -- common/autotest_common.sh@877 -- # return 0 00:19:49.122 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.122 02:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:49.122 02:43:13 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:49.122 02:43:14 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@51 -- # local i 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.122 02:43:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:19:49.380 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@41 -- # break 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.380 02:43:14 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@51 -- # local i 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.380 02:43:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@41 -- # break 00:19:49.638 02:43:14 -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.638 02:43:14 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:49.638 02:43:14 -- bdev/bdev_raid.sh@709 -- # killprocess 136349 00:19:49.638 02:43:14 -- common/autotest_common.sh@926 -- # '[' -z 136349 ']' 00:19:49.638 02:43:14 -- common/autotest_common.sh@930 -- # kill -0 136349 00:19:49.638 02:43:14 -- common/autotest_common.sh@931 -- # uname 00:19:49.638 02:43:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.638 02:43:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136349 00:19:49.638 killing process with pid 136349 00:19:49.638 Received shutdown signal, test time was about 11.840856 seconds 00:19:49.638 00:19:49.638 Latency(us) 00:19:49.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.638 =================================================================================================================== 00:19:49.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.638 02:43:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.638 02:43:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.638 02:43:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136349' 00:19:49.638 02:43:14 -- common/autotest_common.sh@945 -- # kill 136349 00:19:49.638 02:43:14 -- common/autotest_common.sh@950 -- # wait 136349 00:19:49.638 [2024-07-11 02:43:14.690153] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.638 [2024-07-11 02:43:14.713906] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:19:49.896 ************************************ 00:19:49.896 END TEST raid_rebuild_test_io 00:19:49.896 ************************************ 00:19:49.896 02:43:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:49.896 00:19:49.896 real 0m15.689s 00:19:49.896 user 0m25.005s 00:19:49.896 sys 0m1.667s 00:19:49.896 02:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.896 02:43:14 -- common/autotest_common.sh@10 -- # set +x 00:19:50.154 02:43:14 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:19:50.154 02:43:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:50.154 02:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.154 02:43:14 -- common/autotest_common.sh@10 -- # set +x 00:19:50.154 ************************************ 00:19:50.154 START TEST raid_rebuild_test_sb_io 00:19:50.154 ************************************ 00:19:50.154 02:43:15 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=136841 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136841 /var/tmp/spdk-raid.sock 00:19:50.154 02:43:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:50.154 02:43:15 -- common/autotest_common.sh@819 -- # '[' -z 136841 ']' 00:19:50.154 02:43:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:50.154 02:43:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.154 02:43:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
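The second variant (raid_rebuild_test_sb_io, superblock=true) starts here: a fresh bdevperf is launched and the harness blocks in waitforlisten until the new RPC socket answers, only then issuing the malloc/passthru configuration RPCs. A minimal stand-in for that wait, using rpc_get_methods as the liveness probe (the real helper in autotest_common.sh has its own retry limits):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                        # roughly 10 s of 0.1 s retries
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break                                    # app is listening; safe to issue config RPCs
        fi
        sleep 0.1
    done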
00:19:50.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:50.154 02:43:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.154 02:43:15 -- common/autotest_common.sh@10 -- # set +x 00:19:50.154 [2024-07-11 02:43:15.067613] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:50.154 [2024-07-11 02:43:15.068540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136841 ] 00:19:50.154 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:50.155 Zero copy mechanism will not be used. 00:19:50.155 [2024-07-11 02:43:15.213107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.412 [2024-07-11 02:43:15.285595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.412 [2024-07-11 02:43:15.337988] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.980 02:43:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.980 02:43:16 -- common/autotest_common.sh@852 -- # return 0 00:19:50.980 02:43:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:50.980 02:43:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:50.980 02:43:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:51.239 BaseBdev1_malloc 00:19:51.239 02:43:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:51.498 [2024-07-11 02:43:16.472580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:51.498 [2024-07-11 02:43:16.472837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.498 [2024-07-11 02:43:16.472907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:51.498 [2024-07-11 02:43:16.473148] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.498 [2024-07-11 02:43:16.475720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.498 [2024-07-11 02:43:16.475930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.498 BaseBdev1 00:19:51.498 02:43:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:51.498 02:43:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:51.498 02:43:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:51.756 BaseBdev2_malloc 00:19:51.756 02:43:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:52.017 [2024-07-11 02:43:16.911094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:52.017 [2024-07-11 02:43:16.911344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.017 [2024-07-11 02:43:16.911416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:52.017 [2024-07-11 02:43:16.911552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:52.017 [2024-07-11 02:43:16.913497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.017 [2024-07-11 02:43:16.913704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:52.017 BaseBdev2 00:19:52.017 02:43:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:52.284 spare_malloc 00:19:52.284 02:43:17 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:52.284 spare_delay 00:19:52.284 02:43:17 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:52.553 [2024-07-11 02:43:17.520998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.553 [2024-07-11 02:43:17.521229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.553 [2024-07-11 02:43:17.521306] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:52.553 [2024-07-11 02:43:17.521546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.553 [2024-07-11 02:43:17.523774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.553 [2024-07-11 02:43:17.523956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.553 spare 00:19:52.553 02:43:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:52.811 [2024-07-11 02:43:17.749156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.812 [2024-07-11 02:43:17.751197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.812 [2024-07-11 02:43:17.751561] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:52.812 [2024-07-11 02:43:17.751694] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:52.812 [2024-07-11 02:43:17.751873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:52.812 [2024-07-11 02:43:17.752361] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:52.812 [2024-07-11 02:43:17.752487] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:52.812 [2024-07-11 02:43:17.752725] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.812 02:43:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.070 02:43:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.070 "name": "raid_bdev1", 00:19:53.070 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:53.070 "strip_size_kb": 0, 00:19:53.070 "state": "online", 00:19:53.070 "raid_level": "raid1", 00:19:53.070 "superblock": true, 00:19:53.070 "num_base_bdevs": 2, 00:19:53.070 "num_base_bdevs_discovered": 2, 00:19:53.070 "num_base_bdevs_operational": 2, 00:19:53.070 "base_bdevs_list": [ 00:19:53.070 { 00:19:53.070 "name": "BaseBdev1", 00:19:53.070 "uuid": "0d6c55aa-eb61-5e7c-8773-9a336bab4795", 00:19:53.070 "is_configured": true, 00:19:53.070 "data_offset": 2048, 00:19:53.070 "data_size": 63488 00:19:53.070 }, 00:19:53.070 { 00:19:53.070 "name": "BaseBdev2", 00:19:53.070 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:53.070 "is_configured": true, 00:19:53.070 "data_offset": 2048, 00:19:53.070 "data_size": 63488 00:19:53.070 } 00:19:53.070 ] 00:19:53.070 }' 00:19:53.070 02:43:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.070 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 02:43:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:53.638 02:43:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:53.896 [2024-07-11 02:43:18.881472] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.896 02:43:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:53.896 02:43:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.896 02:43:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:54.155 02:43:19 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:54.155 02:43:19 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:54.155 02:43:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:54.155 02:43:19 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:54.155 [2024-07-11 02:43:19.187788] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:54.155 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:54.155 Zero copy mechanism will not be used. 00:19:54.155 Running I/O for 60 seconds... 
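Before the workload starts, it is worth decoding how the array above was assembled and why the sizes come out the way they do. Each member is a 32 MiB malloc bdev (65536 blocks of 512 B) behind a passthru bdev; the spare additionally sits behind a delay bdev so that rebuild writes are slow enough to observe and interrupt. The superblock variant reserves 2048 blocks per member as metadata, which is exactly the raid_bdev_size=63488 and data_offset=2048 extracted above (65536 - 2048 = 63488). A sketch of the chain, following the traced RPCs:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # plain member: malloc backing store behind a hot-removable passthru
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # (BaseBdev2 is built the same way)

    # spare: same, plus a delay bdev that holds every write for 100 ms
    # (-w/-n are average and p99 write latency in microseconds; reads pass at 0)
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

    # RAID1 with an on-disk superblock (-s) over the two plain members
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

With the array online, the trace then backgrounds bdevperf.py perform_tests and pulls BaseBdev1 out (@591/@574 above; the xtrace ordering looks inverted because the workload runs in the background), degrading the mirror under live I/O.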
00:19:54.414 [2024-07-11 02:43:19.341684] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.414 [2024-07-11 02:43:19.348657] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.414 02:43:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.673 02:43:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.673 "name": "raid_bdev1", 00:19:54.673 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:54.673 "strip_size_kb": 0, 00:19:54.673 "state": "online", 00:19:54.673 "raid_level": "raid1", 00:19:54.673 "superblock": true, 00:19:54.673 "num_base_bdevs": 2, 00:19:54.673 "num_base_bdevs_discovered": 1, 00:19:54.673 "num_base_bdevs_operational": 1, 00:19:54.673 "base_bdevs_list": [ 00:19:54.673 { 00:19:54.673 "name": null, 00:19:54.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.673 "is_configured": false, 00:19:54.673 "data_offset": 2048, 00:19:54.673 "data_size": 63488 00:19:54.673 }, 00:19:54.673 { 00:19:54.673 "name": "BaseBdev2", 00:19:54.673 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:54.673 "is_configured": true, 00:19:54.673 "data_offset": 2048, 00:19:54.673 "data_size": 63488 00:19:54.673 } 00:19:54.673 ] 00:19:54.673 }' 00:19:54.673 02:43:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.673 02:43:19 -- common/autotest_common.sh@10 -- # set +x 00:19:55.238 02:43:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.496 [2024-07-11 02:43:20.432799] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:55.496 [2024-07-11 02:43:20.433085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.496 02:43:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:55.496 [2024-07-11 02:43:20.478944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:19:55.496 [2024-07-11 02:43:20.480903] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.755 [2024-07-11 02:43:20.601453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:55.755 [2024-07-11 02:43:20.601989] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:55.755 [2024-07-11 02:43:20.815826] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:19:55.755 [2024-07-11 02:43:20.816106] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:56.322 [2024-07-11 02:43:21.139249] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:56.580 02:43:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.580 02:43:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.581 02:43:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.581 02:43:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.581 02:43:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.581 02:43:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.581 02:43:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.581 [2024-07-11 02:43:21.596391] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.838 "name": "raid_bdev1", 00:19:56.838 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:56.838 "strip_size_kb": 0, 00:19:56.838 "state": "online", 00:19:56.838 "raid_level": "raid1", 00:19:56.838 "superblock": true, 00:19:56.838 "num_base_bdevs": 2, 00:19:56.838 "num_base_bdevs_discovered": 2, 00:19:56.838 "num_base_bdevs_operational": 2, 00:19:56.838 "process": { 00:19:56.838 "type": "rebuild", 00:19:56.838 "target": "spare", 00:19:56.838 "progress": { 00:19:56.838 "blocks": 14336, 00:19:56.838 "percent": 22 00:19:56.838 } 00:19:56.838 }, 00:19:56.838 "base_bdevs_list": [ 00:19:56.838 { 00:19:56.838 "name": "spare", 00:19:56.838 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:19:56.838 "is_configured": true, 00:19:56.838 "data_offset": 2048, 00:19:56.838 "data_size": 63488 00:19:56.838 }, 00:19:56.838 { 00:19:56.838 "name": "BaseBdev2", 00:19:56.838 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:56.838 "is_configured": true, 00:19:56.838 "data_offset": 2048, 00:19:56.838 "data_size": 63488 00:19:56.838 } 00:19:56.838 ] 00:19:56.838 }' 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.838 [2024-07-11 02:43:21.811282] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:56.838 [2024-07-11 02:43:21.811705] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.838 02:43:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:57.096 [2024-07-11 02:43:22.055702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.096 [2024-07-11 02:43:22.127909] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:57.096 [2024-07-11 02:43:22.128402] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:57.354 
[2024-07-11 02:43:22.235039] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:57.354 [2024-07-11 02:43:22.237006] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.354 [2024-07-11 02:43:22.243821] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.354 02:43:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.612 02:43:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.612 "name": "raid_bdev1", 00:19:57.612 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:57.612 "strip_size_kb": 0, 00:19:57.612 "state": "online", 00:19:57.612 "raid_level": "raid1", 00:19:57.612 "superblock": true, 00:19:57.612 "num_base_bdevs": 2, 00:19:57.612 "num_base_bdevs_discovered": 1, 00:19:57.612 "num_base_bdevs_operational": 1, 00:19:57.612 "base_bdevs_list": [ 00:19:57.612 { 00:19:57.612 "name": null, 00:19:57.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.612 "is_configured": false, 00:19:57.612 "data_offset": 2048, 00:19:57.612 "data_size": 63488 00:19:57.612 }, 00:19:57.612 { 00:19:57.612 "name": "BaseBdev2", 00:19:57.612 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:57.612 "is_configured": true, 00:19:57.612 "data_offset": 2048, 00:19:57.612 "data_size": 63488 00:19:57.612 } 00:19:57.612 ] 00:19:57.612 }' 00:19:57.612 02:43:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.612 02:43:22 -- common/autotest_common.sh@10 -- # set +x 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.180 02:43:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.439 02:43:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.439 "name": "raid_bdev1", 00:19:58.439 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:58.439 "strip_size_kb": 0, 00:19:58.439 "state": "online", 00:19:58.439 "raid_level": "raid1", 00:19:58.439 "superblock": true, 00:19:58.439 "num_base_bdevs": 2, 00:19:58.439 "num_base_bdevs_discovered": 1, 00:19:58.439 
"num_base_bdevs_operational": 1, 00:19:58.439 "base_bdevs_list": [ 00:19:58.439 { 00:19:58.439 "name": null, 00:19:58.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.439 "is_configured": false, 00:19:58.439 "data_offset": 2048, 00:19:58.439 "data_size": 63488 00:19:58.439 }, 00:19:58.439 { 00:19:58.439 "name": "BaseBdev2", 00:19:58.439 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:58.439 "is_configured": true, 00:19:58.439 "data_offset": 2048, 00:19:58.439 "data_size": 63488 00:19:58.439 } 00:19:58.439 ] 00:19:58.439 }' 00:19:58.439 02:43:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.696 02:43:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:58.696 02:43:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.696 02:43:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:58.696 02:43:23 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:58.696 [2024-07-11 02:43:23.783399] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:58.696 [2024-07-11 02:43:23.783586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.954 02:43:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:58.954 [2024-07-11 02:43:23.824771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:19:58.954 [2024-07-11 02:43:23.826856] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.954 [2024-07-11 02:43:23.935598] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:58.954 [2024-07-11 02:43:23.936023] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:58.954 [2024-07-11 02:43:24.044685] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:58.954 [2024-07-11 02:43:24.044965] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:59.212 [2024-07-11 02:43:24.266911] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:59.212 [2024-07-11 02:43:24.267437] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:59.470 [2024-07-11 02:43:24.475524] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:59.470 [2024-07-11 02:43:24.475961] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.728 02:43:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.728 
[2024-07-11 02:43:24.817212] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:59.986 [2024-07-11 02:43:24.918660] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:59.986 02:43:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.986 "name": "raid_bdev1", 00:19:59.986 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:19:59.986 "strip_size_kb": 0, 00:19:59.986 "state": "online", 00:19:59.986 "raid_level": "raid1", 00:19:59.986 "superblock": true, 00:19:59.986 "num_base_bdevs": 2, 00:19:59.986 "num_base_bdevs_discovered": 2, 00:19:59.986 "num_base_bdevs_operational": 2, 00:19:59.986 "process": { 00:19:59.986 "type": "rebuild", 00:19:59.986 "target": "spare", 00:19:59.986 "progress": { 00:19:59.986 "blocks": 16384, 00:19:59.986 "percent": 25 00:19:59.986 } 00:19:59.986 }, 00:19:59.986 "base_bdevs_list": [ 00:19:59.986 { 00:19:59.986 "name": "spare", 00:19:59.986 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:19:59.986 "is_configured": true, 00:19:59.986 "data_offset": 2048, 00:19:59.986 "data_size": 63488 00:19:59.987 }, 00:19:59.987 { 00:19:59.987 "name": "BaseBdev2", 00:19:59.987 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:19:59.987 "is_configured": true, 00:19:59.987 "data_offset": 2048, 00:19:59.987 "data_size": 63488 00:19:59.987 } 00:19:59.987 ] 00:19:59.987 }' 00:19:59.987 02:43:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:00.245 [2024-07-11 02:43:25.170662] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:00.245 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@657 -- # local timeout=419 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.245 02:43:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:00.504 "name": "raid_bdev1", 00:20:00.504 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:00.504 "strip_size_kb": 0, 00:20:00.504 "state": "online", 00:20:00.504 "raid_level": "raid1", 00:20:00.504 "superblock": true, 00:20:00.504 
"num_base_bdevs": 2, 00:20:00.504 "num_base_bdevs_discovered": 2, 00:20:00.504 "num_base_bdevs_operational": 2, 00:20:00.504 "process": { 00:20:00.504 "type": "rebuild", 00:20:00.504 "target": "spare", 00:20:00.504 "progress": { 00:20:00.504 "blocks": 20480, 00:20:00.504 "percent": 32 00:20:00.504 } 00:20:00.504 }, 00:20:00.504 "base_bdevs_list": [ 00:20:00.504 { 00:20:00.504 "name": "spare", 00:20:00.504 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:00.504 "is_configured": true, 00:20:00.504 "data_offset": 2048, 00:20:00.504 "data_size": 63488 00:20:00.504 }, 00:20:00.504 { 00:20:00.504 "name": "BaseBdev2", 00:20:00.504 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:00.504 "is_configured": true, 00:20:00.504 "data_offset": 2048, 00:20:00.504 "data_size": 63488 00:20:00.504 } 00:20:00.504 ] 00:20:00.504 }' 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:00.504 [2024-07-11 02:43:25.392622] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:00.504 [2024-07-11 02:43:25.393011] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.504 02:43:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:00.762 [2024-07-11 02:43:25.844264] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:01.695 [2024-07-11 02:43:26.493446] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:01.695 "name": "raid_bdev1", 00:20:01.695 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:01.695 "strip_size_kb": 0, 00:20:01.695 "state": "online", 00:20:01.695 "raid_level": "raid1", 00:20:01.695 "superblock": true, 00:20:01.695 "num_base_bdevs": 2, 00:20:01.695 "num_base_bdevs_discovered": 2, 00:20:01.695 "num_base_bdevs_operational": 2, 00:20:01.695 "process": { 00:20:01.695 "type": "rebuild", 00:20:01.695 "target": "spare", 00:20:01.695 "progress": { 00:20:01.695 "blocks": 43008, 00:20:01.695 "percent": 67 00:20:01.695 } 00:20:01.695 }, 00:20:01.695 "base_bdevs_list": [ 00:20:01.695 { 00:20:01.695 "name": "spare", 00:20:01.695 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:01.695 "is_configured": true, 00:20:01.695 "data_offset": 2048, 00:20:01.695 "data_size": 63488 00:20:01.695 }, 00:20:01.695 { 00:20:01.695 "name": "BaseBdev2", 
00:20:01.695 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:01.695 "is_configured": true, 00:20:01.695 "data_offset": 2048, 00:20:01.695 "data_size": 63488 00:20:01.695 } 00:20:01.695 ] 00:20:01.695 }' 00:20:01.695 02:43:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:01.954 02:43:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.954 02:43:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:01.954 02:43:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.954 02:43:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:01.954 [2024-07-11 02:43:26.922677] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:01.954 [2024-07-11 02:43:26.923017] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:02.212 [2024-07-11 02:43:27.246614] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.778 02:43:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.037 [2024-07-11 02:43:27.898322] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:03.037 [2024-07-11 02:43:27.998371] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:03.037 [2024-07-11 02:43:28.000190] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.037 02:43:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:03.037 "name": "raid_bdev1", 00:20:03.037 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:03.037 "strip_size_kb": 0, 00:20:03.037 "state": "online", 00:20:03.037 "raid_level": "raid1", 00:20:03.037 "superblock": true, 00:20:03.037 "num_base_bdevs": 2, 00:20:03.037 "num_base_bdevs_discovered": 2, 00:20:03.037 "num_base_bdevs_operational": 2, 00:20:03.037 "base_bdevs_list": [ 00:20:03.037 { 00:20:03.037 "name": "spare", 00:20:03.037 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:03.037 "is_configured": true, 00:20:03.037 "data_offset": 2048, 00:20:03.037 "data_size": 63488 00:20:03.037 }, 00:20:03.037 { 00:20:03.037 "name": "BaseBdev2", 00:20:03.037 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:03.037 "is_configured": true, 00:20:03.037 "data_offset": 2048, 00:20:03.037 "data_size": 63488 00:20:03.037 } 00:20:03.037 ] 00:20:03.037 }' 00:20:03.037 02:43:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@660 -- # break 00:20:03.295 02:43:28 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.295 02:43:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:03.553 "name": "raid_bdev1", 00:20:03.553 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:03.553 "strip_size_kb": 0, 00:20:03.553 "state": "online", 00:20:03.553 "raid_level": "raid1", 00:20:03.553 "superblock": true, 00:20:03.553 "num_base_bdevs": 2, 00:20:03.553 "num_base_bdevs_discovered": 2, 00:20:03.553 "num_base_bdevs_operational": 2, 00:20:03.553 "base_bdevs_list": [ 00:20:03.553 { 00:20:03.553 "name": "spare", 00:20:03.553 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:03.553 "is_configured": true, 00:20:03.553 "data_offset": 2048, 00:20:03.553 "data_size": 63488 00:20:03.553 }, 00:20:03.553 { 00:20:03.553 "name": "BaseBdev2", 00:20:03.553 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:03.553 "is_configured": true, 00:20:03.553 "data_offset": 2048, 00:20:03.553 "data_size": 63488 00:20:03.553 } 00:20:03.553 ] 00:20:03.553 }' 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.553 02:43:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.811 02:43:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.811 "name": "raid_bdev1", 00:20:03.811 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:03.811 "strip_size_kb": 0, 00:20:03.811 "state": "online", 00:20:03.811 "raid_level": "raid1", 00:20:03.811 "superblock": true, 00:20:03.811 "num_base_bdevs": 2, 00:20:03.811 "num_base_bdevs_discovered": 2, 00:20:03.811 "num_base_bdevs_operational": 2, 00:20:03.811 "base_bdevs_list": [ 00:20:03.811 { 00:20:03.811 "name": "spare", 00:20:03.811 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:03.811 "is_configured": true, 00:20:03.811 "data_offset": 2048, 
00:20:03.811 "data_size": 63488 00:20:03.811 }, 00:20:03.811 { 00:20:03.811 "name": "BaseBdev2", 00:20:03.811 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:03.811 "is_configured": true, 00:20:03.811 "data_offset": 2048, 00:20:03.811 "data_size": 63488 00:20:03.811 } 00:20:03.811 ] 00:20:03.811 }' 00:20:03.811 02:43:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.811 02:43:28 -- common/autotest_common.sh@10 -- # set +x 00:20:04.745 02:43:29 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:04.745 [2024-07-11 02:43:29.736850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.745 [2024-07-11 02:43:29.737044] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.745 00:20:04.745 Latency(us) 00:20:04.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.745 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:04.745 raid_bdev1 : 10.64 123.28 369.83 0.00 0.00 10741.19 269.96 113436.86 00:20:04.745 =================================================================================================================== 00:20:04.745 Total : 123.28 369.83 0.00 0.00 10741.19 269.96 113436.86 00:20:04.745 [2024-07-11 02:43:29.836411] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.745 [2024-07-11 02:43:29.836601] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.003 0 00:20:05.003 [2024-07-11 02:43:29.836727] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.003 [2024-07-11 02:43:29.836745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:05.003 02:43:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.003 02:43:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:05.262 02:43:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:05.262 02:43:30 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:05.262 02:43:30 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@12 -- # local i 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.262 02:43:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:05.520 /dev/nbd0 00:20:05.520 02:43:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.520 02:43:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.520 02:43:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:05.520 02:43:30 -- common/autotest_common.sh@857 -- # local i 00:20:05.520 02:43:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:05.520 02:43:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:05.520 02:43:30 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:05.520 02:43:30 -- common/autotest_common.sh@861 -- # break 00:20:05.520 02:43:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:05.520 02:43:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:05.521 02:43:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.521 1+0 records in 00:20:05.521 1+0 records out 00:20:05.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321917 s, 12.7 MB/s 00:20:05.521 02:43:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.521 02:43:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:05.521 02:43:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.521 02:43:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:05.521 02:43:30 -- common/autotest_common.sh@877 -- # return 0 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.521 02:43:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:05.521 02:43:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:05.521 02:43:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@12 -- # local i 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:05.521 /dev/nbd1 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:05.521 02:43:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:05.521 02:43:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:05.521 02:43:30 -- common/autotest_common.sh@857 -- # local i 00:20:05.521 02:43:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:05.521 02:43:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:05.521 02:43:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:05.779 02:43:30 -- common/autotest_common.sh@861 -- # break 00:20:05.779 02:43:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:05.779 02:43:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:05.779 02:43:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.779 1+0 records in 00:20:05.779 1+0 records out 00:20:05.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182674 s, 22.4 MB/s 00:20:05.779 02:43:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.779 02:43:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:05.779 02:43:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.779 02:43:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
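Two quick sanity checks on this stretch. The run summary table above is internally consistent: throughput is IOPS times the 3 MiB transfer size, and the roughly 10.64 s runtime reflects the test tearing the array down well before bdevperf's 60 s budget. And the cmp just below is the real data-integrity assertion: with raid_bdev1 deleted, the rebuilt spare and the surviving member are exported over NBD and compared byte for byte, skipping the superblock reserve at the head of each device, where the members may legitimately differ (for example in sequence numbers):

    # 123.28 IOPS * 3 MiB per I/O = 369.84 MiB/s (table shows 369.83 -- rounding)

    # data_offset is 2048 blocks * 512 B = 1048576 B = 1 MiB, hence -i 1048576,
    # which makes cmp skip that many bytes of both inputs
    cmp -i 1048576 /dev/nbd0 /dev/nbd1   # nbd0 = rebuilt spare, nbd1 = BaseBdev2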
00:20:05.779 02:43:30 -- common/autotest_common.sh@877 -- # return 0 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.779 02:43:30 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:05.779 02:43:30 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@51 -- # local i 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.779 02:43:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:06.038 02:43:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@41 -- # break 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.038 02:43:31 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@51 -- # local i 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.038 02:43:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.297 02:43:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:06.556 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:06.556 02:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.556 02:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.556 02:43:31 -- bdev/nbd_common.sh@41 -- # break 00:20:06.556 02:43:31 -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.556 02:43:31 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:06.556 02:43:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:06.556 02:43:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:06.556 02:43:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:06.814 
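The teardown above uses nbd_common.sh's polling guard: after nbd_stop_disk, the helper re-checks /proc/partitions at 0.1 s intervals, up to 20 times, until the nbd device disappears. The passthru_delete at the end of this span then opens the persistence phase: each member's passthru bdev is deleted and re-created, and on re-registration the raid module's examine path finds the on-disk superblock and re-claims the bdev. BaseBdev2 demonstrates the sequence-number rule traced below: its superblock carries seq_number 3, newer than the half-assembled raid bdev's 1, so the stale raid bdev is dropped and reassembled from the newer metadata, preserving the original uuid f5dda4c7-6980-4ab6-8562-1fd8429a817a. A minimal sketch of both pieces, reconstructed from the trace (the real helpers live in test/bdev/nbd_common.sh and may differ in detail):

    # (a) the polling guard traced above
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the device's /proc/partitions entry is gone
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1   # still present after ~2 s of polling
    }

    # (b) the per-member persistence round trip that follows
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete BaseBdev1
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # expected log: "raid superblock found on bdev BaseBdev1"
    #               "bdev BaseBdev1 is claimed"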
02:43:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:07.085 [2024-07-11 02:43:31.914497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:07.085 [2024-07-11 02:43:31.914646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.085 [2024-07-11 02:43:31.914683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:07.085 [2024-07-11 02:43:31.914710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.085 [2024-07-11 02:43:31.917008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.085 [2024-07-11 02:43:31.917087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:07.085 [2024-07-11 02:43:31.917185] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:07.085 [2024-07-11 02:43:31.917239] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.085 BaseBdev1 00:20:07.085 02:43:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:07.085 02:43:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:07.085 02:43:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:07.085 02:43:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:07.380 [2024-07-11 02:43:32.302570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:07.380 [2024-07-11 02:43:32.302665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.380 [2024-07-11 02:43:32.302698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:07.380 [2024-07-11 02:43:32.302719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.380 [2024-07-11 02:43:32.303122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.380 [2024-07-11 02:43:32.303178] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:07.380 [2024-07-11 02:43:32.303253] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:07.380 [2024-07-11 02:43:32.303267] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:07.380 [2024-07-11 02:43:32.303274] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.380 [2024-07-11 02:43:32.303297] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:20:07.380 [2024-07-11 02:43:32.303368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.380 BaseBdev2 00:20:07.380 02:43:32 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:07.644 02:43:32 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:07.903 [2024-07-11 02:43:32.746698] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.903 [2024-07-11 02:43:32.746783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.903 [2024-07-11 02:43:32.746838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:07.903 [2024-07-11 02:43:32.746856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.903 [2024-07-11 02:43:32.747267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.903 [2024-07-11 02:43:32.747343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.903 [2024-07-11 02:43:32.747454] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:07.903 [2024-07-11 02:43:32.747504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.903 spare 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.903 02:43:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.903 [2024-07-11 02:43:32.847609] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:20:07.903 [2024-07-11 02:43:32.847632] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:07.903 [2024-07-11 02:43:32.847743] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029870 00:20:07.903 [2024-07-11 02:43:32.848120] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:20:07.903 [2024-07-11 02:43:32.848142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:20:07.903 [2024-07-11 02:43:32.848247] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.161 02:43:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.161 "name": "raid_bdev1", 00:20:08.161 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:08.161 "strip_size_kb": 0, 00:20:08.161 "state": "online", 00:20:08.161 "raid_level": "raid1", 00:20:08.161 "superblock": true, 00:20:08.161 "num_base_bdevs": 2, 00:20:08.161 "num_base_bdevs_discovered": 2, 00:20:08.161 "num_base_bdevs_operational": 2, 00:20:08.161 "base_bdevs_list": [ 00:20:08.161 { 00:20:08.161 "name": "spare", 00:20:08.161 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:08.161 "is_configured": true, 00:20:08.161 "data_offset": 2048, 00:20:08.161 "data_size": 63488 00:20:08.161 }, 00:20:08.161 { 00:20:08.161 "name": "BaseBdev2", 00:20:08.161 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 
00:20:08.161 "is_configured": true, 00:20:08.161 "data_offset": 2048, 00:20:08.161 "data_size": 63488 00:20:08.161 } 00:20:08.161 ] 00:20:08.161 }' 00:20:08.161 02:43:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.161 02:43:33 -- common/autotest_common.sh@10 -- # set +x 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.727 02:43:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.985 02:43:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.985 "name": "raid_bdev1", 00:20:08.985 "uuid": "f5dda4c7-6980-4ab6-8562-1fd8429a817a", 00:20:08.985 "strip_size_kb": 0, 00:20:08.985 "state": "online", 00:20:08.985 "raid_level": "raid1", 00:20:08.985 "superblock": true, 00:20:08.985 "num_base_bdevs": 2, 00:20:08.985 "num_base_bdevs_discovered": 2, 00:20:08.985 "num_base_bdevs_operational": 2, 00:20:08.985 "base_bdevs_list": [ 00:20:08.985 { 00:20:08.985 "name": "spare", 00:20:08.985 "uuid": "423d1510-3dfe-5801-afaf-8bef85564888", 00:20:08.985 "is_configured": true, 00:20:08.985 "data_offset": 2048, 00:20:08.985 "data_size": 63488 00:20:08.985 }, 00:20:08.985 { 00:20:08.985 "name": "BaseBdev2", 00:20:08.985 "uuid": "3594d110-0f18-5e19-8d08-49732000c2b0", 00:20:08.985 "is_configured": true, 00:20:08.985 "data_offset": 2048, 00:20:08.985 "data_size": 63488 00:20:08.985 } 00:20:08.985 ] 00:20:08.985 }' 00:20:08.985 02:43:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.985 02:43:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:08.985 02:43:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.985 02:43:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:08.986 02:43:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.986 02:43:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:09.244 02:43:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.244 02:43:34 -- bdev/bdev_raid.sh@709 -- # killprocess 136841 00:20:09.244 02:43:34 -- common/autotest_common.sh@926 -- # '[' -z 136841 ']' 00:20:09.244 02:43:34 -- common/autotest_common.sh@930 -- # kill -0 136841 00:20:09.244 02:43:34 -- common/autotest_common.sh@931 -- # uname 00:20:09.244 02:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.244 02:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136841 00:20:09.244 killing process with pid 136841 00:20:09.244 Received shutdown signal, test time was about 15.082305 seconds 00:20:09.244 00:20:09.244 Latency(us) 00:20:09.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.244 =================================================================================================================== 00:20:09.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.244 02:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:09.244 02:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:20:09.244 02:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136841' 00:20:09.244 02:43:34 -- common/autotest_common.sh@945 -- # kill 136841 00:20:09.244 02:43:34 -- common/autotest_common.sh@950 -- # wait 136841 00:20:09.244 [2024-07-11 02:43:34.272217] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.244 [2024-07-11 02:43:34.272355] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.244 [2024-07-11 02:43:34.272438] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.244 [2024-07-11 02:43:34.272452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:20:09.244 [2024-07-11 02:43:34.296824] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.503 ************************************ 00:20:09.503 END TEST raid_rebuild_test_sb_io 00:20:09.503 ************************************ 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:09.503 00:20:09.503 real 0m19.528s 00:20:09.503 user 0m32.319s 00:20:09.503 sys 0m2.131s 00:20:09.503 02:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.503 02:43:34 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:09.503 02:43:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:09.503 02:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:09.503 02:43:34 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 ************************************ 00:20:09.503 START TEST raid_rebuild_test 00:20:09.503 ************************************ 00:20:09.503 02:43:34 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:09.503 02:43:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:09.761 02:43:34 -- 
bdev/bdev_raid.sh@523 -- # local strip_size 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=137406 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137406 /var/tmp/spdk-raid.sock 00:20:09.761 02:43:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:09.761 02:43:34 -- common/autotest_common.sh@819 -- # '[' -z 137406 ']' 00:20:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:09.761 02:43:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:09.761 02:43:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.761 02:43:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:09.761 02:43:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.761 02:43:34 -- common/autotest_common.sh@10 -- # set +x 00:20:09.761 [2024-07-11 02:43:34.635349] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:09.761 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:09.761 Zero copy mechanism will not be used. 00:20:09.761 [2024-07-11 02:43:34.635532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137406 ] 00:20:09.761 [2024-07-11 02:43:34.772825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.761 [2024-07-11 02:43:34.832921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.019 [2024-07-11 02:43:34.883542] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.584 02:43:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.584 02:43:35 -- common/autotest_common.sh@852 -- # return 0 00:20:10.584 02:43:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:10.584 02:43:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:10.584 02:43:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.864 BaseBdev1 00:20:10.864 02:43:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:10.864 02:43:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:10.864 02:43:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:11.120 BaseBdev2 00:20:11.120 02:43:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:11.120 02:43:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:11.120 02:43:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.376 BaseBdev3 00:20:11.377 02:43:36 -- 
bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:11.377 02:43:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:11.377 02:43:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.377 BaseBdev4 00:20:11.633 02:43:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:11.633 spare_malloc 00:20:11.633 02:43:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:11.889 spare_delay 00:20:11.889 02:43:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:12.146 [2024-07-11 02:43:37.117067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:12.146 [2024-07-11 02:43:37.117195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.146 [2024-07-11 02:43:37.117231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:12.146 [2024-07-11 02:43:37.117267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.146 [2024-07-11 02:43:37.119666] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.146 [2024-07-11 02:43:37.119732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:12.146 spare 00:20:12.146 02:43:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:12.403 [2024-07-11 02:43:37.305136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.403 [2024-07-11 02:43:37.306750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.403 [2024-07-11 02:43:37.306798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.403 [2024-07-11 02:43:37.306830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:12.403 [2024-07-11 02:43:37.306908] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:12.403 [2024-07-11 02:43:37.306919] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:12.403 [2024-07-11 02:43:37.307067] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:12.403 [2024-07-11 02:43:37.307406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:12.403 [2024-07-11 02:43:37.307419] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:12.403 [2024-07-11 02:43:37.307558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
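Note: the trace above assembles the device stack for this rebuild run over the bdevperf RPC socket. Each of the four base devices is a plain 32 MB, 512-byte-block malloc bdev, while the future rebuild target ("spare") is layered as malloc -> delay -> passthru so its write path can be slowed and observed. A condensed replay of the RPC calls recorded in this run, socket path and arguments exactly as logged; assuming the standard bdev_delay_create flag meanings (-r/-t avg and p99 read latency, -w/-n avg and p99 write latency, in microseconds), the spare gets zero read delay and roughly 100 ms of write delay:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1   # repeated for BaseBdev2..BaseBdev4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1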
00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.403 02:43:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.661 02:43:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.661 "name": "raid_bdev1", 00:20:12.661 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:12.661 "strip_size_kb": 0, 00:20:12.661 "state": "online", 00:20:12.661 "raid_level": "raid1", 00:20:12.661 "superblock": false, 00:20:12.661 "num_base_bdevs": 4, 00:20:12.661 "num_base_bdevs_discovered": 4, 00:20:12.661 "num_base_bdevs_operational": 4, 00:20:12.661 "base_bdevs_list": [ 00:20:12.661 { 00:20:12.661 "name": "BaseBdev1", 00:20:12.661 "uuid": "94240558-2e9a-410e-93ce-1e12b3059cf0", 00:20:12.661 "is_configured": true, 00:20:12.661 "data_offset": 0, 00:20:12.661 "data_size": 65536 00:20:12.661 }, 00:20:12.661 { 00:20:12.661 "name": "BaseBdev2", 00:20:12.661 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:12.661 "is_configured": true, 00:20:12.661 "data_offset": 0, 00:20:12.661 "data_size": 65536 00:20:12.661 }, 00:20:12.661 { 00:20:12.661 "name": "BaseBdev3", 00:20:12.662 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:12.662 "is_configured": true, 00:20:12.662 "data_offset": 0, 00:20:12.662 "data_size": 65536 00:20:12.662 }, 00:20:12.662 { 00:20:12.662 "name": "BaseBdev4", 00:20:12.662 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:12.662 "is_configured": true, 00:20:12.662 "data_offset": 0, 00:20:12.662 "data_size": 65536 00:20:12.662 } 00:20:12.662 ] 00:20:12.662 }' 00:20:12.662 02:43:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.662 02:43:37 -- common/autotest_common.sh@10 -- # set +x 00:20:13.227 02:43:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:13.227 02:43:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:13.485 [2024-07-11 02:43:38.477552] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.485 02:43:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:13.485 02:43:38 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.485 02:43:38 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:13.743 02:43:38 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:13.743 02:43:38 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:13.743 02:43:38 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:13.743 02:43:38 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@12 -- # local i 
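Note: the nbd_start_disks helper that begins here exports raid_bdev1 as a kernel block device and seeds it with random data, so the post-rebuild comparison later in the test has known content to check. The raid reports 65536 blocks of 512 bytes with data_offset 0, and the dd in the trace that follows fills exactly that, 33554432 bytes. Condensed from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct   # one pass over the full 32 MiB array
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0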
00:20:13.743 02:43:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.743 02:43:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:14.001 [2024-07-11 02:43:38.881456] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:20:14.001 /dev/nbd0 00:20:14.001 02:43:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:14.001 02:43:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:14.001 02:43:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:14.001 02:43:38 -- common/autotest_common.sh@857 -- # local i 00:20:14.001 02:43:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:14.001 02:43:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:14.001 02:43:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:14.001 02:43:38 -- common/autotest_common.sh@861 -- # break 00:20:14.001 02:43:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:14.001 02:43:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:14.001 02:43:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.001 1+0 records in 00:20:14.001 1+0 records out 00:20:14.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774528 s, 5.3 MB/s 00:20:14.001 02:43:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.001 02:43:38 -- common/autotest_common.sh@874 -- # size=4096 00:20:14.002 02:43:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.002 02:43:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:14.002 02:43:38 -- common/autotest_common.sh@877 -- # return 0 00:20:14.002 02:43:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.002 02:43:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.002 02:43:38 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:14.002 02:43:38 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:14.002 02:43:38 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:19.270 65536+0 records in 00:20:19.270 65536+0 records out 00:20:19.270 33554432 bytes (34 MB, 32 MiB) copied, 5.37092 s, 6.2 MB/s 00:20:19.270 02:43:44 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.270 02:43:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.528 [2024-07-11 02:43:44.552755] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.528 02:43:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:19.786 02:43:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:19.786 02:43:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.786 02:43:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.786 02:43:44 -- bdev/nbd_common.sh@41 -- # break 00:20:19.786 02:43:44 -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.786 02:43:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:20.043 [2024-07-11 02:43:44.956478] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.043 02:43:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.301 02:43:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.301 "name": "raid_bdev1", 00:20:20.301 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:20.301 "strip_size_kb": 0, 00:20:20.301 "state": "online", 00:20:20.301 "raid_level": "raid1", 00:20:20.301 "superblock": false, 00:20:20.301 "num_base_bdevs": 4, 00:20:20.301 "num_base_bdevs_discovered": 3, 00:20:20.301 "num_base_bdevs_operational": 3, 00:20:20.301 "base_bdevs_list": [ 00:20:20.301 { 00:20:20.301 "name": null, 00:20:20.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.301 "is_configured": false, 00:20:20.301 "data_offset": 0, 00:20:20.301 "data_size": 65536 00:20:20.301 }, 00:20:20.301 { 00:20:20.301 "name": "BaseBdev2", 00:20:20.301 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:20.301 "is_configured": true, 00:20:20.301 "data_offset": 0, 00:20:20.301 "data_size": 65536 00:20:20.301 }, 00:20:20.301 { 00:20:20.301 "name": "BaseBdev3", 00:20:20.301 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:20.301 "is_configured": true, 00:20:20.301 "data_offset": 0, 00:20:20.301 "data_size": 65536 00:20:20.301 }, 00:20:20.301 { 00:20:20.301 "name": "BaseBdev4", 00:20:20.301 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:20.301 "is_configured": true, 00:20:20.301 "data_offset": 0, 00:20:20.301 "data_size": 65536 00:20:20.301 } 00:20:20.301 ] 00:20:20.301 }' 00:20:20.301 02:43:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.301 02:43:45 -- common/autotest_common.sh@10 -- # set +x 00:20:20.867 02:43:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:21.125 [2024-07-11 02:43:46.068668] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 
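Note: this is the hot-removal step. bdev_raid_remove_base_bdev BaseBdev1 pulls a member out of the online raid1; the state dump above shows the array still "online" but degraded, with num_base_bdevs_discovered and num_base_bdevs_operational down from 4 to 3 and an all-zero-uuid null placeholder holding the vacated slot. Attaching the spare (the bdev_raid_add_base_bdev call just above) then starts a rebuild onto it. The three RPCs, condensed from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare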
00:20:21.125 [2024-07-11 02:43:46.068728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.125 [2024-07-11 02:43:46.072925] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d08030 00:20:21.125 [2024-07-11 02:43:46.074807] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.125 02:43:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.058 02:43:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.316 02:43:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:22.316 "name": "raid_bdev1", 00:20:22.316 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:22.316 "strip_size_kb": 0, 00:20:22.316 "state": "online", 00:20:22.316 "raid_level": "raid1", 00:20:22.316 "superblock": false, 00:20:22.316 "num_base_bdevs": 4, 00:20:22.316 "num_base_bdevs_discovered": 4, 00:20:22.316 "num_base_bdevs_operational": 4, 00:20:22.316 "process": { 00:20:22.317 "type": "rebuild", 00:20:22.317 "target": "spare", 00:20:22.317 "progress": { 00:20:22.317 "blocks": 24576, 00:20:22.317 "percent": 37 00:20:22.317 } 00:20:22.317 }, 00:20:22.317 "base_bdevs_list": [ 00:20:22.317 { 00:20:22.317 "name": "spare", 00:20:22.317 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:22.317 "is_configured": true, 00:20:22.317 "data_offset": 0, 00:20:22.317 "data_size": 65536 00:20:22.317 }, 00:20:22.317 { 00:20:22.317 "name": "BaseBdev2", 00:20:22.317 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:22.317 "is_configured": true, 00:20:22.317 "data_offset": 0, 00:20:22.317 "data_size": 65536 00:20:22.317 }, 00:20:22.317 { 00:20:22.317 "name": "BaseBdev3", 00:20:22.317 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:22.317 "is_configured": true, 00:20:22.317 "data_offset": 0, 00:20:22.317 "data_size": 65536 00:20:22.317 }, 00:20:22.317 { 00:20:22.317 "name": "BaseBdev4", 00:20:22.317 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:22.317 "is_configured": true, 00:20:22.317 "data_offset": 0, 00:20:22.317 "data_size": 65536 00:20:22.317 } 00:20:22.317 ] 00:20:22.317 }' 00:20:22.317 02:43:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:22.317 02:43:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.317 02:43:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:22.575 02:43:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.575 02:43:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:22.575 [2024-07-11 02:43:47.658832] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.833 [2024-07-11 02:43:47.683889] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:22.833 [2024-07-11 02:43:47.684014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
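Note: removing the rebuild target itself ("spare") while the rebuild is running exercises the abort path. The WARNING above, "Finished rebuild on raid bdev raid_bdev1: No such device", is that abort being reported, not a test failure; the array simply drops back to the degraded 3-of-4 layout, and the helper then asserts that no background process is left behind, equivalent to:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'   # must print "none"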
00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.833 02:43:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.833 "name": "raid_bdev1", 00:20:22.833 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:22.833 "strip_size_kb": 0, 00:20:22.833 "state": "online", 00:20:22.833 "raid_level": "raid1", 00:20:22.833 "superblock": false, 00:20:22.833 "num_base_bdevs": 4, 00:20:22.833 "num_base_bdevs_discovered": 3, 00:20:22.833 "num_base_bdevs_operational": 3, 00:20:22.833 "base_bdevs_list": [ 00:20:22.833 { 00:20:22.833 "name": null, 00:20:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.833 "is_configured": false, 00:20:22.833 "data_offset": 0, 00:20:22.833 "data_size": 65536 00:20:22.833 }, 00:20:22.833 { 00:20:22.833 "name": "BaseBdev2", 00:20:22.833 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:22.833 "is_configured": true, 00:20:22.833 "data_offset": 0, 00:20:22.833 "data_size": 65536 00:20:22.833 }, 00:20:22.833 { 00:20:22.834 "name": "BaseBdev3", 00:20:22.834 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:22.834 "is_configured": true, 00:20:22.834 "data_offset": 0, 00:20:22.834 "data_size": 65536 00:20:22.834 }, 00:20:22.834 { 00:20:22.834 "name": "BaseBdev4", 00:20:22.834 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:22.834 "is_configured": true, 00:20:22.834 "data_offset": 0, 00:20:22.834 "data_size": 65536 00:20:22.834 } 00:20:22.834 ] 00:20:22.834 }' 00:20:22.834 02:43:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.834 02:43:47 -- common/autotest_common.sh@10 -- # set +x 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:23.768 "name": "raid_bdev1", 00:20:23.768 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:23.768 "strip_size_kb": 0, 00:20:23.768 "state": "online", 00:20:23.768 "raid_level": "raid1", 00:20:23.768 "superblock": false, 00:20:23.768 "num_base_bdevs": 4, 00:20:23.768 
"num_base_bdevs_discovered": 3, 00:20:23.768 "num_base_bdevs_operational": 3, 00:20:23.768 "base_bdevs_list": [ 00:20:23.768 { 00:20:23.768 "name": null, 00:20:23.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.768 "is_configured": false, 00:20:23.768 "data_offset": 0, 00:20:23.768 "data_size": 65536 00:20:23.768 }, 00:20:23.768 { 00:20:23.768 "name": "BaseBdev2", 00:20:23.768 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:23.768 "is_configured": true, 00:20:23.768 "data_offset": 0, 00:20:23.768 "data_size": 65536 00:20:23.768 }, 00:20:23.768 { 00:20:23.768 "name": "BaseBdev3", 00:20:23.768 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:23.768 "is_configured": true, 00:20:23.768 "data_offset": 0, 00:20:23.768 "data_size": 65536 00:20:23.768 }, 00:20:23.768 { 00:20:23.768 "name": "BaseBdev4", 00:20:23.768 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:23.768 "is_configured": true, 00:20:23.768 "data_offset": 0, 00:20:23.768 "data_size": 65536 00:20:23.768 } 00:20:23.768 ] 00:20:23.768 }' 00:20:23.768 02:43:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:24.035 02:43:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:24.035 02:43:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:24.035 02:43:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:24.035 02:43:48 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.035 [2024-07-11 02:43:49.085005] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:24.035 [2024-07-11 02:43:49.085048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.035 [2024-07-11 02:43:49.089054] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d081d0 00:20:24.035 [2024-07-11 02:43:49.090908] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:24.035 02:43:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.424 "name": "raid_bdev1", 00:20:25.424 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:25.424 "strip_size_kb": 0, 00:20:25.424 "state": "online", 00:20:25.424 "raid_level": "raid1", 00:20:25.424 "superblock": false, 00:20:25.424 "num_base_bdevs": 4, 00:20:25.424 "num_base_bdevs_discovered": 4, 00:20:25.424 "num_base_bdevs_operational": 4, 00:20:25.424 "process": { 00:20:25.424 "type": "rebuild", 00:20:25.424 "target": "spare", 00:20:25.424 "progress": { 00:20:25.424 "blocks": 22528, 00:20:25.424 "percent": 34 00:20:25.424 } 00:20:25.424 }, 00:20:25.424 "base_bdevs_list": [ 00:20:25.424 { 00:20:25.424 "name": "spare", 00:20:25.424 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:25.424 "is_configured": true, 
00:20:25.424 "data_offset": 0, 00:20:25.424 "data_size": 65536 00:20:25.424 }, 00:20:25.424 { 00:20:25.424 "name": "BaseBdev2", 00:20:25.424 "uuid": "896d5196-316e-4796-9900-81f8e0b16f11", 00:20:25.424 "is_configured": true, 00:20:25.424 "data_offset": 0, 00:20:25.424 "data_size": 65536 00:20:25.424 }, 00:20:25.424 { 00:20:25.424 "name": "BaseBdev3", 00:20:25.424 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:25.424 "is_configured": true, 00:20:25.424 "data_offset": 0, 00:20:25.424 "data_size": 65536 00:20:25.424 }, 00:20:25.424 { 00:20:25.424 "name": "BaseBdev4", 00:20:25.424 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:25.424 "is_configured": true, 00:20:25.424 "data_offset": 0, 00:20:25.424 "data_size": 65536 00:20:25.424 } 00:20:25.424 ] 00:20:25.424 }' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:25.424 02:43:50 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:25.683 [2024-07-11 02:43:50.586227] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:25.683 [2024-07-11 02:43:50.598176] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d081d0 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.683 02:43:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.942 02:43:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.942 "name": "raid_bdev1", 00:20:25.942 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:25.942 "strip_size_kb": 0, 00:20:25.942 "state": "online", 00:20:25.942 "raid_level": "raid1", 00:20:25.942 "superblock": false, 00:20:25.942 "num_base_bdevs": 4, 00:20:25.942 "num_base_bdevs_discovered": 3, 00:20:25.942 "num_base_bdevs_operational": 3, 00:20:25.942 "process": { 00:20:25.942 "type": "rebuild", 00:20:25.942 "target": "spare", 00:20:25.942 "progress": { 00:20:25.942 "blocks": 34816, 00:20:25.942 "percent": 53 00:20:25.943 } 00:20:25.943 }, 00:20:25.943 "base_bdevs_list": [ 00:20:25.943 { 00:20:25.943 "name": "spare", 00:20:25.943 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:25.943 "is_configured": true, 00:20:25.943 "data_offset": 0, 00:20:25.943 "data_size": 65536 00:20:25.943 }, 00:20:25.943 { 00:20:25.943 "name": null, 
00:20:25.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.943 "is_configured": false, 00:20:25.943 "data_offset": 0, 00:20:25.943 "data_size": 65536 00:20:25.943 }, 00:20:25.943 { 00:20:25.943 "name": "BaseBdev3", 00:20:25.943 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:25.943 "is_configured": true, 00:20:25.943 "data_offset": 0, 00:20:25.943 "data_size": 65536 00:20:25.943 }, 00:20:25.943 { 00:20:25.943 "name": "BaseBdev4", 00:20:25.943 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:25.943 "is_configured": true, 00:20:25.943 "data_offset": 0, 00:20:25.943 "data_size": 65536 00:20:25.943 } 00:20:25.943 ] 00:20:25.943 }' 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@657 -- # local timeout=444 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.943 02:43:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.203 02:43:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.203 "name": "raid_bdev1", 00:20:26.203 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:26.203 "strip_size_kb": 0, 00:20:26.203 "state": "online", 00:20:26.203 "raid_level": "raid1", 00:20:26.203 "superblock": false, 00:20:26.203 "num_base_bdevs": 4, 00:20:26.203 "num_base_bdevs_discovered": 3, 00:20:26.203 "num_base_bdevs_operational": 3, 00:20:26.203 "process": { 00:20:26.203 "type": "rebuild", 00:20:26.203 "target": "spare", 00:20:26.203 "progress": { 00:20:26.203 "blocks": 43008, 00:20:26.203 "percent": 65 00:20:26.203 } 00:20:26.203 }, 00:20:26.203 "base_bdevs_list": [ 00:20:26.203 { 00:20:26.203 "name": "spare", 00:20:26.203 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:26.203 "is_configured": true, 00:20:26.203 "data_offset": 0, 00:20:26.203 "data_size": 65536 00:20:26.203 }, 00:20:26.203 { 00:20:26.203 "name": null, 00:20:26.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.203 "is_configured": false, 00:20:26.203 "data_offset": 0, 00:20:26.203 "data_size": 65536 00:20:26.203 }, 00:20:26.203 { 00:20:26.203 "name": "BaseBdev3", 00:20:26.203 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:26.203 "is_configured": true, 00:20:26.203 "data_offset": 0, 00:20:26.203 "data_size": 65536 00:20:26.203 }, 00:20:26.203 { 00:20:26.203 "name": "BaseBdev4", 00:20:26.203 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:26.203 "is_configured": true, 00:20:26.203 "data_offset": 0, 00:20:26.203 "data_size": 65536 00:20:26.203 } 00:20:26.203 ] 00:20:26.203 }' 00:20:26.203 02:43:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.203 02:43:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.203 
02:43:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.462 02:43:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.462 02:43:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:27.396 [2024-07-11 02:43:52.306900] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:27.396 [2024-07-11 02:43:52.306963] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:27.396 [2024-07-11 02:43:52.307046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.396 02:43:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.654 02:43:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.654 "name": "raid_bdev1", 00:20:27.654 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:27.654 "strip_size_kb": 0, 00:20:27.654 "state": "online", 00:20:27.654 "raid_level": "raid1", 00:20:27.654 "superblock": false, 00:20:27.654 "num_base_bdevs": 4, 00:20:27.654 "num_base_bdevs_discovered": 3, 00:20:27.654 "num_base_bdevs_operational": 3, 00:20:27.654 "base_bdevs_list": [ 00:20:27.654 { 00:20:27.654 "name": "spare", 00:20:27.654 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:27.654 "is_configured": true, 00:20:27.654 "data_offset": 0, 00:20:27.654 "data_size": 65536 00:20:27.654 }, 00:20:27.654 { 00:20:27.654 "name": null, 00:20:27.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.655 "is_configured": false, 00:20:27.655 "data_offset": 0, 00:20:27.655 "data_size": 65536 00:20:27.655 }, 00:20:27.655 { 00:20:27.655 "name": "BaseBdev3", 00:20:27.655 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:27.655 "is_configured": true, 00:20:27.655 "data_offset": 0, 00:20:27.655 "data_size": 65536 00:20:27.655 }, 00:20:27.655 { 00:20:27.655 "name": "BaseBdev4", 00:20:27.655 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:27.655 "is_configured": true, 00:20:27.655 "data_offset": 0, 00:20:27.655 "data_size": 65536 00:20:27.655 } 00:20:27.655 ] 00:20:27.655 }' 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@660 -- # break 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.655 02:43:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.913 "name": "raid_bdev1", 00:20:27.913 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:27.913 "strip_size_kb": 0, 00:20:27.913 "state": "online", 00:20:27.913 "raid_level": "raid1", 00:20:27.913 "superblock": false, 00:20:27.913 "num_base_bdevs": 4, 00:20:27.913 "num_base_bdevs_discovered": 3, 00:20:27.913 "num_base_bdevs_operational": 3, 00:20:27.913 "base_bdevs_list": [ 00:20:27.913 { 00:20:27.913 "name": "spare", 00:20:27.913 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:27.913 "is_configured": true, 00:20:27.913 "data_offset": 0, 00:20:27.913 "data_size": 65536 00:20:27.913 }, 00:20:27.913 { 00:20:27.913 "name": null, 00:20:27.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.913 "is_configured": false, 00:20:27.913 "data_offset": 0, 00:20:27.913 "data_size": 65536 00:20:27.913 }, 00:20:27.913 { 00:20:27.913 "name": "BaseBdev3", 00:20:27.913 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:27.913 "is_configured": true, 00:20:27.913 "data_offset": 0, 00:20:27.913 "data_size": 65536 00:20:27.913 }, 00:20:27.913 { 00:20:27.913 "name": "BaseBdev4", 00:20:27.913 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:27.913 "is_configured": true, 00:20:27.913 "data_offset": 0, 00:20:27.913 "data_size": 65536 00:20:27.913 } 00:20:27.913 ] 00:20:27.913 }' 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.913 02:43:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.914 02:43:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.172 02:43:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.172 "name": "raid_bdev1", 00:20:28.172 "uuid": "8ac2856f-9c74-440d-bf97-1e7088268d09", 00:20:28.172 "strip_size_kb": 0, 00:20:28.172 "state": "online", 00:20:28.172 "raid_level": "raid1", 00:20:28.172 "superblock": false, 00:20:28.172 "num_base_bdevs": 4, 00:20:28.172 "num_base_bdevs_discovered": 3, 00:20:28.172 "num_base_bdevs_operational": 3, 00:20:28.172 "base_bdevs_list": [ 00:20:28.172 { 00:20:28.172 "name": "spare", 00:20:28.172 "uuid": "196b4684-d094-5a31-b910-350430daa0d4", 00:20:28.172 "is_configured": true, 00:20:28.172 
"data_offset": 0, 00:20:28.172 "data_size": 65536 00:20:28.172 }, 00:20:28.172 { 00:20:28.172 "name": null, 00:20:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.172 "is_configured": false, 00:20:28.172 "data_offset": 0, 00:20:28.172 "data_size": 65536 00:20:28.172 }, 00:20:28.172 { 00:20:28.172 "name": "BaseBdev3", 00:20:28.172 "uuid": "c332a237-1d05-47a4-805f-5a842b18d39b", 00:20:28.172 "is_configured": true, 00:20:28.172 "data_offset": 0, 00:20:28.172 "data_size": 65536 00:20:28.172 }, 00:20:28.172 { 00:20:28.172 "name": "BaseBdev4", 00:20:28.172 "uuid": "d34c12e2-fa35-4d62-bf0e-db05fd99d0fb", 00:20:28.172 "is_configured": true, 00:20:28.172 "data_offset": 0, 00:20:28.172 "data_size": 65536 00:20:28.172 } 00:20:28.172 ] 00:20:28.172 }' 00:20:28.172 02:43:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.172 02:43:53 -- common/autotest_common.sh@10 -- # set +x 00:20:29.106 02:43:53 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:29.106 [2024-07-11 02:43:54.027572] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.106 [2024-07-11 02:43:54.027607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.106 [2024-07-11 02:43:54.027708] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.106 [2024-07-11 02:43:54.027791] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.106 [2024-07-11 02:43:54.027802] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:29.106 02:43:54 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.106 02:43:54 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:29.363 02:43:54 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:29.363 02:43:54 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:29.363 02:43:54 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@12 -- # local i 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.363 02:43:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:29.621 /dev/nbd0 00:20:29.621 02:43:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:29.621 02:43:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:29.621 02:43:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:29.621 02:43:54 -- common/autotest_common.sh@857 -- # local i 00:20:29.621 02:43:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:29.621 02:43:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:29.621 02:43:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:29.621 02:43:54 -- common/autotest_common.sh@861 -- # break 00:20:29.621 02:43:54 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:29.621 02:43:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:29.621 02:43:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.621 1+0 records in 00:20:29.621 1+0 records out 00:20:29.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004984 s, 8.2 MB/s 00:20:29.621 02:43:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.621 02:43:54 -- common/autotest_common.sh@874 -- # size=4096 00:20:29.621 02:43:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.621 02:43:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:29.621 02:43:54 -- common/autotest_common.sh@877 -- # return 0 00:20:29.621 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.621 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.621 02:43:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:29.879 /dev/nbd1 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:29.879 02:43:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:29.879 02:43:54 -- common/autotest_common.sh@857 -- # local i 00:20:29.879 02:43:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:29.879 02:43:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:29.879 02:43:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:29.879 02:43:54 -- common/autotest_common.sh@861 -- # break 00:20:29.879 02:43:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:29.879 02:43:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:29.879 02:43:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.879 1+0 records in 00:20:29.879 1+0 records out 00:20:29.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517131 s, 7.9 MB/s 00:20:29.879 02:43:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.879 02:43:54 -- common/autotest_common.sh@874 -- # size=4096 00:20:29.879 02:43:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.879 02:43:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:29.879 02:43:54 -- common/autotest_common.sh@877 -- # return 0 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.879 02:43:54 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:29.879 02:43:54 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@51 -- # local i 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.879 02:43:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:30.137 
02:43:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.137 02:43:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@41 -- # break 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.395 02:43:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@41 -- # break 00:20:30.654 02:43:55 -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.654 02:43:55 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:30.654 02:43:55 -- bdev/bdev_raid.sh@709 -- # killprocess 137406 00:20:30.654 02:43:55 -- common/autotest_common.sh@926 -- # '[' -z 137406 ']' 00:20:30.654 02:43:55 -- common/autotest_common.sh@930 -- # kill -0 137406 00:20:30.654 02:43:55 -- common/autotest_common.sh@931 -- # uname 00:20:30.654 02:43:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:30.654 02:43:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137406 00:20:30.654 killing process with pid 137406 00:20:30.654 02:43:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:30.654 02:43:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:30.654 02:43:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137406' 00:20:30.654 02:43:55 -- common/autotest_common.sh@945 -- # kill 137406 00:20:30.654 02:43:55 -- common/autotest_common.sh@950 -- # wait 137406 00:20:30.654 Received shutdown signal, test time was about 60.000000 seconds 00:20:30.654 00:20:30.654 Latency(us) 00:20:30.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.654 =================================================================================================================== 00:20:30.654 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.654 [2024-07-11 02:43:55.630459] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.654 [2024-07-11 02:43:55.674861] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:30.913 00:20:30.913 real 0m21.313s 00:20:30.913 user 0m29.786s 00:20:30.913 sys 0m4.456s 
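Note: the closing data-integrity check. After the rebuild completes, raid_bdev1 is deleted (bdev_raid_get_bdevs then reports length 0), and the test exports the originally removed member and the rebuilt spare side by side over NBD and compares them byte for byte; since raid1 mirrors all members and data_offset is 0, identical contents prove the rebuild copied every block. Condensed from the trace above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1   # exits non-zero at the first differing byte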
00:20:30.913 02:43:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.913 ************************************ 00:20:30.913 END TEST raid_rebuild_test 00:20:30.913 ************************************ 00:20:30.913 02:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:20:30.913 02:43:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:30.913 02:43:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:30.913 02:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:30.913 ************************************ 00:20:30.913 START TEST raid_rebuild_test_sb 00:20:30.913 ************************************ 00:20:30.913 02:43:55 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=137995 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137995 /var/tmp/spdk-raid.sock 00:20:30.913 02:43:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:30.913 02:43:55 -- common/autotest_common.sh@819 -- # '[' -z 137995 ']' 00:20:30.913 02:43:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:30.913 02:43:55 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:30.913 02:43:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:30.913 02:43:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:30.913 02:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:31.171 [2024-07-11 02:43:56.023861] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:31.171 [2024-07-11 02:43:56.024137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137995 ] 00:20:31.171 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:31.171 Zero copy mechanism will not be used. 00:20:31.171 [2024-07-11 02:43:56.168712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.171 [2024-07-11 02:43:56.223695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.429 [2024-07-11 02:43:56.273730] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.996 02:43:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:31.996 02:43:56 -- common/autotest_common.sh@852 -- # return 0 00:20:31.996 02:43:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:31.996 02:43:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:31.996 02:43:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:31.996 BaseBdev1_malloc 00:20:31.996 02:43:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:32.255 [2024-07-11 02:43:57.248805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:32.255 [2024-07-11 02:43:57.248903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.255 [2024-07-11 02:43:57.248940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:20:32.255 [2024-07-11 02:43:57.248976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.255 [2024-07-11 02:43:57.251097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.255 [2024-07-11 02:43:57.251159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.255 BaseBdev1 00:20:32.255 02:43:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:32.255 02:43:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:32.255 02:43:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:32.514 BaseBdev2_malloc 00:20:32.514 02:43:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:32.772 [2024-07-11 02:43:57.623285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:32.772 [2024-07-11 02:43:57.623406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.772 
[2024-07-11 02:43:57.623443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:32.772 [2024-07-11 02:43:57.623480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.772 [2024-07-11 02:43:57.625462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.772 [2024-07-11 02:43:57.625523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:32.772 BaseBdev2 00:20:32.772 02:43:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:32.772 02:43:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:32.772 02:43:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:32.772 BaseBdev3_malloc 00:20:32.772 02:43:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:33.030 [2024-07-11 02:43:58.104071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:33.030 [2024-07-11 02:43:58.104195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.030 [2024-07-11 02:43:58.104238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:33.030 [2024-07-11 02:43:58.104290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.030 [2024-07-11 02:43:58.106650] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.030 [2024-07-11 02:43:58.106715] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:33.030 BaseBdev3 00:20:33.030 02:43:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:33.030 02:43:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:33.030 02:43:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:33.289 BaseBdev4_malloc 00:20:33.289 02:43:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:33.548 [2024-07-11 02:43:58.566008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:33.548 [2024-07-11 02:43:58.566099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.548 [2024-07-11 02:43:58.566130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:33.548 [2024-07-11 02:43:58.566165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.548 [2024-07-11 02:43:58.568394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.548 [2024-07-11 02:43:58.568442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:33.548 BaseBdev4 00:20:33.548 02:43:58 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:33.806 spare_malloc 00:20:33.806 02:43:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:34.064 spare_delay 00:20:34.064 02:43:58 -- 
bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:34.064 [2024-07-11 02:43:59.152565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.064 [2024-07-11 02:43:59.152681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.064 [2024-07-11 02:43:59.152715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:34.065 [2024-07-11 02:43:59.152772] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.065 [2024-07-11 02:43:59.155104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.065 [2024-07-11 02:43:59.155170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.324 spare 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:34.324 [2024-07-11 02:43:59.344704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.324 [2024-07-11 02:43:59.346698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.324 [2024-07-11 02:43:59.346783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.324 [2024-07-11 02:43:59.346841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:34.324 [2024-07-11 02:43:59.347082] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:34.324 [2024-07-11 02:43:59.347114] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:34.324 [2024-07-11 02:43:59.347255] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:20:34.324 [2024-07-11 02:43:59.347766] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:34.324 [2024-07-11 02:43:59.347788] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:34.324 [2024-07-11 02:43:59.347935] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.324 02:43:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.584 02:43:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.584 "name": "raid_bdev1", 00:20:34.584 "uuid": 
"4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:34.584 "strip_size_kb": 0, 00:20:34.584 "state": "online", 00:20:34.584 "raid_level": "raid1", 00:20:34.584 "superblock": true, 00:20:34.584 "num_base_bdevs": 4, 00:20:34.584 "num_base_bdevs_discovered": 4, 00:20:34.584 "num_base_bdevs_operational": 4, 00:20:34.584 "base_bdevs_list": [ 00:20:34.584 { 00:20:34.584 "name": "BaseBdev1", 00:20:34.584 "uuid": "b92c4b9c-0bf5-52cf-96b5-281330b86fbc", 00:20:34.584 "is_configured": true, 00:20:34.584 "data_offset": 2048, 00:20:34.584 "data_size": 63488 00:20:34.584 }, 00:20:34.584 { 00:20:34.584 "name": "BaseBdev2", 00:20:34.584 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:34.584 "is_configured": true, 00:20:34.584 "data_offset": 2048, 00:20:34.584 "data_size": 63488 00:20:34.584 }, 00:20:34.584 { 00:20:34.584 "name": "BaseBdev3", 00:20:34.584 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:34.584 "is_configured": true, 00:20:34.584 "data_offset": 2048, 00:20:34.584 "data_size": 63488 00:20:34.584 }, 00:20:34.584 { 00:20:34.584 "name": "BaseBdev4", 00:20:34.584 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:34.584 "is_configured": true, 00:20:34.584 "data_offset": 2048, 00:20:34.584 "data_size": 63488 00:20:34.584 } 00:20:34.584 ] 00:20:34.584 }' 00:20:34.584 02:43:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.584 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:20:35.520 02:44:00 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:35.520 02:44:00 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:35.520 [2024-07-11 02:44:00.445081] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.520 02:44:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:35.520 02:44:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.520 02:44:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:35.778 02:44:00 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:35.778 02:44:00 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:35.778 02:44:00 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:35.778 02:44:00 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@12 -- # local i 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:35.779 02:44:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:36.037 [2024-07-11 02:44:00.901011] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:20:36.037 /dev/nbd0 00:20:36.037 02:44:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:36.037 02:44:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:36.037 02:44:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:36.037 02:44:00 -- common/autotest_common.sh@857 -- # local i 00:20:36.037 
02:44:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:36.037 02:44:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:36.037 02:44:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:36.037 02:44:00 -- common/autotest_common.sh@861 -- # break 00:20:36.037 02:44:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:36.037 02:44:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:36.037 02:44:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:36.037 1+0 records in 00:20:36.037 1+0 records out 00:20:36.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355325 s, 11.5 MB/s 00:20:36.037 02:44:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.037 02:44:00 -- common/autotest_common.sh@874 -- # size=4096 00:20:36.038 02:44:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.038 02:44:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:36.038 02:44:00 -- common/autotest_common.sh@877 -- # return 0 00:20:36.038 02:44:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:36.038 02:44:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.038 02:44:00 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:36.038 02:44:00 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:36.038 02:44:00 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:41.305 63488+0 records in 00:20:41.305 63488+0 records out 00:20:41.305 32505856 bytes (33 MB, 31 MiB) copied, 5.39643 s, 6.0 MB/s 00:20:41.305 02:44:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@51 -- # local i 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:41.305 02:44:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:41.564 02:44:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:41.564 [2024-07-11 02:44:06.630925] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.823 02:44:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:41.823 02:44:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.823 02:44:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:41.823 02:44:06 -- bdev/nbd_common.sh@41 -- # break 00:20:41.823 02:44:06 -- bdev/nbd_common.sh@45 -- # return 0 00:20:41.823 02:44:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:41.823 [2024-07-11 02:44:06.910560] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.081 02:44:06 -- 
bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.081 02:44:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.081 02:44:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.081 "name": "raid_bdev1", 00:20:42.081 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:42.081 "strip_size_kb": 0, 00:20:42.081 "state": "online", 00:20:42.081 "raid_level": "raid1", 00:20:42.081 "superblock": true, 00:20:42.081 "num_base_bdevs": 4, 00:20:42.081 "num_base_bdevs_discovered": 3, 00:20:42.081 "num_base_bdevs_operational": 3, 00:20:42.081 "base_bdevs_list": [ 00:20:42.081 { 00:20:42.081 "name": null, 00:20:42.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.081 "is_configured": false, 00:20:42.081 "data_offset": 2048, 00:20:42.081 "data_size": 63488 00:20:42.081 }, 00:20:42.081 { 00:20:42.081 "name": "BaseBdev2", 00:20:42.081 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:42.081 "is_configured": true, 00:20:42.081 "data_offset": 2048, 00:20:42.081 "data_size": 63488 00:20:42.081 }, 00:20:42.081 { 00:20:42.081 "name": "BaseBdev3", 00:20:42.081 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:42.081 "is_configured": true, 00:20:42.082 "data_offset": 2048, 00:20:42.082 "data_size": 63488 00:20:42.082 }, 00:20:42.082 { 00:20:42.082 "name": "BaseBdev4", 00:20:42.082 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:42.082 "is_configured": true, 00:20:42.082 "data_offset": 2048, 00:20:42.082 "data_size": 63488 00:20:42.082 } 00:20:42.082 ] 00:20:42.082 }' 00:20:42.082 02:44:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.082 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:20:43.016 02:44:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.016 [2024-07-11 02:44:08.058793] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:43.016 [2024-07-11 02:44:08.058844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.016 [2024-07-11 02:44:08.062916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca1b00 00:20:43.016 [2024-07-11 02:44:08.064796] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.016 02:44:08 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:44.391 02:44:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.391 02:44:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.391 02:44:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:44.392 
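(The rebuild exercised here reduces to a handful of RPCs; a minimal standalone repro, with the socket path and bdev names taken from the trace and the polling loop added for illustration, would be:)

    # drop one base bdev, attach the spare, then poll the rebuild target until it clears
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    while [ "$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"')" = spare ]; do
        sleep 1    # rebuild still running against the spare
    done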
02:44:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:44.392 "name": "raid_bdev1", 00:20:44.392 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:44.392 "strip_size_kb": 0, 00:20:44.392 "state": "online", 00:20:44.392 "raid_level": "raid1", 00:20:44.392 "superblock": true, 00:20:44.392 "num_base_bdevs": 4, 00:20:44.392 "num_base_bdevs_discovered": 4, 00:20:44.392 "num_base_bdevs_operational": 4, 00:20:44.392 "process": { 00:20:44.392 "type": "rebuild", 00:20:44.392 "target": "spare", 00:20:44.392 "progress": { 00:20:44.392 "blocks": 24576, 00:20:44.392 "percent": 38 00:20:44.392 } 00:20:44.392 }, 00:20:44.392 "base_bdevs_list": [ 00:20:44.392 { 00:20:44.392 "name": "spare", 00:20:44.392 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:44.392 "is_configured": true, 00:20:44.392 "data_offset": 2048, 00:20:44.392 "data_size": 63488 00:20:44.392 }, 00:20:44.392 { 00:20:44.392 "name": "BaseBdev2", 00:20:44.392 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:44.392 "is_configured": true, 00:20:44.392 "data_offset": 2048, 00:20:44.392 "data_size": 63488 00:20:44.392 }, 00:20:44.392 { 00:20:44.392 "name": "BaseBdev3", 00:20:44.392 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:44.392 "is_configured": true, 00:20:44.392 "data_offset": 2048, 00:20:44.392 "data_size": 63488 00:20:44.392 }, 00:20:44.392 { 00:20:44.392 "name": "BaseBdev4", 00:20:44.392 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:44.392 "is_configured": true, 00:20:44.392 "data_offset": 2048, 00:20:44.392 "data_size": 63488 00:20:44.392 } 00:20:44.392 ] 00:20:44.392 }' 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.392 02:44:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:44.651 [2024-07-11 02:44:09.656446] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.651 [2024-07-11 02:44:09.673966] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.651 [2024-07-11 02:44:09.674037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.651 02:44:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.910 02:44:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.910 "name": "raid_bdev1", 00:20:44.910 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:44.910 "strip_size_kb": 0, 00:20:44.910 "state": "online", 00:20:44.910 "raid_level": "raid1", 00:20:44.910 "superblock": true, 00:20:44.910 "num_base_bdevs": 4, 00:20:44.910 "num_base_bdevs_discovered": 3, 00:20:44.910 "num_base_bdevs_operational": 3, 00:20:44.910 "base_bdevs_list": [ 00:20:44.910 { 00:20:44.910 "name": null, 00:20:44.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.910 "is_configured": false, 00:20:44.910 "data_offset": 2048, 00:20:44.910 "data_size": 63488 00:20:44.910 }, 00:20:44.910 { 00:20:44.910 "name": "BaseBdev2", 00:20:44.910 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:44.910 "is_configured": true, 00:20:44.910 "data_offset": 2048, 00:20:44.910 "data_size": 63488 00:20:44.910 }, 00:20:44.910 { 00:20:44.910 "name": "BaseBdev3", 00:20:44.910 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:44.910 "is_configured": true, 00:20:44.910 "data_offset": 2048, 00:20:44.910 "data_size": 63488 00:20:44.910 }, 00:20:44.910 { 00:20:44.910 "name": "BaseBdev4", 00:20:44.910 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:44.910 "is_configured": true, 00:20:44.910 "data_offset": 2048, 00:20:44.910 "data_size": 63488 00:20:44.910 } 00:20:44.910 ] 00:20:44.910 }' 00:20:44.910 02:44:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.910 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.845 "name": "raid_bdev1", 00:20:45.845 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:45.845 "strip_size_kb": 0, 00:20:45.845 "state": "online", 00:20:45.845 "raid_level": "raid1", 00:20:45.845 "superblock": true, 00:20:45.845 "num_base_bdevs": 4, 00:20:45.845 "num_base_bdevs_discovered": 3, 00:20:45.845 "num_base_bdevs_operational": 3, 00:20:45.845 "base_bdevs_list": [ 00:20:45.845 { 00:20:45.845 "name": null, 00:20:45.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.845 "is_configured": false, 00:20:45.845 "data_offset": 2048, 00:20:45.845 "data_size": 63488 00:20:45.845 }, 00:20:45.845 { 00:20:45.845 "name": "BaseBdev2", 00:20:45.845 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:45.845 "is_configured": true, 00:20:45.845 "data_offset": 2048, 00:20:45.845 "data_size": 63488 00:20:45.845 }, 00:20:45.845 { 00:20:45.845 "name": "BaseBdev3", 00:20:45.845 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:45.845 "is_configured": 
true, 00:20:45.845 "data_offset": 2048, 00:20:45.845 "data_size": 63488 00:20:45.845 }, 00:20:45.845 { 00:20:45.845 "name": "BaseBdev4", 00:20:45.845 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:45.845 "is_configured": true, 00:20:45.845 "data_offset": 2048, 00:20:45.845 "data_size": 63488 00:20:45.845 } 00:20:45.845 ] 00:20:45.845 }' 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:45.845 02:44:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.104 [2024-07-11 02:44:11.162581] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:46.104 [2024-07-11 02:44:11.162620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.104 [2024-07-11 02:44:11.166651] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca1ca0 00:20:46.104 [2024-07-11 02:44:11.168505] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.104 02:44:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:47.480 "name": "raid_bdev1", 00:20:47.480 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:47.480 "strip_size_kb": 0, 00:20:47.480 "state": "online", 00:20:47.480 "raid_level": "raid1", 00:20:47.480 "superblock": true, 00:20:47.480 "num_base_bdevs": 4, 00:20:47.480 "num_base_bdevs_discovered": 4, 00:20:47.480 "num_base_bdevs_operational": 4, 00:20:47.480 "process": { 00:20:47.480 "type": "rebuild", 00:20:47.480 "target": "spare", 00:20:47.480 "progress": { 00:20:47.480 "blocks": 24576, 00:20:47.480 "percent": 38 00:20:47.480 } 00:20:47.480 }, 00:20:47.480 "base_bdevs_list": [ 00:20:47.480 { 00:20:47.480 "name": "spare", 00:20:47.480 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:47.480 "is_configured": true, 00:20:47.480 "data_offset": 2048, 00:20:47.480 "data_size": 63488 00:20:47.480 }, 00:20:47.480 { 00:20:47.480 "name": "BaseBdev2", 00:20:47.480 "uuid": "487b4cf6-9512-54de-a72d-1e48797a6873", 00:20:47.480 "is_configured": true, 00:20:47.480 "data_offset": 2048, 00:20:47.480 "data_size": 63488 00:20:47.480 }, 00:20:47.480 { 00:20:47.480 "name": "BaseBdev3", 00:20:47.480 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:47.480 "is_configured": true, 00:20:47.480 "data_offset": 2048, 00:20:47.480 "data_size": 63488 00:20:47.480 }, 00:20:47.480 { 00:20:47.480 "name": "BaseBdev4", 00:20:47.480 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:47.480 "is_configured": true, 00:20:47.480 
"data_offset": 2048, 00:20:47.480 "data_size": 63488 00:20:47.480 } 00:20:47.480 ] 00:20:47.480 }' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:47.480 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:47.480 02:44:12 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:47.738 [2024-07-11 02:44:12.711631] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.738 [2024-07-11 02:44:12.776379] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca1ca0 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.997 02:44:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.255 "name": "raid_bdev1", 00:20:48.255 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:48.255 "strip_size_kb": 0, 00:20:48.255 "state": "online", 00:20:48.255 "raid_level": "raid1", 00:20:48.255 "superblock": true, 00:20:48.255 "num_base_bdevs": 4, 00:20:48.255 "num_base_bdevs_discovered": 3, 00:20:48.255 "num_base_bdevs_operational": 3, 00:20:48.255 "process": { 00:20:48.255 "type": "rebuild", 00:20:48.255 "target": "spare", 00:20:48.255 "progress": { 00:20:48.255 "blocks": 38912, 00:20:48.255 "percent": 61 00:20:48.255 } 00:20:48.255 }, 00:20:48.255 "base_bdevs_list": [ 00:20:48.255 { 00:20:48.255 "name": "spare", 00:20:48.255 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:48.255 "is_configured": true, 00:20:48.255 "data_offset": 2048, 00:20:48.255 "data_size": 63488 00:20:48.255 }, 00:20:48.255 { 00:20:48.255 "name": null, 00:20:48.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.255 "is_configured": false, 00:20:48.255 "data_offset": 2048, 00:20:48.255 "data_size": 63488 00:20:48.255 }, 00:20:48.255 { 00:20:48.255 "name": "BaseBdev3", 00:20:48.255 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:48.255 "is_configured": true, 00:20:48.255 "data_offset": 2048, 00:20:48.255 "data_size": 63488 00:20:48.255 }, 00:20:48.255 { 00:20:48.255 "name": "BaseBdev4", 00:20:48.255 "uuid": 
"58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:48.255 "is_configured": true, 00:20:48.255 "data_offset": 2048, 00:20:48.255 "data_size": 63488 00:20:48.255 } 00:20:48.255 ] 00:20:48.255 }' 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@657 -- # local timeout=467 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.255 02:44:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.514 "name": "raid_bdev1", 00:20:48.514 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:48.514 "strip_size_kb": 0, 00:20:48.514 "state": "online", 00:20:48.514 "raid_level": "raid1", 00:20:48.514 "superblock": true, 00:20:48.514 "num_base_bdevs": 4, 00:20:48.514 "num_base_bdevs_discovered": 3, 00:20:48.514 "num_base_bdevs_operational": 3, 00:20:48.514 "process": { 00:20:48.514 "type": "rebuild", 00:20:48.514 "target": "spare", 00:20:48.514 "progress": { 00:20:48.514 "blocks": 45056, 00:20:48.514 "percent": 70 00:20:48.514 } 00:20:48.514 }, 00:20:48.514 "base_bdevs_list": [ 00:20:48.514 { 00:20:48.514 "name": "spare", 00:20:48.514 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:48.514 "is_configured": true, 00:20:48.514 "data_offset": 2048, 00:20:48.514 "data_size": 63488 00:20:48.514 }, 00:20:48.514 { 00:20:48.514 "name": null, 00:20:48.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.514 "is_configured": false, 00:20:48.514 "data_offset": 2048, 00:20:48.514 "data_size": 63488 00:20:48.514 }, 00:20:48.514 { 00:20:48.514 "name": "BaseBdev3", 00:20:48.514 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:48.514 "is_configured": true, 00:20:48.514 "data_offset": 2048, 00:20:48.514 "data_size": 63488 00:20:48.514 }, 00:20:48.514 { 00:20:48.514 "name": "BaseBdev4", 00:20:48.514 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:48.514 "is_configured": true, 00:20:48.514 "data_offset": 2048, 00:20:48.514 "data_size": 63488 00:20:48.514 } 00:20:48.514 ] 00:20:48.514 }' 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.514 02:44:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:49.446 [2024-07-11 02:44:14.284474] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:49.446 [2024-07-11 02:44:14.284568] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild 
on raid bdev raid_bdev1 00:20:49.446 [2024-07-11 02:44:14.284708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.704 02:44:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.962 "name": "raid_bdev1", 00:20:49.962 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:49.962 "strip_size_kb": 0, 00:20:49.962 "state": "online", 00:20:49.962 "raid_level": "raid1", 00:20:49.962 "superblock": true, 00:20:49.962 "num_base_bdevs": 4, 00:20:49.962 "num_base_bdevs_discovered": 3, 00:20:49.962 "num_base_bdevs_operational": 3, 00:20:49.962 "base_bdevs_list": [ 00:20:49.962 { 00:20:49.962 "name": "spare", 00:20:49.962 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:49.962 "is_configured": true, 00:20:49.962 "data_offset": 2048, 00:20:49.962 "data_size": 63488 00:20:49.962 }, 00:20:49.962 { 00:20:49.962 "name": null, 00:20:49.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.962 "is_configured": false, 00:20:49.962 "data_offset": 2048, 00:20:49.962 "data_size": 63488 00:20:49.962 }, 00:20:49.962 { 00:20:49.962 "name": "BaseBdev3", 00:20:49.962 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:49.962 "is_configured": true, 00:20:49.962 "data_offset": 2048, 00:20:49.962 "data_size": 63488 00:20:49.962 }, 00:20:49.962 { 00:20:49.962 "name": "BaseBdev4", 00:20:49.962 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:49.962 "is_configured": true, 00:20:49.962 "data_offset": 2048, 00:20:49.962 "data_size": 63488 00:20:49.962 } 00:20:49.962 ] 00:20:49.962 }' 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@660 -- # break 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.962 02:44:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:49.963 02:44:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:49.963 02:44:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.963 02:44:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.963 02:44:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.222 "name": "raid_bdev1", 00:20:50.222 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:50.222 "strip_size_kb": 0, 00:20:50.222 "state": "online", 00:20:50.222 "raid_level": "raid1", 
00:20:50.222 "superblock": true, 00:20:50.222 "num_base_bdevs": 4, 00:20:50.222 "num_base_bdevs_discovered": 3, 00:20:50.222 "num_base_bdevs_operational": 3, 00:20:50.222 "base_bdevs_list": [ 00:20:50.222 { 00:20:50.222 "name": "spare", 00:20:50.222 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:50.222 "is_configured": true, 00:20:50.222 "data_offset": 2048, 00:20:50.222 "data_size": 63488 00:20:50.222 }, 00:20:50.222 { 00:20:50.222 "name": null, 00:20:50.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.222 "is_configured": false, 00:20:50.222 "data_offset": 2048, 00:20:50.222 "data_size": 63488 00:20:50.222 }, 00:20:50.222 { 00:20:50.222 "name": "BaseBdev3", 00:20:50.222 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:50.222 "is_configured": true, 00:20:50.222 "data_offset": 2048, 00:20:50.222 "data_size": 63488 00:20:50.222 }, 00:20:50.222 { 00:20:50.222 "name": "BaseBdev4", 00:20:50.222 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:50.222 "is_configured": true, 00:20:50.222 "data_offset": 2048, 00:20:50.222 "data_size": 63488 00:20:50.222 } 00:20:50.222 ] 00:20:50.222 }' 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.222 02:44:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.791 02:44:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.791 "name": "raid_bdev1", 00:20:50.791 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:50.791 "strip_size_kb": 0, 00:20:50.791 "state": "online", 00:20:50.791 "raid_level": "raid1", 00:20:50.791 "superblock": true, 00:20:50.791 "num_base_bdevs": 4, 00:20:50.791 "num_base_bdevs_discovered": 3, 00:20:50.791 "num_base_bdevs_operational": 3, 00:20:50.791 "base_bdevs_list": [ 00:20:50.791 { 00:20:50.791 "name": "spare", 00:20:50.791 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:50.791 "is_configured": true, 00:20:50.791 "data_offset": 2048, 00:20:50.791 "data_size": 63488 00:20:50.791 }, 00:20:50.791 { 00:20:50.791 "name": null, 00:20:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.791 "is_configured": false, 00:20:50.791 "data_offset": 2048, 00:20:50.791 "data_size": 63488 00:20:50.792 }, 00:20:50.792 { 00:20:50.792 "name": "BaseBdev3", 00:20:50.792 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:50.792 "is_configured": true, 00:20:50.792 
"data_offset": 2048, 00:20:50.792 "data_size": 63488 00:20:50.792 }, 00:20:50.792 { 00:20:50.792 "name": "BaseBdev4", 00:20:50.792 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:50.792 "is_configured": true, 00:20:50.792 "data_offset": 2048, 00:20:50.792 "data_size": 63488 00:20:50.792 } 00:20:50.792 ] 00:20:50.792 }' 00:20:50.792 02:44:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.792 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:20:51.360 02:44:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:51.618 [2024-07-11 02:44:16.494799] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.618 [2024-07-11 02:44:16.494834] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.618 [2024-07-11 02:44:16.494942] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.618 [2024-07-11 02:44:16.495029] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.618 [2024-07-11 02:44:16.495074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:51.618 02:44:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.618 02:44:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:51.876 02:44:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:51.876 02:44:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:51.876 02:44:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@12 -- # local i 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:51.876 02:44:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:52.135 /dev/nbd0 00:20:52.135 02:44:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.135 02:44:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.135 02:44:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:52.135 02:44:17 -- common/autotest_common.sh@857 -- # local i 00:20:52.135 02:44:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:52.135 02:44:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:52.135 02:44:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:52.135 02:44:17 -- common/autotest_common.sh@861 -- # break 00:20:52.135 02:44:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:52.135 02:44:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:52.135 02:44:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.135 1+0 records in 00:20:52.135 1+0 records out 00:20:52.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041982 s, 9.8 MB/s 00:20:52.135 02:44:17 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.135 02:44:17 -- common/autotest_common.sh@874 -- # size=4096 00:20:52.135 02:44:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.135 02:44:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:52.135 02:44:17 -- common/autotest_common.sh@877 -- # return 0 00:20:52.135 02:44:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.135 02:44:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.135 02:44:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:52.435 /dev/nbd1 00:20:52.435 02:44:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:52.435 02:44:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:52.435 02:44:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:52.435 02:44:17 -- common/autotest_common.sh@857 -- # local i 00:20:52.435 02:44:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:52.435 02:44:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:52.435 02:44:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:52.435 02:44:17 -- common/autotest_common.sh@861 -- # break 00:20:52.435 02:44:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:52.435 02:44:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:52.435 02:44:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.435 1+0 records in 00:20:52.435 1+0 records out 00:20:52.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040835 s, 10.0 MB/s 00:20:52.436 02:44:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.436 02:44:17 -- common/autotest_common.sh@874 -- # size=4096 00:20:52.436 02:44:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.436 02:44:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:52.436 02:44:17 -- common/autotest_common.sh@877 -- # return 0 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.436 02:44:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:52.436 02:44:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@51 -- # local i 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.436 02:44:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:52.693 02:44:17 
-- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@41 -- # break 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.693 02:44:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:52.952 02:44:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:53.210 02:44:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:53.210 02:44:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.210 02:44:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:53.210 02:44:18 -- bdev/nbd_common.sh@41 -- # break 00:20:53.210 02:44:18 -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.210 02:44:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:53.210 02:44:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:53.210 02:44:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:53.210 02:44:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:53.468 [2024-07-11 02:44:18.522141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:53.468 [2024-07-11 02:44:18.522225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.468 [2024-07-11 02:44:18.522269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:53.468 [2024-07-11 02:44:18.522292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.468 [2024-07-11 02:44:18.524322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.468 [2024-07-11 02:44:18.524379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:53.468 [2024-07-11 02:44:18.524474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:53.468 [2024-07-11 02:44:18.524545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:53.468 BaseBdev1 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@696 -- # continue 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:20:53.468 02:44:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:20:53.726 02:44:18 -- bdev/bdev_raid.sh@699 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:53.985 [2024-07-11 02:44:18.946264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:53.985 [2024-07-11 02:44:18.946355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.985 [2024-07-11 02:44:18.946390] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:53.985 [2024-07-11 02:44:18.946446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.985 [2024-07-11 02:44:18.946845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.985 [2024-07-11 02:44:18.946902] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:53.985 [2024-07-11 02:44:18.946969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:20:53.985 [2024-07-11 02:44:18.946982] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:20:53.985 [2024-07-11 02:44:18.946990] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.985 [2024-07-11 02:44:18.947021] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:20:53.985 [2024-07-11 02:44:18.947062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:53.985 BaseBdev3 00:20:53.985 02:44:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:53.985 02:44:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:20:53.985 02:44:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:20:54.244 02:44:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:54.503 [2024-07-11 02:44:19.337548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:54.503 [2024-07-11 02:44:19.337675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.503 [2024-07-11 02:44:19.337718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:54.503 [2024-07-11 02:44:19.337745] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.503 [2024-07-11 02:44:19.338229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.503 [2024-07-11 02:44:19.338290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:54.503 [2024-07-11 02:44:19.338385] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:20:54.503 [2024-07-11 02:44:19.338435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:54.503 BaseBdev4 00:20:54.503 02:44:19 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:54.503 02:44:19 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:54.762 [2024-07-11 02:44:19.713618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:20:54.762 [2024-07-11 02:44:19.713737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.762 [2024-07-11 02:44:19.713788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:54.762 [2024-07-11 02:44:19.713817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.762 [2024-07-11 02:44:19.714343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.762 [2024-07-11 02:44:19.714447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:54.762 [2024-07-11 02:44:19.714534] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:54.762 [2024-07-11 02:44:19.714589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.762 spare 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.762 02:44:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.762 [2024-07-11 02:44:19.814740] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:20:54.762 [2024-07-11 02:44:19.814764] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:54.762 [2024-07-11 02:44:19.814941] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2300 00:20:54.762 [2024-07-11 02:44:19.815368] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:20:54.762 [2024-07-11 02:44:19.815418] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:20:54.762 [2024-07-11 02:44:19.815568] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.020 02:44:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.020 "name": "raid_bdev1", 00:20:55.020 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:55.020 "strip_size_kb": 0, 00:20:55.020 "state": "online", 00:20:55.020 "raid_level": "raid1", 00:20:55.020 "superblock": true, 00:20:55.020 "num_base_bdevs": 4, 00:20:55.020 "num_base_bdevs_discovered": 3, 00:20:55.020 "num_base_bdevs_operational": 3, 00:20:55.020 "base_bdevs_list": [ 00:20:55.020 { 00:20:55.020 "name": "spare", 00:20:55.020 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:55.020 "is_configured": true, 00:20:55.020 "data_offset": 2048, 00:20:55.020 "data_size": 63488 00:20:55.020 }, 00:20:55.020 { 00:20:55.020 "name": null, 00:20:55.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.020 "is_configured": false, 00:20:55.020 
"data_offset": 2048, 00:20:55.020 "data_size": 63488 00:20:55.020 }, 00:20:55.020 { 00:20:55.020 "name": "BaseBdev3", 00:20:55.020 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:55.020 "is_configured": true, 00:20:55.020 "data_offset": 2048, 00:20:55.021 "data_size": 63488 00:20:55.021 }, 00:20:55.021 { 00:20:55.021 "name": "BaseBdev4", 00:20:55.021 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:55.021 "is_configured": true, 00:20:55.021 "data_offset": 2048, 00:20:55.021 "data_size": 63488 00:20:55.021 } 00:20:55.021 ] 00:20:55.021 }' 00:20:55.021 02:44:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.021 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.587 02:44:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.846 "name": "raid_bdev1", 00:20:55.846 "uuid": "4007a868-019f-4723-9a8e-de09ac3c4e3f", 00:20:55.846 "strip_size_kb": 0, 00:20:55.846 "state": "online", 00:20:55.846 "raid_level": "raid1", 00:20:55.846 "superblock": true, 00:20:55.846 "num_base_bdevs": 4, 00:20:55.846 "num_base_bdevs_discovered": 3, 00:20:55.846 "num_base_bdevs_operational": 3, 00:20:55.846 "base_bdevs_list": [ 00:20:55.846 { 00:20:55.846 "name": "spare", 00:20:55.846 "uuid": "9781ad09-2841-5d16-8f75-a93773329fe7", 00:20:55.846 "is_configured": true, 00:20:55.846 "data_offset": 2048, 00:20:55.846 "data_size": 63488 00:20:55.846 }, 00:20:55.846 { 00:20:55.846 "name": null, 00:20:55.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.846 "is_configured": false, 00:20:55.846 "data_offset": 2048, 00:20:55.846 "data_size": 63488 00:20:55.846 }, 00:20:55.846 { 00:20:55.846 "name": "BaseBdev3", 00:20:55.846 "uuid": "284911a1-2c61-505b-a1cf-a991c0264ebd", 00:20:55.846 "is_configured": true, 00:20:55.846 "data_offset": 2048, 00:20:55.846 "data_size": 63488 00:20:55.846 }, 00:20:55.846 { 00:20:55.846 "name": "BaseBdev4", 00:20:55.846 "uuid": "58ce2f82-560f-53f0-85ef-835b880bd012", 00:20:55.846 "is_configured": true, 00:20:55.846 "data_offset": 2048, 00:20:55.846 "data_size": 63488 00:20:55.846 } 00:20:55.846 ] 00:20:55.846 }' 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.846 02:44:20 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:56.105 02:44:21 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.105 02:44:21 -- bdev/bdev_raid.sh@709 -- # killprocess 137995 00:20:56.105 02:44:21 -- common/autotest_common.sh@926 -- # '[' -z 137995 ']' 00:20:56.105 02:44:21 -- 
common/autotest_common.sh@930 -- # kill -0 137995 00:20:56.105 02:44:21 -- common/autotest_common.sh@931 -- # uname 00:20:56.105 02:44:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.105 02:44:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137995 00:20:56.105 killing process with pid 137995 00:20:56.105 Received shutdown signal, test time was about 60.000000 seconds 00:20:56.105 00:20:56.105 Latency(us) 00:20:56.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.105 =================================================================================================================== 00:20:56.105 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.105 02:44:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.105 02:44:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.105 02:44:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137995' 00:20:56.105 02:44:21 -- common/autotest_common.sh@945 -- # kill 137995 00:20:56.105 02:44:21 -- common/autotest_common.sh@950 -- # wait 137995 00:20:56.105 [2024-07-11 02:44:21.133250] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.105 [2024-07-11 02:44:21.133364] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.105 [2024-07-11 02:44:21.133445] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.105 [2024-07-11 02:44:21.133456] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:20:56.105 [2024-07-11 02:44:21.178432] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.364 ************************************ 00:20:56.364 END TEST raid_rebuild_test_sb 00:20:56.364 ************************************ 00:20:56.364 02:44:21 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:56.364 00:20:56.364 real 0m25.447s 00:20:56.364 user 0m38.074s 00:20:56.364 sys 0m3.749s 00:20:56.364 02:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.364 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.364 02:44:21 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:20:56.364 02:44:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:56.364 02:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:56.364 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.623 ************************************ 00:20:56.623 START TEST raid_rebuild_test_io 00:20:56.623 ************************************ 00:20:56.623 02:44:21 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 
-- # (( i <= num_base_bdevs )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@544 -- # raid_pid=138666 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138666 /var/tmp/spdk-raid.sock 00:20:56.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:56.623 02:44:21 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:56.623 02:44:21 -- common/autotest_common.sh@819 -- # '[' -z 138666 ']' 00:20:56.623 02:44:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:56.623 02:44:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:56.623 02:44:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:56.623 02:44:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:56.623 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.623 [2024-07-11 02:44:21.516360] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:56.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.623 Zero copy mechanism will not be used. 
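The trace above captures the setup phase of raid_rebuild_test for the raid1 / 4-bdev / no-superblock / background-I/O variant: the shell locals, the base bdev array, and the bdevperf process that acts both as the RPC target and as the I/O generator. Pulled out of the xtrace, the sequence amounts to roughly the following sketch (reconstructed from the traced commands, not the verbatim bdev_raid.sh source; the backgrounding of bdevperf and the $! capture are assumed from the surrounding waitforlisten call, which the trace shows being handed pid 138666):

    # expand num_base_bdevs=4 into BaseBdev1..BaseBdev4 (bdev_raid.sh@521)
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    # run bdevperf as the RPC target: 60 s of randrw at a 50/50 read/write mix,
    # 3 MiB I/Os at queue depth 2; -z makes it idle until a perform_tests RPC
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock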
00:20:56.623 [2024-07-11 02:44:21.516553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138666 ] 00:20:56.623 [2024-07-11 02:44:21.653232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.623 [2024-07-11 02:44:21.712395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.882 [2024-07-11 02:44:21.763219] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.448 02:44:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:57.448 02:44:22 -- common/autotest_common.sh@852 -- # return 0 00:20:57.448 02:44:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.448 02:44:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:57.448 02:44:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.706 BaseBdev1 00:20:57.706 02:44:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.706 02:44:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:57.706 02:44:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:57.963 BaseBdev2 00:20:57.963 02:44:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.963 02:44:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:57.963 02:44:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:58.221 BaseBdev3 00:20:58.221 02:44:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:58.221 02:44:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:58.221 02:44:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:58.479 BaseBdev4 00:20:58.479 02:44:23 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:58.479 spare_malloc 00:20:58.479 02:44:23 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:58.737 spare_delay 00:20:58.737 02:44:23 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:58.996 [2024-07-11 02:44:23.906020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.996 [2024-07-11 02:44:23.906180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.996 [2024-07-11 02:44:23.906263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:58.996 [2024-07-11 02:44:23.906304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.996 [2024-07-11 02:44:23.908625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.996 [2024-07-11 02:44:23.908680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.996 spare 00:20:58.996 02:44:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:59.256 [2024-07-11 02:44:24.102117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.256 [2024-07-11 02:44:24.103945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.256 [2024-07-11 02:44:24.104011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.256 [2024-07-11 02:44:24.104046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:59.256 [2024-07-11 02:44:24.104132] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:59.256 [2024-07-11 02:44:24.104144] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:59.256 [2024-07-11 02:44:24.104321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:59.256 [2024-07-11 02:44:24.104662] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:59.256 [2024-07-11 02:44:24.104676] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:59.256 [2024-07-11 02:44:24.104841] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.256 02:44:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.515 02:44:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.515 "name": "raid_bdev1", 00:20:59.515 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:20:59.515 "strip_size_kb": 0, 00:20:59.515 "state": "online", 00:20:59.515 "raid_level": "raid1", 00:20:59.515 "superblock": false, 00:20:59.515 "num_base_bdevs": 4, 00:20:59.515 "num_base_bdevs_discovered": 4, 00:20:59.515 "num_base_bdevs_operational": 4, 00:20:59.515 "base_bdevs_list": [ 00:20:59.515 { 00:20:59.515 "name": "BaseBdev1", 00:20:59.515 "uuid": "d370ff22-5710-46ea-8ac0-70702f4032c6", 00:20:59.516 "is_configured": true, 00:20:59.516 "data_offset": 0, 00:20:59.516 "data_size": 65536 00:20:59.516 }, 00:20:59.516 { 00:20:59.516 "name": "BaseBdev2", 00:20:59.516 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:20:59.516 "is_configured": true, 00:20:59.516 "data_offset": 0, 00:20:59.516 "data_size": 65536 00:20:59.516 }, 00:20:59.516 { 00:20:59.516 "name": "BaseBdev3", 00:20:59.516 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:20:59.516 "is_configured": true, 00:20:59.516 "data_offset": 0, 00:20:59.516 "data_size": 65536 00:20:59.516 }, 
00:20:59.516 { 00:20:59.516 "name": "BaseBdev4", 00:20:59.516 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:20:59.516 "is_configured": true, 00:20:59.516 "data_offset": 0, 00:20:59.516 "data_size": 65536 00:20:59.516 } 00:20:59.516 ] 00:20:59.516 }' 00:20:59.516 02:44:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.516 02:44:24 -- common/autotest_common.sh@10 -- # set +x 00:21:00.083 02:44:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:00.083 02:44:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:00.341 [2024-07-11 02:44:25.262573] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.341 02:44:25 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:00.341 02:44:25 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.341 02:44:25 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:00.600 02:44:25 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:00.600 02:44:25 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:00.600 02:44:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:00.600 02:44:25 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:00.600 [2024-07-11 02:44:25.580826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:00.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:00.600 Zero copy mechanism will not be used. 00:21:00.600 Running I/O for 60 seconds... 
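With the workload armed, the test degrades the array by detaching BaseBdev1 over RPC and then asserts on the surviving state, which is what the bdev_raid.sh@591/@574 commands above and the verify_raid_bdev_state output below show. As a sketch of that idiom (paths, socket, and the jq select filter are taken from the trace; the two field assertions stand in for verify_raid_bdev_state's internal checks, whose exact jq filters are not shown in this excerpt):

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # kick off the 60 s background workload, then drop one base bdev under I/O
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    $rpc_py bdev_raid_remove_base_bdev BaseBdev1
    # the raid1 bdev must stay online with 3 of 4 base bdevs operational
    raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == "online" ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 3 ]]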
00:21:00.600 [2024-07-11 02:44:25.669948] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.600 [2024-07-11 02:44:25.676203] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.857 02:44:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.114 02:44:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.114 "name": "raid_bdev1", 00:21:01.114 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:01.114 "strip_size_kb": 0, 00:21:01.114 "state": "online", 00:21:01.114 "raid_level": "raid1", 00:21:01.114 "superblock": false, 00:21:01.114 "num_base_bdevs": 4, 00:21:01.114 "num_base_bdevs_discovered": 3, 00:21:01.114 "num_base_bdevs_operational": 3, 00:21:01.114 "base_bdevs_list": [ 00:21:01.114 { 00:21:01.114 "name": null, 00:21:01.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.114 "is_configured": false, 00:21:01.114 "data_offset": 0, 00:21:01.114 "data_size": 65536 00:21:01.114 }, 00:21:01.114 { 00:21:01.114 "name": "BaseBdev2", 00:21:01.114 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:21:01.114 "is_configured": true, 00:21:01.114 "data_offset": 0, 00:21:01.114 "data_size": 65536 00:21:01.114 }, 00:21:01.114 { 00:21:01.114 "name": "BaseBdev3", 00:21:01.114 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:01.114 "is_configured": true, 00:21:01.114 "data_offset": 0, 00:21:01.114 "data_size": 65536 00:21:01.114 }, 00:21:01.114 { 00:21:01.114 "name": "BaseBdev4", 00:21:01.114 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:01.114 "is_configured": true, 00:21:01.114 "data_offset": 0, 00:21:01.114 "data_size": 65536 00:21:01.114 } 00:21:01.114 ] 00:21:01.114 }' 00:21:01.114 02:44:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.114 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:21:01.681 02:44:26 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.940 [2024-07-11 02:44:26.981365] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.940 [2024-07-11 02:44:26.981432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.940 02:44:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:01.940 [2024-07-11 02:44:27.026384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:21:01.940 [2024-07-11 02:44:27.028369] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:02.198 [2024-07-11 
02:44:27.138698] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:02.198 [2024-07-11 02:44:27.139933] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:02.456 [2024-07-11 02:44:27.363094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:02.456 [2024-07-11 02:44:27.363430] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:02.714 [2024-07-11 02:44:27.635452] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:02.714 [2024-07-11 02:44:27.636637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:02.972 [2024-07-11 02:44:27.868883] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.972 [2024-07-11 02:44:27.869165] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.972 02:44:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.230 [2024-07-11 02:44:28.134522] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:03.230 02:44:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.230 "name": "raid_bdev1", 00:21:03.230 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:03.230 "strip_size_kb": 0, 00:21:03.230 "state": "online", 00:21:03.230 "raid_level": "raid1", 00:21:03.230 "superblock": false, 00:21:03.230 "num_base_bdevs": 4, 00:21:03.230 "num_base_bdevs_discovered": 4, 00:21:03.230 "num_base_bdevs_operational": 4, 00:21:03.230 "process": { 00:21:03.230 "type": "rebuild", 00:21:03.230 "target": "spare", 00:21:03.230 "progress": { 00:21:03.230 "blocks": 14336, 00:21:03.230 "percent": 21 00:21:03.230 } 00:21:03.230 }, 00:21:03.230 "base_bdevs_list": [ 00:21:03.230 { 00:21:03.230 "name": "spare", 00:21:03.230 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:03.230 "is_configured": true, 00:21:03.230 "data_offset": 0, 00:21:03.230 "data_size": 65536 00:21:03.230 }, 00:21:03.230 { 00:21:03.230 "name": "BaseBdev2", 00:21:03.230 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:21:03.230 "is_configured": true, 00:21:03.230 "data_offset": 0, 00:21:03.230 "data_size": 65536 00:21:03.230 }, 00:21:03.230 { 00:21:03.230 "name": "BaseBdev3", 00:21:03.230 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:03.230 "is_configured": true, 00:21:03.230 "data_offset": 0, 00:21:03.230 "data_size": 65536 00:21:03.230 }, 00:21:03.230 { 00:21:03.230 "name": "BaseBdev4", 00:21:03.230 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:03.230 "is_configured": true, 
00:21:03.230 "data_offset": 0, 00:21:03.230 "data_size": 65536 00:21:03.230 } 00:21:03.230 ] 00:21:03.230 }' 00:21:03.230 02:44:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.488 02:44:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.488 02:44:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.488 [2024-07-11 02:44:28.353439] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:03.488 02:44:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.488 02:44:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:03.488 [2024-07-11 02:44:28.560611] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.488 [2024-07-11 02:44:28.578646] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:03.488 [2024-07-11 02:44:28.579085] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:03.747 [2024-07-11 02:44:28.686646] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:03.747 [2024-07-11 02:44:28.689501] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.747 [2024-07-11 02:44:28.695944] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.747 02:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.006 02:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.006 "name": "raid_bdev1", 00:21:04.006 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:04.006 "strip_size_kb": 0, 00:21:04.006 "state": "online", 00:21:04.006 "raid_level": "raid1", 00:21:04.006 "superblock": false, 00:21:04.006 "num_base_bdevs": 4, 00:21:04.006 "num_base_bdevs_discovered": 3, 00:21:04.006 "num_base_bdevs_operational": 3, 00:21:04.006 "base_bdevs_list": [ 00:21:04.006 { 00:21:04.006 "name": null, 00:21:04.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.006 "is_configured": false, 00:21:04.006 "data_offset": 0, 00:21:04.006 "data_size": 65536 00:21:04.006 }, 00:21:04.006 { 00:21:04.006 "name": "BaseBdev2", 00:21:04.006 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:21:04.006 "is_configured": true, 00:21:04.006 "data_offset": 0, 00:21:04.006 "data_size": 65536 00:21:04.006 }, 00:21:04.006 { 
00:21:04.006 "name": "BaseBdev3", 00:21:04.006 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:04.006 "is_configured": true, 00:21:04.006 "data_offset": 0, 00:21:04.006 "data_size": 65536 00:21:04.006 }, 00:21:04.006 { 00:21:04.006 "name": "BaseBdev4", 00:21:04.006 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:04.006 "is_configured": true, 00:21:04.006 "data_offset": 0, 00:21:04.006 "data_size": 65536 00:21:04.006 } 00:21:04.006 ] 00:21:04.006 }' 00:21:04.006 02:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.006 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.939 02:44:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.939 02:44:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.939 "name": "raid_bdev1", 00:21:04.939 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:04.939 "strip_size_kb": 0, 00:21:04.939 "state": "online", 00:21:04.939 "raid_level": "raid1", 00:21:04.939 "superblock": false, 00:21:04.939 "num_base_bdevs": 4, 00:21:04.939 "num_base_bdevs_discovered": 3, 00:21:04.939 "num_base_bdevs_operational": 3, 00:21:04.939 "base_bdevs_list": [ 00:21:04.939 { 00:21:04.939 "name": null, 00:21:04.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.940 "is_configured": false, 00:21:04.940 "data_offset": 0, 00:21:04.940 "data_size": 65536 00:21:04.940 }, 00:21:04.940 { 00:21:04.940 "name": "BaseBdev2", 00:21:04.940 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:21:04.940 "is_configured": true, 00:21:04.940 "data_offset": 0, 00:21:04.940 "data_size": 65536 00:21:04.940 }, 00:21:04.940 { 00:21:04.940 "name": "BaseBdev3", 00:21:04.940 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:04.940 "is_configured": true, 00:21:04.940 "data_offset": 0, 00:21:04.940 "data_size": 65536 00:21:04.940 }, 00:21:04.940 { 00:21:04.940 "name": "BaseBdev4", 00:21:04.940 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:04.940 "is_configured": true, 00:21:04.940 "data_offset": 0, 00:21:04.940 "data_size": 65536 00:21:04.940 } 00:21:04.940 ] 00:21:04.940 }' 00:21:04.940 02:44:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.198 02:44:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:05.198 02:44:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.198 02:44:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:05.198 02:44:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.506 [2024-07-11 02:44:30.303480] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:05.506 [2024-07-11 02:44:30.303537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.506 02:44:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:05.506 [2024-07-11 02:44:30.355764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:21:05.506 
[2024-07-11 02:44:30.357540] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.507 [2024-07-11 02:44:30.495515] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:05.777 [2024-07-11 02:44:30.718358] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:05.777 [2024-07-11 02:44:30.718624] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:06.035 [2024-07-11 02:44:31.041914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:06.291 [2024-07-11 02:44:31.267499] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:06.292 [2024-07-11 02:44:31.267783] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.292 02:44:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.549 02:44:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.549 "name": "raid_bdev1", 00:21:06.549 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:06.549 "strip_size_kb": 0, 00:21:06.549 "state": "online", 00:21:06.549 "raid_level": "raid1", 00:21:06.549 "superblock": false, 00:21:06.549 "num_base_bdevs": 4, 00:21:06.549 "num_base_bdevs_discovered": 4, 00:21:06.549 "num_base_bdevs_operational": 4, 00:21:06.549 "process": { 00:21:06.549 "type": "rebuild", 00:21:06.549 "target": "spare", 00:21:06.549 "progress": { 00:21:06.549 "blocks": 12288, 00:21:06.549 "percent": 18 00:21:06.549 } 00:21:06.549 }, 00:21:06.549 "base_bdevs_list": [ 00:21:06.549 { 00:21:06.549 "name": "spare", 00:21:06.549 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:06.549 "is_configured": true, 00:21:06.549 "data_offset": 0, 00:21:06.549 "data_size": 65536 00:21:06.549 }, 00:21:06.549 { 00:21:06.549 "name": "BaseBdev2", 00:21:06.549 "uuid": "d94d741d-ca72-416f-84dd-ab2ae9388be3", 00:21:06.549 "is_configured": true, 00:21:06.549 "data_offset": 0, 00:21:06.549 "data_size": 65536 00:21:06.549 }, 00:21:06.549 { 00:21:06.549 "name": "BaseBdev3", 00:21:06.549 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:06.549 "is_configured": true, 00:21:06.549 "data_offset": 0, 00:21:06.549 "data_size": 65536 00:21:06.549 }, 00:21:06.549 { 00:21:06.549 "name": "BaseBdev4", 00:21:06.549 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:06.549 "is_configured": true, 00:21:06.549 "data_offset": 0, 00:21:06.549 "data_size": 65536 00:21:06.549 } 00:21:06.549 ] 00:21:06.549 }' 00:21:06.549 02:44:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.549 [2024-07-11 02:44:31.598126] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:21:06.549 [2024-07-11 02:44:31.599403] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:06.808 02:44:31 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:06.808 [2024-07-11 02:44:31.816791] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:06.808 [2024-07-11 02:44:31.817100] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:06.808 [2024-07-11 02:44:31.892792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:07.066 [2024-07-11 02:44:32.084759] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000022c0 00:21:07.066 [2024-07-11 02:44:32.084809] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002530 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.066 02:44:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.324 [2024-07-11 02:44:32.208904] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:07.324 [2024-07-11 02:44:32.209343] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:07.324 02:44:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.324 "name": "raid_bdev1", 00:21:07.324 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:07.324 "strip_size_kb": 0, 00:21:07.324 "state": "online", 00:21:07.324 "raid_level": "raid1", 00:21:07.324 "superblock": false, 00:21:07.324 "num_base_bdevs": 4, 00:21:07.324 "num_base_bdevs_discovered": 3, 00:21:07.324 "num_base_bdevs_operational": 3, 00:21:07.324 "process": { 00:21:07.324 "type": "rebuild", 00:21:07.324 "target": "spare", 00:21:07.324 "progress": { 00:21:07.324 "blocks": 20480, 00:21:07.324 "percent": 31 00:21:07.324 } 00:21:07.324 }, 00:21:07.324 "base_bdevs_list": [ 00:21:07.324 { 00:21:07.324 "name": "spare", 00:21:07.324 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:07.324 "is_configured": 
true, 00:21:07.324 "data_offset": 0, 00:21:07.324 "data_size": 65536 00:21:07.324 }, 00:21:07.324 { 00:21:07.324 "name": null, 00:21:07.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.324 "is_configured": false, 00:21:07.324 "data_offset": 0, 00:21:07.324 "data_size": 65536 00:21:07.324 }, 00:21:07.324 { 00:21:07.324 "name": "BaseBdev3", 00:21:07.324 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:07.324 "is_configured": true, 00:21:07.324 "data_offset": 0, 00:21:07.324 "data_size": 65536 00:21:07.324 }, 00:21:07.324 { 00:21:07.324 "name": "BaseBdev4", 00:21:07.324 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:07.325 "is_configured": true, 00:21:07.325 "data_offset": 0, 00:21:07.325 "data_size": 65536 00:21:07.325 } 00:21:07.325 ] 00:21:07.325 }' 00:21:07.325 02:44:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.325 [2024-07-11 02:44:32.339002] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:07.325 02:44:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.325 02:44:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@657 -- # local timeout=486 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.583 02:44:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.840 02:44:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.840 "name": "raid_bdev1", 00:21:07.840 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:07.840 "strip_size_kb": 0, 00:21:07.840 "state": "online", 00:21:07.840 "raid_level": "raid1", 00:21:07.840 "superblock": false, 00:21:07.840 "num_base_bdevs": 4, 00:21:07.841 "num_base_bdevs_discovered": 3, 00:21:07.841 "num_base_bdevs_operational": 3, 00:21:07.841 "process": { 00:21:07.841 "type": "rebuild", 00:21:07.841 "target": "spare", 00:21:07.841 "progress": { 00:21:07.841 "blocks": 26624, 00:21:07.841 "percent": 40 00:21:07.841 } 00:21:07.841 }, 00:21:07.841 "base_bdevs_list": [ 00:21:07.841 { 00:21:07.841 "name": "spare", 00:21:07.841 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:07.841 "is_configured": true, 00:21:07.841 "data_offset": 0, 00:21:07.841 "data_size": 65536 00:21:07.841 }, 00:21:07.841 { 00:21:07.841 "name": null, 00:21:07.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.841 "is_configured": false, 00:21:07.841 "data_offset": 0, 00:21:07.841 "data_size": 65536 00:21:07.841 }, 00:21:07.841 { 00:21:07.841 "name": "BaseBdev3", 00:21:07.841 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:07.841 "is_configured": true, 00:21:07.841 "data_offset": 0, 00:21:07.841 "data_size": 65536 00:21:07.841 }, 00:21:07.841 { 00:21:07.841 "name": "BaseBdev4", 00:21:07.841 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:07.841 "is_configured": true, 
00:21:07.841 "data_offset": 0, 00:21:07.841 "data_size": 65536 00:21:07.841 } 00:21:07.841 ] 00:21:07.841 }' 00:21:07.841 02:44:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.841 [2024-07-11 02:44:32.759068] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:07.841 02:44:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.841 02:44:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.841 02:44:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.841 02:44:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:08.099 [2024-07-11 02:44:33.091554] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:08.099 [2024-07-11 02:44:33.092115] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:08.668 [2024-07-11 02:44:33.638965] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.925 02:44:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.925 [2024-07-11 02:44:33.957835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.183 "name": "raid_bdev1", 00:21:09.183 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:09.183 "strip_size_kb": 0, 00:21:09.183 "state": "online", 00:21:09.183 "raid_level": "raid1", 00:21:09.183 "superblock": false, 00:21:09.183 "num_base_bdevs": 4, 00:21:09.183 "num_base_bdevs_discovered": 3, 00:21:09.183 "num_base_bdevs_operational": 3, 00:21:09.183 "process": { 00:21:09.183 "type": "rebuild", 00:21:09.183 "target": "spare", 00:21:09.183 "progress": { 00:21:09.183 "blocks": 45056, 00:21:09.183 "percent": 68 00:21:09.183 } 00:21:09.183 }, 00:21:09.183 "base_bdevs_list": [ 00:21:09.183 { 00:21:09.183 "name": "spare", 00:21:09.183 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:09.183 "is_configured": true, 00:21:09.183 "data_offset": 0, 00:21:09.183 "data_size": 65536 00:21:09.183 }, 00:21:09.183 { 00:21:09.183 "name": null, 00:21:09.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.183 "is_configured": false, 00:21:09.183 "data_offset": 0, 00:21:09.183 "data_size": 65536 00:21:09.183 }, 00:21:09.183 { 00:21:09.183 "name": "BaseBdev3", 00:21:09.183 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:09.183 "is_configured": true, 00:21:09.183 "data_offset": 0, 00:21:09.183 "data_size": 65536 00:21:09.183 }, 00:21:09.183 { 00:21:09.183 "name": "BaseBdev4", 00:21:09.183 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:09.183 "is_configured": true, 00:21:09.183 "data_offset": 
0, 00:21:09.183 "data_size": 65536 00:21:09.183 } 00:21:09.183 ] 00:21:09.183 }' 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.183 02:44:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:09.749 [2024-07-11 02:44:34.751082] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:10.316 [2024-07-11 02:44:35.182997] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.316 02:44:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.316 [2024-07-11 02:44:35.289038] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:10.316 [2024-07-11 02:44:35.292710] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.574 02:44:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.574 "name": "raid_bdev1", 00:21:10.574 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:10.574 "strip_size_kb": 0, 00:21:10.574 "state": "online", 00:21:10.574 "raid_level": "raid1", 00:21:10.574 "superblock": false, 00:21:10.574 "num_base_bdevs": 4, 00:21:10.574 "num_base_bdevs_discovered": 3, 00:21:10.574 "num_base_bdevs_operational": 3, 00:21:10.574 "base_bdevs_list": [ 00:21:10.574 { 00:21:10.574 "name": "spare", 00:21:10.574 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:10.574 "is_configured": true, 00:21:10.574 "data_offset": 0, 00:21:10.574 "data_size": 65536 00:21:10.574 }, 00:21:10.574 { 00:21:10.574 "name": null, 00:21:10.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.574 "is_configured": false, 00:21:10.574 "data_offset": 0, 00:21:10.574 "data_size": 65536 00:21:10.574 }, 00:21:10.574 { 00:21:10.574 "name": "BaseBdev3", 00:21:10.574 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:10.574 "is_configured": true, 00:21:10.574 "data_offset": 0, 00:21:10.574 "data_size": 65536 00:21:10.574 }, 00:21:10.574 { 00:21:10.574 "name": "BaseBdev4", 00:21:10.574 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:10.574 "is_configured": true, 00:21:10.574 "data_offset": 0, 00:21:10.575 "data_size": 65536 00:21:10.575 } 00:21:10.575 ] 00:21:10.575 }' 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@660 -- # break 00:21:10.575 02:44:35 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.575 02:44:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.833 "name": "raid_bdev1", 00:21:10.833 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:10.833 "strip_size_kb": 0, 00:21:10.833 "state": "online", 00:21:10.833 "raid_level": "raid1", 00:21:10.833 "superblock": false, 00:21:10.833 "num_base_bdevs": 4, 00:21:10.833 "num_base_bdevs_discovered": 3, 00:21:10.833 "num_base_bdevs_operational": 3, 00:21:10.833 "base_bdevs_list": [ 00:21:10.833 { 00:21:10.833 "name": "spare", 00:21:10.833 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:10.833 "is_configured": true, 00:21:10.833 "data_offset": 0, 00:21:10.833 "data_size": 65536 00:21:10.833 }, 00:21:10.833 { 00:21:10.833 "name": null, 00:21:10.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.833 "is_configured": false, 00:21:10.833 "data_offset": 0, 00:21:10.833 "data_size": 65536 00:21:10.833 }, 00:21:10.833 { 00:21:10.833 "name": "BaseBdev3", 00:21:10.833 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:10.833 "is_configured": true, 00:21:10.833 "data_offset": 0, 00:21:10.833 "data_size": 65536 00:21:10.833 }, 00:21:10.833 { 00:21:10.833 "name": "BaseBdev4", 00:21:10.833 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:10.833 "is_configured": true, 00:21:10.833 "data_offset": 0, 00:21:10.833 "data_size": 65536 00:21:10.833 } 00:21:10.833 ] 00:21:10.833 }' 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.833 02:44:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.092 02:44:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.092 "name": "raid_bdev1", 00:21:11.092 "uuid": "8615fdd0-dbea-4493-844b-f9367019e322", 00:21:11.092 "strip_size_kb": 0, 
00:21:11.092 "state": "online", 00:21:11.092 "raid_level": "raid1", 00:21:11.092 "superblock": false, 00:21:11.092 "num_base_bdevs": 4, 00:21:11.092 "num_base_bdevs_discovered": 3, 00:21:11.092 "num_base_bdevs_operational": 3, 00:21:11.092 "base_bdevs_list": [ 00:21:11.092 { 00:21:11.092 "name": "spare", 00:21:11.092 "uuid": "10b916cc-f782-598f-b5b6-16a64230f099", 00:21:11.092 "is_configured": true, 00:21:11.092 "data_offset": 0, 00:21:11.092 "data_size": 65536 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": null, 00:21:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.092 "is_configured": false, 00:21:11.092 "data_offset": 0, 00:21:11.092 "data_size": 65536 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": "BaseBdev3", 00:21:11.092 "uuid": "0c782d09-7479-4f0c-879b-cd488a6e60ce", 00:21:11.092 "is_configured": true, 00:21:11.092 "data_offset": 0, 00:21:11.092 "data_size": 65536 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": "BaseBdev4", 00:21:11.092 "uuid": "66916f56-1f30-4689-97a5-82a21a085ac4", 00:21:11.092 "is_configured": true, 00:21:11.092 "data_offset": 0, 00:21:11.092 "data_size": 65536 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 }' 00:21:11.092 02:44:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.092 02:44:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.029 02:44:36 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:12.029 [2024-07-11 02:44:37.000181] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.029 [2024-07-11 02:44:37.000217] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:12.029 00:21:12.029 Latency(us) 00:21:12.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.029 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:12.029 raid_bdev1 : 11.45 120.09 360.28 0.00 0.00 11193.92 268.10 116773.24 00:21:12.030 =================================================================================================================== 00:21:12.030 Total : 120.09 360.28 0.00 0.00 11193.92 268.10 116773.24 00:21:12.030 [2024-07-11 02:44:37.035519] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.030 [2024-07-11 02:44:37.035566] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.030 0 00:21:12.030 [2024-07-11 02:44:37.035669] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.030 [2024-07-11 02:44:37.035682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:12.030 02:44:37 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.030 02:44:37 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:12.288 02:44:37 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:12.288 02:44:37 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:12.288 02:44:37 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:12.288 02:44:37 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.288 02:44:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:12.547 /dev/nbd0 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:12.547 02:44:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:12.547 02:44:37 -- common/autotest_common.sh@857 -- # local i 00:21:12.547 02:44:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:12.547 02:44:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:12.547 02:44:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:12.547 02:44:37 -- common/autotest_common.sh@861 -- # break 00:21:12.547 02:44:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:12.547 02:44:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:12.547 02:44:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.547 1+0 records in 00:21:12.547 1+0 records out 00:21:12.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383176 s, 10.7 MB/s 00:21:12.547 02:44:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.547 02:44:37 -- common/autotest_common.sh@874 -- # size=4096 00:21:12.547 02:44:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.547 02:44:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:12.547 02:44:37 -- common/autotest_common.sh@877 -- # return 0 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@678 -- # continue 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:12.547 02:44:37 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.547 02:44:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:12.806 /dev/nbd1 00:21:12.806 02:44:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:12.806 02:44:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:12.806 02:44:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:12.806 02:44:37 -- common/autotest_common.sh@857 -- # local i 00:21:12.806 02:44:37 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:12.806 02:44:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:12.806 02:44:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:12.806 02:44:37 -- common/autotest_common.sh@861 -- # break 00:21:12.806 02:44:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:12.806 02:44:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:12.806 02:44:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.806 1+0 records in 00:21:12.806 1+0 records out 00:21:12.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343748 s, 11.9 MB/s 00:21:12.806 02:44:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.806 02:44:37 -- common/autotest_common.sh@874 -- # size=4096 00:21:12.806 02:44:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.806 02:44:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:12.806 02:44:37 -- common/autotest_common.sh@877 -- # return 0 00:21:12.806 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.806 02:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.806 02:44:37 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:13.064 02:44:37 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@51 -- # local i 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:13.064 02:44:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@41 -- # break 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@45 -- # return 0 00:21:13.323 02:44:38 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:13.323 02:44:38 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:13.323 02:44:38 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@12 -- # local i 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@14 -- # (( i 
= 0 )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:13.323 02:44:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:13.582 /dev/nbd1 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:13.582 02:44:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:13.582 02:44:38 -- common/autotest_common.sh@857 -- # local i 00:21:13.582 02:44:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:13.582 02:44:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:13.582 02:44:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:13.582 02:44:38 -- common/autotest_common.sh@861 -- # break 00:21:13.582 02:44:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:13.582 02:44:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:13.582 02:44:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.582 1+0 records in 00:21:13.582 1+0 records out 00:21:13.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406873 s, 10.1 MB/s 00:21:13.582 02:44:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.582 02:44:38 -- common/autotest_common.sh@874 -- # size=4096 00:21:13.582 02:44:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.582 02:44:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:13.582 02:44:38 -- common/autotest_common.sh@877 -- # return 0 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:13.582 02:44:38 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:13.582 02:44:38 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@51 -- # local i 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:13.582 02:44:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:13.840 02:44:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@41 -- # break 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@45 -- # return 0 00:21:14.099 02:44:39 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@51 -- # local i 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:14.099 02:44:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@41 -- # break 00:21:14.358 02:44:39 -- bdev/nbd_common.sh@45 -- # return 0 00:21:14.358 02:44:39 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:14.358 02:44:39 -- bdev/bdev_raid.sh@709 -- # killprocess 138666 00:21:14.358 02:44:39 -- common/autotest_common.sh@926 -- # '[' -z 138666 ']' 00:21:14.358 02:44:39 -- common/autotest_common.sh@930 -- # kill -0 138666 00:21:14.358 02:44:39 -- common/autotest_common.sh@931 -- # uname 00:21:14.358 02:44:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:14.358 02:44:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138666 00:21:14.358 02:44:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:14.358 02:44:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:14.358 02:44:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138666' 00:21:14.358 killing process with pid 138666 00:21:14.358 02:44:39 -- common/autotest_common.sh@945 -- # kill 138666 00:21:14.358 Received shutdown signal, test time was about 13.771295 seconds 00:21:14.358 00:21:14.358 Latency(us) 00:21:14.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.358 =================================================================================================================== 00:21:14.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.358 02:44:39 -- common/autotest_common.sh@950 -- # wait 138666 00:21:14.358 [2024-07-11 02:44:39.354124] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.358 [2024-07-11 02:44:39.394259] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:14.617 02:44:39 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:14.617 00:21:14.617 real 0m18.149s 00:21:14.617 user 0m29.006s 00:21:14.617 sys 0m2.208s 00:21:14.617 02:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.617 02:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:14.617 ************************************ 00:21:14.617 END TEST raid_rebuild_test_io 00:21:14.617 ************************************ 00:21:14.617 02:44:39 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:21:14.617 02:44:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:14.617 02:44:39 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:21:14.617 02:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:14.617 ************************************ 00:21:14.617 START TEST raid_rebuild_test_sb_io 00:21:14.617 ************************************ 00:21:14.617 02:44:39 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:21:14.617 02:44:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:14.617 02:44:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=139225 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139225 /var/tmp/spdk-raid.sock 00:21:14.618 02:44:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:14.618 02:44:39 -- common/autotest_common.sh@819 -- # '[' -z 139225 ']' 00:21:14.618 02:44:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:14.618 02:44:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:14.618 02:44:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:14.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
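The xtrace above (bdev_raid.sh@517-544) shows how raid_rebuild_test_sb_io assembles its parameters before launching bdevperf: the base_bdevs array is expanded from a for loop over num_base_bdevs, and ' -s' is appended to create_arg because superblock=true. The following is a minimal standalone sketch of that setup logic, not the test script itself; the variable names are taken from the trace, while the final echo is purely illustrative.

#!/usr/bin/env bash
# Minimal sketch of the parameter setup visible in the xtrace above
# (bdev_raid.sh@517-544); the trailing echo is illustrative, not part of the test.
raid_level=raid1
num_base_bdevs=4
superblock=true
background_io=true

# Same expansion the trace shows for bdev_raid.sh@521:
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))

create_arg=''
if [ "$superblock" = true ]; then
    create_arg+=' -s'   # bdev_raid_create -s: store an on-disk superblock
fi

echo "level=$raid_level bdevs=${base_bdevs[*]} create_arg='${create_arg}'"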
00:21:14.618 02:44:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:14.618 02:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:14.876 [2024-07-11 02:44:39.730952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:14.876 [2024-07-11 02:44:39.731351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139225 ] 00:21:14.876 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:14.876 Zero copy mechanism will not be used. 00:21:14.876 [2024-07-11 02:44:39.876175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.876 [2024-07-11 02:44:39.939543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.134 [2024-07-11 02:44:39.992226] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.702 02:44:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:15.702 02:44:40 -- common/autotest_common.sh@852 -- # return 0 00:21:15.702 02:44:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:15.702 02:44:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:15.702 02:44:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:15.962 BaseBdev1_malloc 00:21:15.962 02:44:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:16.220 [2024-07-11 02:44:41.100476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:16.220 [2024-07-11 02:44:41.100803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.220 [2024-07-11 02:44:41.100967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:21:16.220 [2024-07-11 02:44:41.101101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.220 [2024-07-11 02:44:41.103748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.220 [2024-07-11 02:44:41.103958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:16.220 BaseBdev1 00:21:16.220 02:44:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:16.220 02:44:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:16.220 02:44:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:16.220 BaseBdev2_malloc 00:21:16.478 02:44:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:16.478 [2024-07-11 02:44:41.502719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:16.478 [2024-07-11 02:44:41.503008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.478 [2024-07-11 02:44:41.503090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:16.478 [2024-07-11 02:44:41.503352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.478 [2024-07-11 02:44:41.505454] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:16.478 [2024-07-11 02:44:41.505621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:16.478 BaseBdev2 00:21:16.478 02:44:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:16.478 02:44:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:16.478 02:44:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:16.736 BaseBdev3_malloc 00:21:16.736 02:44:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:16.993 [2024-07-11 02:44:41.929297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:16.993 [2024-07-11 02:44:41.929540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.994 [2024-07-11 02:44:41.929614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:16.994 [2024-07-11 02:44:41.929948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.994 [2024-07-11 02:44:41.932156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.994 [2024-07-11 02:44:41.932336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:16.994 BaseBdev3 00:21:16.994 02:44:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:16.994 02:44:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:16.994 02:44:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:17.252 BaseBdev4_malloc 00:21:17.252 02:44:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:17.252 [2024-07-11 02:44:42.327822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:17.252 [2024-07-11 02:44:42.328102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.252 [2024-07-11 02:44:42.328173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:17.252 [2024-07-11 02:44:42.328445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.252 [2024-07-11 02:44:42.330662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.252 [2024-07-11 02:44:42.330834] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:17.252 BaseBdev4 00:21:17.252 02:44:42 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:17.509 spare_malloc 00:21:17.510 02:44:42 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:17.767 spare_delay 00:21:17.767 02:44:42 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:18.025 [2024-07-11 02:44:42.921989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.025 [2024-07-11 02:44:42.922219] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.025 [2024-07-11 02:44:42.922286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:18.025 [2024-07-11 02:44:42.922408] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.025 [2024-07-11 02:44:42.924510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.025 [2024-07-11 02:44:42.924661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:18.025 spare 00:21:18.025 02:44:42 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:18.026 [2024-07-11 02:44:43.106100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.026 [2024-07-11 02:44:43.107913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.026 [2024-07-11 02:44:43.108117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.026 [2024-07-11 02:44:43.108207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:18.026 [2024-07-11 02:44:43.108494] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:21:18.026 [2024-07-11 02:44:43.108535] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:18.026 [2024-07-11 02:44:43.108741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:21:18.026 [2024-07-11 02:44:43.109212] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:21:18.026 [2024-07-11 02:44:43.109362] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:21:18.026 [2024-07-11 02:44:43.109594] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.284 02:44:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.551 02:44:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.551 "name": "raid_bdev1", 00:21:18.551 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:18.551 "strip_size_kb": 0, 00:21:18.551 "state": "online", 00:21:18.551 "raid_level": "raid1", 00:21:18.551 "superblock": true, 00:21:18.551 "num_base_bdevs": 4, 00:21:18.551 "num_base_bdevs_discovered": 4, 00:21:18.551 "num_base_bdevs_operational": 4, 00:21:18.551 "base_bdevs_list": [ 
00:21:18.551 { 00:21:18.551 "name": "BaseBdev1", 00:21:18.551 "uuid": "d27f1227-60ef-5689-bf9b-8f4eb6995249", 00:21:18.551 "is_configured": true, 00:21:18.551 "data_offset": 2048, 00:21:18.551 "data_size": 63488 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "name": "BaseBdev2", 00:21:18.551 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:18.551 "is_configured": true, 00:21:18.551 "data_offset": 2048, 00:21:18.551 "data_size": 63488 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "name": "BaseBdev3", 00:21:18.551 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:18.551 "is_configured": true, 00:21:18.551 "data_offset": 2048, 00:21:18.551 "data_size": 63488 00:21:18.551 }, 00:21:18.551 { 00:21:18.551 "name": "BaseBdev4", 00:21:18.551 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:18.551 "is_configured": true, 00:21:18.551 "data_offset": 2048, 00:21:18.551 "data_size": 63488 00:21:18.551 } 00:21:18.551 ] 00:21:18.551 }' 00:21:18.551 02:44:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.551 02:44:43 -- common/autotest_common.sh@10 -- # set +x 00:21:19.130 02:44:44 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:19.130 02:44:44 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:19.389 [2024-07-11 02:44:44.298579] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.389 02:44:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:19.389 02:44:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:19.389 02:44:44 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.647 02:44:44 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:19.647 02:44:44 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:19.647 02:44:44 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:19.647 02:44:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:19.647 [2024-07-11 02:44:44.680911] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:19.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:19.647 Zero copy mechanism will not be used. 00:21:19.647 Running I/O for 60 seconds... 
00:21:19.905 [2024-07-11 02:44:44.768550] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.905 [2024-07-11 02:44:44.775039] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.905 02:44:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.163 02:44:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.163 "name": "raid_bdev1", 00:21:20.163 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:20.163 "strip_size_kb": 0, 00:21:20.163 "state": "online", 00:21:20.163 "raid_level": "raid1", 00:21:20.163 "superblock": true, 00:21:20.163 "num_base_bdevs": 4, 00:21:20.163 "num_base_bdevs_discovered": 3, 00:21:20.163 "num_base_bdevs_operational": 3, 00:21:20.163 "base_bdevs_list": [ 00:21:20.163 { 00:21:20.163 "name": null, 00:21:20.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.163 "is_configured": false, 00:21:20.163 "data_offset": 2048, 00:21:20.163 "data_size": 63488 00:21:20.163 }, 00:21:20.163 { 00:21:20.163 "name": "BaseBdev2", 00:21:20.163 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:20.163 "is_configured": true, 00:21:20.163 "data_offset": 2048, 00:21:20.163 "data_size": 63488 00:21:20.163 }, 00:21:20.163 { 00:21:20.163 "name": "BaseBdev3", 00:21:20.163 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:20.163 "is_configured": true, 00:21:20.163 "data_offset": 2048, 00:21:20.164 "data_size": 63488 00:21:20.164 }, 00:21:20.164 { 00:21:20.164 "name": "BaseBdev4", 00:21:20.164 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:20.164 "is_configured": true, 00:21:20.164 "data_offset": 2048, 00:21:20.164 "data_size": 63488 00:21:20.164 } 00:21:20.164 ] 00:21:20.164 }' 00:21:20.164 02:44:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.164 02:44:45 -- common/autotest_common.sh@10 -- # set +x 00:21:20.729 02:44:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.987 [2024-07-11 02:44:45.979907] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:20.987 [2024-07-11 02:44:45.980199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.987 [2024-07-11 02:44:46.004021] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:20.987 [2024-07-11 02:44:46.006363] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.987 02:44:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:21.245 
[2024-07-11 02:44:46.131627] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:21.245 [2024-07-11 02:44:46.133092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:21.503 [2024-07-11 02:44:46.364494] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:21.761 [2024-07-11 02:44:46.713478] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:22.018 [2024-07-11 02:44:46.939604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:22.018 [2024-07-11 02:44:46.940512] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.018 02:44:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.275 02:44:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.275 "name": "raid_bdev1", 00:21:22.275 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:22.275 "strip_size_kb": 0, 00:21:22.275 "state": "online", 00:21:22.275 "raid_level": "raid1", 00:21:22.275 "superblock": true, 00:21:22.275 "num_base_bdevs": 4, 00:21:22.275 "num_base_bdevs_discovered": 4, 00:21:22.275 "num_base_bdevs_operational": 4, 00:21:22.275 "process": { 00:21:22.275 "type": "rebuild", 00:21:22.275 "target": "spare", 00:21:22.275 "progress": { 00:21:22.275 "blocks": 12288, 00:21:22.275 "percent": 19 00:21:22.275 } 00:21:22.275 }, 00:21:22.275 "base_bdevs_list": [ 00:21:22.275 { 00:21:22.275 "name": "spare", 00:21:22.275 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:22.275 "is_configured": true, 00:21:22.275 "data_offset": 2048, 00:21:22.275 "data_size": 63488 00:21:22.275 }, 00:21:22.275 { 00:21:22.275 "name": "BaseBdev2", 00:21:22.275 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:22.275 "is_configured": true, 00:21:22.275 "data_offset": 2048, 00:21:22.275 "data_size": 63488 00:21:22.275 }, 00:21:22.275 { 00:21:22.275 "name": "BaseBdev3", 00:21:22.275 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:22.275 "is_configured": true, 00:21:22.275 "data_offset": 2048, 00:21:22.275 "data_size": 63488 00:21:22.275 }, 00:21:22.275 { 00:21:22.275 "name": "BaseBdev4", 00:21:22.275 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:22.275 "is_configured": true, 00:21:22.275 "data_offset": 2048, 00:21:22.275 "data_size": 63488 00:21:22.275 } 00:21:22.275 ] 00:21:22.275 }' 00:21:22.275 02:44:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.275 [2024-07-11 02:44:47.302803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:22.275 [2024-07-11 02:44:47.304255] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:22.275 02:44:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.275 02:44:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.533 02:44:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.533 02:44:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:22.533 [2024-07-11 02:44:47.536356] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:22.533 [2024-07-11 02:44:47.566526] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.791 [2024-07-11 02:44:47.747514] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:22.791 [2024-07-11 02:44:47.756358] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.791 [2024-07-11 02:44:47.775896] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.791 02:44:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.049 02:44:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.049 "name": "raid_bdev1", 00:21:23.049 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:23.049 "strip_size_kb": 0, 00:21:23.049 "state": "online", 00:21:23.049 "raid_level": "raid1", 00:21:23.049 "superblock": true, 00:21:23.049 "num_base_bdevs": 4, 00:21:23.049 "num_base_bdevs_discovered": 3, 00:21:23.049 "num_base_bdevs_operational": 3, 00:21:23.049 "base_bdevs_list": [ 00:21:23.049 { 00:21:23.049 "name": null, 00:21:23.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.049 "is_configured": false, 00:21:23.049 "data_offset": 2048, 00:21:23.049 "data_size": 63488 00:21:23.049 }, 00:21:23.049 { 00:21:23.049 "name": "BaseBdev2", 00:21:23.049 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:23.049 "is_configured": true, 00:21:23.049 "data_offset": 2048, 00:21:23.049 "data_size": 63488 00:21:23.049 }, 00:21:23.049 { 00:21:23.049 "name": "BaseBdev3", 00:21:23.049 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:23.049 "is_configured": true, 00:21:23.049 "data_offset": 2048, 00:21:23.049 "data_size": 63488 00:21:23.049 }, 00:21:23.049 { 00:21:23.049 "name": "BaseBdev4", 00:21:23.049 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:23.049 "is_configured": true, 00:21:23.049 "data_offset": 2048, 00:21:23.049 "data_size": 63488 00:21:23.049 } 00:21:23.049 ] 
00:21:23.049 }' 00:21:23.049 02:44:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.049 02:44:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.982 02:44:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.240 "name": "raid_bdev1", 00:21:24.240 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:24.240 "strip_size_kb": 0, 00:21:24.240 "state": "online", 00:21:24.240 "raid_level": "raid1", 00:21:24.240 "superblock": true, 00:21:24.240 "num_base_bdevs": 4, 00:21:24.240 "num_base_bdevs_discovered": 3, 00:21:24.240 "num_base_bdevs_operational": 3, 00:21:24.240 "base_bdevs_list": [ 00:21:24.240 { 00:21:24.240 "name": null, 00:21:24.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.240 "is_configured": false, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 }, 00:21:24.240 { 00:21:24.240 "name": "BaseBdev2", 00:21:24.240 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:24.240 "is_configured": true, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 }, 00:21:24.240 { 00:21:24.240 "name": "BaseBdev3", 00:21:24.240 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:24.240 "is_configured": true, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 }, 00:21:24.240 { 00:21:24.240 "name": "BaseBdev4", 00:21:24.240 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:24.240 "is_configured": true, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 } 00:21:24.240 ] 00:21:24.240 }' 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:24.240 02:44:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.497 [2024-07-11 02:44:49.430490] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:24.497 [2024-07-11 02:44:49.430768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.497 02:44:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:24.497 [2024-07-11 02:44:49.488284] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:21:24.497 [2024-07-11 02:44:49.490586] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:24.754 [2024-07-11 02:44:49.599790] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:24.754 [2024-07-11 02:44:49.600326] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:21:24.754 [2024-07-11 02:44:49.817764] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:24.754 [2024-07-11 02:44:49.818738] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:25.320 [2024-07-11 02:44:50.187257] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:25.320 [2024-07-11 02:44:50.302800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.320 [2024-07-11 02:44:50.303626] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.579 02:44:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.579 [2024-07-11 02:44:50.654183] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:25.837 02:44:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.837 "name": "raid_bdev1", 00:21:25.837 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:25.837 "strip_size_kb": 0, 00:21:25.837 "state": "online", 00:21:25.837 "raid_level": "raid1", 00:21:25.837 "superblock": true, 00:21:25.837 "num_base_bdevs": 4, 00:21:25.837 "num_base_bdevs_discovered": 4, 00:21:25.837 "num_base_bdevs_operational": 4, 00:21:25.837 "process": { 00:21:25.837 "type": "rebuild", 00:21:25.837 "target": "spare", 00:21:25.837 "progress": { 00:21:25.837 "blocks": 14336, 00:21:25.837 "percent": 22 00:21:25.837 } 00:21:25.837 }, 00:21:25.837 "base_bdevs_list": [ 00:21:25.837 { 00:21:25.837 "name": "spare", 00:21:25.837 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:25.837 "is_configured": true, 00:21:25.837 "data_offset": 2048, 00:21:25.837 "data_size": 63488 00:21:25.837 }, 00:21:25.837 { 00:21:25.837 "name": "BaseBdev2", 00:21:25.837 "uuid": "7ffed350-4403-5525-9ac1-cbf6af36e4c3", 00:21:25.837 "is_configured": true, 00:21:25.837 "data_offset": 2048, 00:21:25.837 "data_size": 63488 00:21:25.837 }, 00:21:25.837 { 00:21:25.837 "name": "BaseBdev3", 00:21:25.837 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:25.837 "is_configured": true, 00:21:25.837 "data_offset": 2048, 00:21:25.837 "data_size": 63488 00:21:25.837 }, 00:21:25.837 { 00:21:25.837 "name": "BaseBdev4", 00:21:25.837 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:25.837 "is_configured": true, 00:21:25.837 "data_offset": 2048, 00:21:25.837 "data_size": 63488 00:21:25.837 } 00:21:25.837 ] 00:21:25.838 }' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:25.838 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:25.838 02:44:50 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:25.838 [2024-07-11 02:44:50.873590] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:26.095 [2024-07-11 02:44:51.066536] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:26.095 [2024-07-11 02:44:51.117186] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:26.353 [2024-07-11 02:44:51.226267] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:21:26.353 [2024-07-11 02:44:51.226445] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.353 02:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.353 [2024-07-11 02:44:51.383973] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:26.611 02:44:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.611 "name": "raid_bdev1", 00:21:26.611 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:26.611 "strip_size_kb": 0, 00:21:26.611 "state": "online", 00:21:26.611 "raid_level": "raid1", 00:21:26.611 "superblock": true, 00:21:26.611 "num_base_bdevs": 4, 00:21:26.611 "num_base_bdevs_discovered": 3, 00:21:26.611 "num_base_bdevs_operational": 3, 00:21:26.611 "process": { 00:21:26.611 "type": "rebuild", 00:21:26.611 "target": "spare", 00:21:26.611 "progress": { 00:21:26.611 "blocks": 24576, 00:21:26.611 "percent": 38 00:21:26.611 } 00:21:26.611 }, 00:21:26.611 "base_bdevs_list": [ 00:21:26.611 { 00:21:26.611 "name": "spare", 00:21:26.611 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:26.611 "is_configured": true, 00:21:26.611 "data_offset": 2048, 00:21:26.611 "data_size": 63488 00:21:26.611 }, 00:21:26.611 { 00:21:26.611 "name": null, 00:21:26.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.611 "is_configured": false, 00:21:26.611 "data_offset": 2048, 00:21:26.611 "data_size": 63488 00:21:26.611 }, 00:21:26.611 { 00:21:26.611 "name": "BaseBdev3", 00:21:26.611 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 
00:21:26.611 "is_configured": true, 00:21:26.611 "data_offset": 2048, 00:21:26.611 "data_size": 63488 00:21:26.611 }, 00:21:26.611 { 00:21:26.611 "name": "BaseBdev4", 00:21:26.611 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:26.611 "is_configured": true, 00:21:26.611 "data_offset": 2048, 00:21:26.611 "data_size": 63488 00:21:26.611 } 00:21:26.611 ] 00:21:26.611 }' 00:21:26.611 02:44:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.611 02:44:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.611 02:44:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.869 02:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.128 02:44:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.128 "name": "raid_bdev1", 00:21:27.128 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:27.128 "strip_size_kb": 0, 00:21:27.128 "state": "online", 00:21:27.128 "raid_level": "raid1", 00:21:27.128 "superblock": true, 00:21:27.128 "num_base_bdevs": 4, 00:21:27.128 "num_base_bdevs_discovered": 3, 00:21:27.128 "num_base_bdevs_operational": 3, 00:21:27.128 "process": { 00:21:27.128 "type": "rebuild", 00:21:27.128 "target": "spare", 00:21:27.128 "progress": { 00:21:27.128 "blocks": 32768, 00:21:27.128 "percent": 51 00:21:27.128 } 00:21:27.128 }, 00:21:27.128 "base_bdevs_list": [ 00:21:27.128 { 00:21:27.128 "name": "spare", 00:21:27.128 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:27.128 "is_configured": true, 00:21:27.128 "data_offset": 2048, 00:21:27.128 "data_size": 63488 00:21:27.128 }, 00:21:27.128 { 00:21:27.128 "name": null, 00:21:27.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.128 "is_configured": false, 00:21:27.128 "data_offset": 2048, 00:21:27.128 "data_size": 63488 00:21:27.128 }, 00:21:27.128 { 00:21:27.128 "name": "BaseBdev3", 00:21:27.128 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:27.128 "is_configured": true, 00:21:27.128 "data_offset": 2048, 00:21:27.128 "data_size": 63488 00:21:27.128 }, 00:21:27.128 { 00:21:27.128 "name": "BaseBdev4", 00:21:27.128 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:27.128 "is_configured": true, 00:21:27.128 "data_offset": 2048, 00:21:27.128 "data_size": 63488 00:21:27.128 } 00:21:27.128 ] 00:21:27.128 }' 00:21:27.128 02:44:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.128 02:44:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.128 02:44:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.128 [2024-07-11 02:44:52.074535] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:27.128 02:44:52 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.128 02:44:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:28.061 [2024-07-11 02:44:52.887745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:28.061 [2024-07-11 02:44:52.995487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.061 02:44:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.319 [2024-07-11 02:44:53.314070] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:28.319 02:44:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.319 "name": "raid_bdev1", 00:21:28.319 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:28.319 "strip_size_kb": 0, 00:21:28.319 "state": "online", 00:21:28.319 "raid_level": "raid1", 00:21:28.319 "superblock": true, 00:21:28.319 "num_base_bdevs": 4, 00:21:28.319 "num_base_bdevs_discovered": 3, 00:21:28.319 "num_base_bdevs_operational": 3, 00:21:28.319 "process": { 00:21:28.319 "type": "rebuild", 00:21:28.319 "target": "spare", 00:21:28.319 "progress": { 00:21:28.319 "blocks": 57344, 00:21:28.319 "percent": 90 00:21:28.319 } 00:21:28.319 }, 00:21:28.319 "base_bdevs_list": [ 00:21:28.319 { 00:21:28.319 "name": "spare", 00:21:28.319 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:28.319 "is_configured": true, 00:21:28.319 "data_offset": 2048, 00:21:28.319 "data_size": 63488 00:21:28.319 }, 00:21:28.319 { 00:21:28.319 "name": null, 00:21:28.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.319 "is_configured": false, 00:21:28.319 "data_offset": 2048, 00:21:28.319 "data_size": 63488 00:21:28.319 }, 00:21:28.319 { 00:21:28.319 "name": "BaseBdev3", 00:21:28.319 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:28.319 "is_configured": true, 00:21:28.319 "data_offset": 2048, 00:21:28.319 "data_size": 63488 00:21:28.319 }, 00:21:28.319 { 00:21:28.319 "name": "BaseBdev4", 00:21:28.319 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:28.319 "is_configured": true, 00:21:28.319 "data_offset": 2048, 00:21:28.319 "data_size": 63488 00:21:28.319 } 00:21:28.319 ] 00:21:28.319 }' 00:21:28.319 02:44:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.577 02:44:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.577 02:44:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.577 02:44:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.577 02:44:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:28.577 [2024-07-11 02:44:53.644953] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:28.834 [2024-07-11 02:44:53.744957] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:21:28.834 [2024-07-11 02:44:53.747340] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.400 02:44:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.659 02:44:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.659 "name": "raid_bdev1", 00:21:29.659 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:29.659 "strip_size_kb": 0, 00:21:29.659 "state": "online", 00:21:29.659 "raid_level": "raid1", 00:21:29.659 "superblock": true, 00:21:29.659 "num_base_bdevs": 4, 00:21:29.659 "num_base_bdevs_discovered": 3, 00:21:29.659 "num_base_bdevs_operational": 3, 00:21:29.659 "base_bdevs_list": [ 00:21:29.659 { 00:21:29.659 "name": "spare", 00:21:29.659 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:29.659 "is_configured": true, 00:21:29.659 "data_offset": 2048, 00:21:29.659 "data_size": 63488 00:21:29.659 }, 00:21:29.659 { 00:21:29.659 "name": null, 00:21:29.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.659 "is_configured": false, 00:21:29.659 "data_offset": 2048, 00:21:29.659 "data_size": 63488 00:21:29.659 }, 00:21:29.659 { 00:21:29.659 "name": "BaseBdev3", 00:21:29.659 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:29.659 "is_configured": true, 00:21:29.659 "data_offset": 2048, 00:21:29.659 "data_size": 63488 00:21:29.659 }, 00:21:29.659 { 00:21:29.659 "name": "BaseBdev4", 00:21:29.659 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:29.659 "is_configured": true, 00:21:29.659 "data_offset": 2048, 00:21:29.659 "data_size": 63488 00:21:29.659 } 00:21:29.659 ] 00:21:29.659 }' 00:21:29.659 02:44:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@660 -- # break 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.917 02:44:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.174 02:44:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.174 "name": "raid_bdev1", 00:21:30.174 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:30.174 "strip_size_kb": 0, 00:21:30.175 "state": "online", 00:21:30.175 
"raid_level": "raid1", 00:21:30.175 "superblock": true, 00:21:30.175 "num_base_bdevs": 4, 00:21:30.175 "num_base_bdevs_discovered": 3, 00:21:30.175 "num_base_bdevs_operational": 3, 00:21:30.175 "base_bdevs_list": [ 00:21:30.175 { 00:21:30.175 "name": "spare", 00:21:30.175 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:30.175 "is_configured": true, 00:21:30.175 "data_offset": 2048, 00:21:30.175 "data_size": 63488 00:21:30.175 }, 00:21:30.175 { 00:21:30.175 "name": null, 00:21:30.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.175 "is_configured": false, 00:21:30.175 "data_offset": 2048, 00:21:30.175 "data_size": 63488 00:21:30.175 }, 00:21:30.175 { 00:21:30.175 "name": "BaseBdev3", 00:21:30.175 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:30.175 "is_configured": true, 00:21:30.175 "data_offset": 2048, 00:21:30.175 "data_size": 63488 00:21:30.175 }, 00:21:30.175 { 00:21:30.175 "name": "BaseBdev4", 00:21:30.175 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:30.175 "is_configured": true, 00:21:30.175 "data_offset": 2048, 00:21:30.175 "data_size": 63488 00:21:30.175 } 00:21:30.175 ] 00:21:30.175 }' 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.175 02:44:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.433 02:44:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.433 "name": "raid_bdev1", 00:21:30.433 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:30.433 "strip_size_kb": 0, 00:21:30.433 "state": "online", 00:21:30.433 "raid_level": "raid1", 00:21:30.433 "superblock": true, 00:21:30.433 "num_base_bdevs": 4, 00:21:30.433 "num_base_bdevs_discovered": 3, 00:21:30.433 "num_base_bdevs_operational": 3, 00:21:30.433 "base_bdevs_list": [ 00:21:30.433 { 00:21:30.433 "name": "spare", 00:21:30.433 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:30.433 "is_configured": true, 00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": null, 00:21:30.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.433 "is_configured": false, 00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": "BaseBdev3", 00:21:30.433 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:30.433 "is_configured": true, 
00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": "BaseBdev4", 00:21:30.433 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:30.433 "is_configured": true, 00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 } 00:21:30.433 ] 00:21:30.433 }' 00:21:30.433 02:44:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.433 02:44:55 -- common/autotest_common.sh@10 -- # set +x 00:21:31.367 02:44:56 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:31.367 [2024-07-11 02:44:56.369343] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:31.367 [2024-07-11 02:44:56.369503] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.633 00:21:31.633 Latency(us) 00:21:31.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.633 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:31.633 raid_bdev1 : 11.78 111.14 333.41 0.00 0.00 12845.05 273.69 118203.11 00:21:31.633 =================================================================================================================== 00:21:31.634 Total : 111.14 333.41 0.00 0.00 12845.05 273.69 118203.11 00:21:31.634 [2024-07-11 02:44:56.465568] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.634 [2024-07-11 02:44:56.465756] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.634 0 00:21:31.634 [2024-07-11 02:44:56.466124] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.634 [2024-07-11 02:44:56.466238] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:21:31.634 02:44:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.634 02:44:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:31.634 02:44:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:31.634 02:44:56 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:31.634 02:44:56 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:31.634 02:44:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:31.910 /dev/nbd0 00:21:31.910 02:44:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:31.910 02:44:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:31.910 02:44:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:31.910 02:44:56 -- common/autotest_common.sh@857 -- # local i 00:21:31.910 02:44:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:31.910 02:44:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 
00:21:31.910 02:44:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:31.910 02:44:56 -- common/autotest_common.sh@861 -- # break 00:21:31.910 02:44:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:31.910 02:44:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:31.910 02:44:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:31.910 1+0 records in 00:21:31.910 1+0 records out 00:21:31.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726111 s, 5.6 MB/s 00:21:31.910 02:44:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:31.910 02:44:56 -- common/autotest_common.sh@874 -- # size=4096 00:21:31.910 02:44:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.169 02:44:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:32.169 02:44:57 -- common/autotest_common.sh@877 -- # return 0 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@678 -- # continue 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:32.169 /dev/nbd1 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:32.169 02:44:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:32.169 02:44:57 -- common/autotest_common.sh@857 -- # local i 00:21:32.169 02:44:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:32.169 02:44:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:32.169 02:44:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:32.169 02:44:57 -- common/autotest_common.sh@861 -- # break 00:21:32.169 02:44:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:32.169 02:44:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:32.169 02:44:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:32.169 1+0 records in 00:21:32.169 1+0 records out 00:21:32.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514149 s, 8.0 MB/s 00:21:32.169 02:44:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.169 02:44:57 -- 
common/autotest_common.sh@874 -- # size=4096 00:21:32.169 02:44:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.169 02:44:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:32.169 02:44:57 -- common/autotest_common.sh@877 -- # return 0 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.169 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.169 02:44:57 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:32.428 02:44:57 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@51 -- # local i 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.428 02:44:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@41 -- # break 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.686 02:44:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:32.686 02:44:57 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:32.686 02:44:57 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.686 02:44:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:32.944 /dev/nbd1 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:32.944 02:44:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:32.944 02:44:57 -- common/autotest_common.sh@857 -- # local i 00:21:32.944 02:44:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:32.944 02:44:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:32.944 02:44:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:32.944 02:44:57 -- common/autotest_common.sh@861 -- # break 
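Editor's note: from here the trace is the integrity sweep over the surviving base bdevs — each one is exported on /dev/nbd1 and compared against the raid device already sitting on /dev/nbd0, with the removed BaseBdev2 slot (an empty string, hence the '[' -z '' ']' / continue pair above) skipped. Condensed into a sketch:

    # /dev/nbd0 already carries raid_bdev1; walk the remaining members.
    for bdev in "${base_bdevs[@]:1}"; do
        [ -z "$bdev" ] && continue      # slot of the removed BaseBdev2
        nbd_start_disks /var/tmp/spdk-raid.sock "$bdev" /dev/nbd1
        cmp -i 1048576 /dev/nbd0 /dev/nbd1
        nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
    done

The -i 1048576 offset is not arbitrary: it is exactly the data_offset of 2048 blocks times the 512-byte block size reported in the JSON dumps above, so the byte-for-byte comparison starts where user data begins, past the raid superblock.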
00:21:32.944 02:44:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:32.944 02:44:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:32.944 02:44:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:32.944 1+0 records in 00:21:32.944 1+0 records out 00:21:32.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564548 s, 7.3 MB/s 00:21:32.944 02:44:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.944 02:44:57 -- common/autotest_common.sh@874 -- # size=4096 00:21:32.944 02:44:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.944 02:44:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:32.944 02:44:57 -- common/autotest_common.sh@877 -- # return 0 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.944 02:44:57 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:32.944 02:44:57 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@51 -- # local i 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.944 02:44:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:33.203 02:44:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@41 -- # break 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.462 02:44:58 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@51 -- # local i 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.462 02:44:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@41 -- # break 00:21:33.720 02:44:58 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.720 02:44:58 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:33.720 02:44:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:33.720 02:44:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:33.720 02:44:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:33.980 02:44:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:34.238 [2024-07-11 02:44:59.193247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:34.238 [2024-07-11 02:44:59.193534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.238 [2024-07-11 02:44:59.193729] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:34.238 [2024-07-11 02:44:59.193849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.238 [2024-07-11 02:44:59.196376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.238 [2024-07-11 02:44:59.196592] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:34.238 [2024-07-11 02:44:59.196803] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:34.238 [2024-07-11 02:44:59.196975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.238 BaseBdev1 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@696 -- # continue 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:34.238 02:44:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:34.496 02:44:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:34.754 [2024-07-11 02:44:59.609344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:34.754 [2024-07-11 02:44:59.609551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.754 [2024-07-11 02:44:59.609628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:34.754 [2024-07-11 02:44:59.609849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.754 [2024-07-11 02:44:59.610414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.754 [2024-07-11 02:44:59.610634] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:34.754 [2024-07-11 02:44:59.610819] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev BaseBdev3 00:21:34.754 [2024-07-11 02:44:59.610922] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:34.754 [2024-07-11 02:44:59.611009] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.754 [2024-07-11 02:44:59.611071] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:21:34.754 [2024-07-11 02:44:59.611282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.754 BaseBdev3 00:21:34.754 02:44:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:34.754 02:44:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:34.754 02:44:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:34.754 02:44:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:35.011 [2024-07-11 02:44:59.993006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:35.011 [2024-07-11 02:44:59.993233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.011 [2024-07-11 02:44:59.993303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:35.011 [2024-07-11 02:44:59.993521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.011 [2024-07-11 02:44:59.993989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.011 [2024-07-11 02:44:59.994173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:35.011 [2024-07-11 02:44:59.994348] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:35.011 [2024-07-11 02:44:59.994455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.011 BaseBdev4 00:21:35.011 02:45:00 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:35.271 02:45:00 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:35.530 [2024-07-11 02:45:00.389151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:35.530 [2024-07-11 02:45:00.389378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.530 [2024-07-11 02:45:00.389446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:35.530 [2024-07-11 02:45:00.389728] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.530 [2024-07-11 02:45:00.390262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.530 [2024-07-11 02:45:00.390454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:35.530 [2024-07-11 02:45:00.390629] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:35.530 [2024-07-11 02:45:00.390749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:35.530 spare 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 3 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.530 [2024-07-11 02:45:00.490950] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:21:35.530 [2024-07-11 02:45:00.491110] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:35.530 [2024-07-11 02:45:00.491278] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036bb0 00:21:35.530 [2024-07-11 02:45:00.491947] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:21:35.530 [2024-07-11 02:45:00.492098] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:21:35.530 [2024-07-11 02:45:00.492338] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.530 "name": "raid_bdev1", 00:21:35.530 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:35.530 "strip_size_kb": 0, 00:21:35.530 "state": "online", 00:21:35.530 "raid_level": "raid1", 00:21:35.530 "superblock": true, 00:21:35.530 "num_base_bdevs": 4, 00:21:35.530 "num_base_bdevs_discovered": 3, 00:21:35.530 "num_base_bdevs_operational": 3, 00:21:35.530 "base_bdevs_list": [ 00:21:35.530 { 00:21:35.530 "name": "spare", 00:21:35.530 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:35.530 "is_configured": true, 00:21:35.530 "data_offset": 2048, 00:21:35.530 "data_size": 63488 00:21:35.530 }, 00:21:35.530 { 00:21:35.530 "name": null, 00:21:35.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.530 "is_configured": false, 00:21:35.530 "data_offset": 2048, 00:21:35.530 "data_size": 63488 00:21:35.530 }, 00:21:35.530 { 00:21:35.530 "name": "BaseBdev3", 00:21:35.530 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:35.530 "is_configured": true, 00:21:35.530 "data_offset": 2048, 00:21:35.530 "data_size": 63488 00:21:35.530 }, 00:21:35.530 { 00:21:35.530 "name": "BaseBdev4", 00:21:35.530 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:35.530 "is_configured": true, 00:21:35.530 "data_offset": 2048, 00:21:35.530 "data_size": 63488 00:21:35.530 } 00:21:35.530 ] 00:21:35.530 }' 00:21:35.530 02:45:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.530 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@185 
-- # local target=none 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:36.466 "name": "raid_bdev1", 00:21:36.466 "uuid": "640395d4-45e4-4071-aec9-b3d27167e78c", 00:21:36.466 "strip_size_kb": 0, 00:21:36.466 "state": "online", 00:21:36.466 "raid_level": "raid1", 00:21:36.466 "superblock": true, 00:21:36.466 "num_base_bdevs": 4, 00:21:36.466 "num_base_bdevs_discovered": 3, 00:21:36.466 "num_base_bdevs_operational": 3, 00:21:36.466 "base_bdevs_list": [ 00:21:36.466 { 00:21:36.466 "name": "spare", 00:21:36.466 "uuid": "cd2fce1a-4b85-575f-85dd-1fd8db1b2576", 00:21:36.466 "is_configured": true, 00:21:36.466 "data_offset": 2048, 00:21:36.466 "data_size": 63488 00:21:36.466 }, 00:21:36.466 { 00:21:36.466 "name": null, 00:21:36.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.466 "is_configured": false, 00:21:36.466 "data_offset": 2048, 00:21:36.466 "data_size": 63488 00:21:36.466 }, 00:21:36.466 { 00:21:36.466 "name": "BaseBdev3", 00:21:36.466 "uuid": "d885e17b-2f27-5208-922a-b6c03bb3f48c", 00:21:36.466 "is_configured": true, 00:21:36.466 "data_offset": 2048, 00:21:36.466 "data_size": 63488 00:21:36.466 }, 00:21:36.466 { 00:21:36.466 "name": "BaseBdev4", 00:21:36.466 "uuid": "764ff9da-5d2b-5f8c-8783-c457c3b27975", 00:21:36.466 "is_configured": true, 00:21:36.466 "data_offset": 2048, 00:21:36.466 "data_size": 63488 00:21:36.466 } 00:21:36.466 ] 00:21:36.466 }' 00:21:36.466 02:45:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:36.724 02:45:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:36.724 02:45:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:36.724 02:45:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:36.724 02:45:01 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.724 02:45:01 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:36.983 02:45:01 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.983 02:45:01 -- bdev/bdev_raid.sh@709 -- # killprocess 139225 00:21:36.983 02:45:01 -- common/autotest_common.sh@926 -- # '[' -z 139225 ']' 00:21:36.983 02:45:01 -- common/autotest_common.sh@930 -- # kill -0 139225 00:21:36.983 02:45:01 -- common/autotest_common.sh@931 -- # uname 00:21:36.983 02:45:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.983 02:45:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139225 00:21:36.983 killing process with pid 139225 00:21:36.983 Received shutdown signal, test time was about 17.270018 seconds 00:21:36.983 00:21:36.983 Latency(us) 00:21:36.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.983 =================================================================================================================== 00:21:36.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.983 02:45:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:36.983 02:45:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:36.983 02:45:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139225' 00:21:36.983 02:45:01 -- 
common/autotest_common.sh@945 -- # kill 139225 00:21:36.983 02:45:01 -- common/autotest_common.sh@950 -- # wait 139225 00:21:36.983 [2024-07-11 02:45:01.953193] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.983 [2024-07-11 02:45:01.953340] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.983 [2024-07-11 02:45:01.953589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.983 [2024-07-11 02:45:01.953751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:21:36.983 [2024-07-11 02:45:01.993834] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.241 ************************************ 00:21:37.241 END TEST raid_rebuild_test_sb_io 00:21:37.241 ************************************ 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:37.241 00:21:37.241 real 0m22.553s 00:21:37.241 user 0m37.287s 00:21:37.241 sys 0m2.712s 00:21:37.241 02:45:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.241 02:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:37.241 02:45:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:37.241 02:45:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:37.241 02:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:37.241 ************************************ 00:21:37.241 START TEST raid5f_state_function_test 00:21:37.241 ************************************ 00:21:37.241 02:45:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:37.241 Process raid pid: 139866 00:21:37.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
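Editor's note: the new raid5f_state_function_test derives its own parameters before the daemon is even up. As the trace shows, the base bdev list is generated rather than hard-coded, so the same test body serves both iterations of the surrounding for n in {3..4} loop; the strip-size and superblock arguments follow in the trace below:

    num_base_bdevs=3
    # yields: BaseBdev1 BaseBdev2 BaseBdev3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))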
00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=139866 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139866' 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139866 /var/tmp/spdk-raid.sock 00:21:37.241 02:45:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:37.241 02:45:02 -- common/autotest_common.sh@819 -- # '[' -z 139866 ']' 00:21:37.241 02:45:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:37.241 02:45:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:37.241 02:45:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:37.241 02:45:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:37.241 02:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:37.499 [2024-07-11 02:45:02.344051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
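Editor's note: at this point a dedicated bdev_svc app is being launched with its own socket (-r /var/tmp/spdk-raid.sock) and waitforlisten blocks until that UNIX socket accepts RPCs; everything that follows in the log is rpc.py traffic against it. A minimal stand-alone sequence mirroring the traced calls (repo paths shortened):

    sock=/var/tmp/spdk-raid.sock
    rpc="scripts/rpc.py -s $sock"

    # raid5f with a 64 KiB strip (-z 64, chosen above because raid5f != raid1).
    # The base bdevs do not exist yet, so this parks the array in the
    # "configuring" state until malloc bdevs are created and claimed.
    $rpc bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The "Currently unable to find bdev with name" NOTICE trio logged below for BaseBdev1-3 is therefore expected, not an error: the raid module records the missing names and completes configuration as each base bdev appears and is claimed.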
00:21:37.499 [2024-07-11 02:45:02.344454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.499 [2024-07-11 02:45:02.488836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.499 [2024-07-11 02:45:02.564360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.757 [2024-07-11 02:45:02.617525] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.322 02:45:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:38.322 02:45:03 -- common/autotest_common.sh@852 -- # return 0 00:21:38.322 02:45:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:38.579 [2024-07-11 02:45:03.473735] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.579 [2024-07-11 02:45:03.474003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.579 [2024-07-11 02:45:03.474123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.579 [2024-07-11 02:45:03.474177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.579 [2024-07-11 02:45:03.474262] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.579 [2024-07-11 02:45:03.474433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.579 02:45:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.836 02:45:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.836 "name": "Existed_Raid", 00:21:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.836 "strip_size_kb": 64, 00:21:38.836 "state": "configuring", 00:21:38.836 "raid_level": "raid5f", 00:21:38.836 "superblock": false, 00:21:38.836 "num_base_bdevs": 3, 00:21:38.836 "num_base_bdevs_discovered": 0, 00:21:38.836 "num_base_bdevs_operational": 3, 00:21:38.836 "base_bdevs_list": [ 00:21:38.836 { 00:21:38.836 "name": "BaseBdev1", 00:21:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.836 "is_configured": false, 00:21:38.836 "data_offset": 0, 00:21:38.836 "data_size": 0 00:21:38.836 }, 00:21:38.836 { 00:21:38.836 "name": "BaseBdev2", 00:21:38.836 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:38.836 "is_configured": false, 00:21:38.836 "data_offset": 0, 00:21:38.836 "data_size": 0 00:21:38.836 }, 00:21:38.836 { 00:21:38.836 "name": "BaseBdev3", 00:21:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.836 "is_configured": false, 00:21:38.836 "data_offset": 0, 00:21:38.836 "data_size": 0 00:21:38.836 } 00:21:38.836 ] 00:21:38.836 }' 00:21:38.836 02:45:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.836 02:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:39.401 02:45:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:39.659 [2024-07-11 02:45:04.673751] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:39.659 [2024-07-11 02:45:04.673975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:39.659 02:45:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:39.916 [2024-07-11 02:45:04.917862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:39.916 [2024-07-11 02:45:04.918089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:39.916 [2024-07-11 02:45:04.918195] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:39.916 [2024-07-11 02:45:04.918252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:39.916 [2024-07-11 02:45:04.918447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:39.916 [2024-07-11 02:45:04.918508] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:39.916 02:45:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:40.175 [2024-07-11 02:45:05.176288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.175 BaseBdev1 00:21:40.175 02:45:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:40.175 02:45:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:40.175 02:45:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:40.175 02:45:05 -- common/autotest_common.sh@889 -- # local i 00:21:40.175 02:45:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:40.175 02:45:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:40.175 02:45:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.433 02:45:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:40.691 [ 00:21:40.691 { 00:21:40.691 "name": "BaseBdev1", 00:21:40.691 "aliases": [ 00:21:40.691 "0480d713-7967-4836-afb4-2b74a24cfece" 00:21:40.691 ], 00:21:40.691 "product_name": "Malloc disk", 00:21:40.691 "block_size": 512, 00:21:40.691 "num_blocks": 65536, 00:21:40.691 "uuid": "0480d713-7967-4836-afb4-2b74a24cfece", 00:21:40.691 "assigned_rate_limits": { 00:21:40.691 "rw_ios_per_sec": 0, 00:21:40.691 "rw_mbytes_per_sec": 0, 00:21:40.691 "r_mbytes_per_sec": 0, 00:21:40.691 "w_mbytes_per_sec": 
0 00:21:40.691 }, 00:21:40.691 "claimed": true, 00:21:40.691 "claim_type": "exclusive_write", 00:21:40.691 "zoned": false, 00:21:40.691 "supported_io_types": { 00:21:40.691 "read": true, 00:21:40.691 "write": true, 00:21:40.691 "unmap": true, 00:21:40.691 "write_zeroes": true, 00:21:40.691 "flush": true, 00:21:40.691 "reset": true, 00:21:40.691 "compare": false, 00:21:40.691 "compare_and_write": false, 00:21:40.691 "abort": true, 00:21:40.691 "nvme_admin": false, 00:21:40.691 "nvme_io": false 00:21:40.691 }, 00:21:40.691 "memory_domains": [ 00:21:40.691 { 00:21:40.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.691 "dma_device_type": 2 00:21:40.691 } 00:21:40.691 ], 00:21:40.691 "driver_specific": {} 00:21:40.691 } 00:21:40.691 ] 00:21:40.691 02:45:05 -- common/autotest_common.sh@895 -- # return 0 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.691 02:45:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.691 "name": "Existed_Raid", 00:21:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.691 "strip_size_kb": 64, 00:21:40.691 "state": "configuring", 00:21:40.691 "raid_level": "raid5f", 00:21:40.691 "superblock": false, 00:21:40.691 "num_base_bdevs": 3, 00:21:40.691 "num_base_bdevs_discovered": 1, 00:21:40.691 "num_base_bdevs_operational": 3, 00:21:40.691 "base_bdevs_list": [ 00:21:40.691 { 00:21:40.691 "name": "BaseBdev1", 00:21:40.691 "uuid": "0480d713-7967-4836-afb4-2b74a24cfece", 00:21:40.691 "is_configured": true, 00:21:40.691 "data_offset": 0, 00:21:40.691 "data_size": 65536 00:21:40.691 }, 00:21:40.691 { 00:21:40.691 "name": "BaseBdev2", 00:21:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.691 "is_configured": false, 00:21:40.691 "data_offset": 0, 00:21:40.691 "data_size": 0 00:21:40.691 }, 00:21:40.691 { 00:21:40.691 "name": "BaseBdev3", 00:21:40.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.949 "is_configured": false, 00:21:40.949 "data_offset": 0, 00:21:40.949 "data_size": 0 00:21:40.949 } 00:21:40.949 ] 00:21:40.949 }' 00:21:40.949 02:45:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.949 02:45:05 -- common/autotest_common.sh@10 -- # set +x 00:21:41.515 02:45:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:41.773 [2024-07-11 02:45:06.676630] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:41.773 [2024-07-11 02:45:06.676846] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005a80 name Existed_Raid, state configuring 00:21:41.773 02:45:06 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:41.773 02:45:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:42.031 [2024-07-11 02:45:06.928764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.031 [2024-07-11 02:45:06.931152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:42.031 [2024-07-11 02:45:06.931354] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:42.031 [2024-07-11 02:45:06.931455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:42.031 [2024-07-11 02:45:06.931550] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.031 02:45:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.289 02:45:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.289 "name": "Existed_Raid", 00:21:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.289 "strip_size_kb": 64, 00:21:42.289 "state": "configuring", 00:21:42.289 "raid_level": "raid5f", 00:21:42.289 "superblock": false, 00:21:42.289 "num_base_bdevs": 3, 00:21:42.289 "num_base_bdevs_discovered": 1, 00:21:42.289 "num_base_bdevs_operational": 3, 00:21:42.289 "base_bdevs_list": [ 00:21:42.289 { 00:21:42.289 "name": "BaseBdev1", 00:21:42.289 "uuid": "0480d713-7967-4836-afb4-2b74a24cfece", 00:21:42.289 "is_configured": true, 00:21:42.289 "data_offset": 0, 00:21:42.289 "data_size": 65536 00:21:42.289 }, 00:21:42.289 { 00:21:42.289 "name": "BaseBdev2", 00:21:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.289 "is_configured": false, 00:21:42.289 "data_offset": 0, 00:21:42.289 "data_size": 0 00:21:42.289 }, 00:21:42.289 { 00:21:42.289 "name": "BaseBdev3", 00:21:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.289 "is_configured": false, 00:21:42.289 "data_offset": 0, 00:21:42.289 "data_size": 0 00:21:42.289 } 00:21:42.289 ] 00:21:42.289 }' 00:21:42.289 02:45:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.289 02:45:07 -- common/autotest_common.sh@10 -- # set +x 00:21:42.856 02:45:07 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:43.114 [2024-07-11 02:45:08.165579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.114 BaseBdev2 00:21:43.114 02:45:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:43.114 02:45:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:43.114 02:45:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:43.114 02:45:08 -- common/autotest_common.sh@889 -- # local i 00:21:43.114 02:45:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:43.114 02:45:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:43.114 02:45:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.372 02:45:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:43.630 [ 00:21:43.630 { 00:21:43.630 "name": "BaseBdev2", 00:21:43.630 "aliases": [ 00:21:43.630 "e65fa997-a578-482c-ba2a-45174c7028fe" 00:21:43.630 ], 00:21:43.630 "product_name": "Malloc disk", 00:21:43.630 "block_size": 512, 00:21:43.630 "num_blocks": 65536, 00:21:43.630 "uuid": "e65fa997-a578-482c-ba2a-45174c7028fe", 00:21:43.630 "assigned_rate_limits": { 00:21:43.630 "rw_ios_per_sec": 0, 00:21:43.630 "rw_mbytes_per_sec": 0, 00:21:43.630 "r_mbytes_per_sec": 0, 00:21:43.630 "w_mbytes_per_sec": 0 00:21:43.630 }, 00:21:43.630 "claimed": true, 00:21:43.630 "claim_type": "exclusive_write", 00:21:43.630 "zoned": false, 00:21:43.630 "supported_io_types": { 00:21:43.630 "read": true, 00:21:43.630 "write": true, 00:21:43.630 "unmap": true, 00:21:43.630 "write_zeroes": true, 00:21:43.630 "flush": true, 00:21:43.630 "reset": true, 00:21:43.630 "compare": false, 00:21:43.630 "compare_and_write": false, 00:21:43.630 "abort": true, 00:21:43.630 "nvme_admin": false, 00:21:43.630 "nvme_io": false 00:21:43.630 }, 00:21:43.630 "memory_domains": [ 00:21:43.630 { 00:21:43.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.630 "dma_device_type": 2 00:21:43.630 } 00:21:43.630 ], 00:21:43.630 "driver_specific": {} 00:21:43.630 } 00:21:43.630 ] 00:21:43.630 02:45:08 -- common/autotest_common.sh@895 -- # return 0 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.630 02:45:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
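[Editor's note] The bdev_raid.sh@127 lines around this point are the harness's verify_raid_bdev_state helper at work: after each base bdev is added, it queries the raid bdev over the test's private RPC socket and asserts selected fields of the JSON that follows below. A minimal sketch of that pattern, assuming only the rpc.py and jq invocations already visible in this trace; the shell variable names here are illustrative, not the script's own:

    # Private RPC socket used by the whole raid test run.
    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)
    # Fetch every raid bdev and pick out the array under test, exactly as the
    # @127 lines above do.
    info=$("${rpc[@]}" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # At this point in the trace the array must still be "configuring" (only
    # two of the three base bdevs exist), and the discovered count ticks up
    # by one for each malloc base bdev that gets claimed.
    [[ "$(jq -r '.state' <<< "$info")" == "configuring" ]]
    (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ))
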
00:21:43.889 02:45:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.889 "name": "Existed_Raid", 00:21:43.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.889 "strip_size_kb": 64, 00:21:43.889 "state": "configuring", 00:21:43.889 "raid_level": "raid5f", 00:21:43.889 "superblock": false, 00:21:43.889 "num_base_bdevs": 3, 00:21:43.889 "num_base_bdevs_discovered": 2, 00:21:43.889 "num_base_bdevs_operational": 3, 00:21:43.889 "base_bdevs_list": [ 00:21:43.889 { 00:21:43.889 "name": "BaseBdev1", 00:21:43.889 "uuid": "0480d713-7967-4836-afb4-2b74a24cfece", 00:21:43.889 "is_configured": true, 00:21:43.889 "data_offset": 0, 00:21:43.889 "data_size": 65536 00:21:43.889 }, 00:21:43.889 { 00:21:43.889 "name": "BaseBdev2", 00:21:43.889 "uuid": "e65fa997-a578-482c-ba2a-45174c7028fe", 00:21:43.889 "is_configured": true, 00:21:43.889 "data_offset": 0, 00:21:43.889 "data_size": 65536 00:21:43.889 }, 00:21:43.889 { 00:21:43.889 "name": "BaseBdev3", 00:21:43.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.889 "is_configured": false, 00:21:43.889 "data_offset": 0, 00:21:43.889 "data_size": 0 00:21:43.889 } 00:21:43.889 ] 00:21:43.889 }' 00:21:43.889 02:45:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.889 02:45:08 -- common/autotest_common.sh@10 -- # set +x 00:21:44.459 02:45:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:44.733 [2024-07-11 02:45:09.743092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.733 [2024-07-11 02:45:09.743332] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:21:44.733 [2024-07-11 02:45:09.743376] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:44.733 [2024-07-11 02:45:09.743688] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:21:44.733 [2024-07-11 02:45:09.744599] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:21:44.733 [2024-07-11 02:45:09.744746] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:21:44.733 [2024-07-11 02:45:09.745116] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.733 BaseBdev3 00:21:44.733 02:45:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:44.733 02:45:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:44.733 02:45:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:44.733 02:45:09 -- common/autotest_common.sh@889 -- # local i 00:21:44.733 02:45:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:44.733 02:45:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:44.733 02:45:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:44.999 02:45:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:45.257 [ 00:21:45.257 { 00:21:45.257 "name": "BaseBdev3", 00:21:45.257 "aliases": [ 00:21:45.257 "d9f8994f-2d51-49fb-8f97-d977ea7f9627" 00:21:45.257 ], 00:21:45.257 "product_name": "Malloc disk", 00:21:45.257 "block_size": 512, 00:21:45.257 "num_blocks": 65536, 00:21:45.257 "uuid": "d9f8994f-2d51-49fb-8f97-d977ea7f9627", 00:21:45.257 "assigned_rate_limits": { 00:21:45.257 
"rw_ios_per_sec": 0, 00:21:45.257 "rw_mbytes_per_sec": 0, 00:21:45.257 "r_mbytes_per_sec": 0, 00:21:45.257 "w_mbytes_per_sec": 0 00:21:45.257 }, 00:21:45.257 "claimed": true, 00:21:45.257 "claim_type": "exclusive_write", 00:21:45.257 "zoned": false, 00:21:45.257 "supported_io_types": { 00:21:45.257 "read": true, 00:21:45.257 "write": true, 00:21:45.257 "unmap": true, 00:21:45.257 "write_zeroes": true, 00:21:45.257 "flush": true, 00:21:45.257 "reset": true, 00:21:45.257 "compare": false, 00:21:45.257 "compare_and_write": false, 00:21:45.257 "abort": true, 00:21:45.257 "nvme_admin": false, 00:21:45.257 "nvme_io": false 00:21:45.257 }, 00:21:45.257 "memory_domains": [ 00:21:45.257 { 00:21:45.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.257 "dma_device_type": 2 00:21:45.257 } 00:21:45.257 ], 00:21:45.257 "driver_specific": {} 00:21:45.257 } 00:21:45.257 ] 00:21:45.257 02:45:10 -- common/autotest_common.sh@895 -- # return 0 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.257 02:45:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.258 02:45:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.258 02:45:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.516 02:45:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.516 "name": "Existed_Raid", 00:21:45.516 "uuid": "af8dd042-4471-4442-b446-8b401b3e803b", 00:21:45.516 "strip_size_kb": 64, 00:21:45.516 "state": "online", 00:21:45.516 "raid_level": "raid5f", 00:21:45.516 "superblock": false, 00:21:45.516 "num_base_bdevs": 3, 00:21:45.516 "num_base_bdevs_discovered": 3, 00:21:45.516 "num_base_bdevs_operational": 3, 00:21:45.516 "base_bdevs_list": [ 00:21:45.516 { 00:21:45.516 "name": "BaseBdev1", 00:21:45.516 "uuid": "0480d713-7967-4836-afb4-2b74a24cfece", 00:21:45.516 "is_configured": true, 00:21:45.516 "data_offset": 0, 00:21:45.516 "data_size": 65536 00:21:45.516 }, 00:21:45.516 { 00:21:45.516 "name": "BaseBdev2", 00:21:45.516 "uuid": "e65fa997-a578-482c-ba2a-45174c7028fe", 00:21:45.516 "is_configured": true, 00:21:45.516 "data_offset": 0, 00:21:45.516 "data_size": 65536 00:21:45.516 }, 00:21:45.516 { 00:21:45.516 "name": "BaseBdev3", 00:21:45.516 "uuid": "d9f8994f-2d51-49fb-8f97-d977ea7f9627", 00:21:45.516 "is_configured": true, 00:21:45.516 "data_offset": 0, 00:21:45.516 "data_size": 65536 00:21:45.516 } 00:21:45.516 ] 00:21:45.516 }' 00:21:45.516 02:45:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.516 02:45:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.082 02:45:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:21:46.339 [2024-07-11 02:45:11.255590] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.339 02:45:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.597 02:45:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.597 "name": "Existed_Raid", 00:21:46.597 "uuid": "af8dd042-4471-4442-b446-8b401b3e803b", 00:21:46.597 "strip_size_kb": 64, 00:21:46.597 "state": "online", 00:21:46.597 "raid_level": "raid5f", 00:21:46.597 "superblock": false, 00:21:46.597 "num_base_bdevs": 3, 00:21:46.597 "num_base_bdevs_discovered": 2, 00:21:46.597 "num_base_bdevs_operational": 2, 00:21:46.597 "base_bdevs_list": [ 00:21:46.597 { 00:21:46.597 "name": null, 00:21:46.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.597 "is_configured": false, 00:21:46.597 "data_offset": 0, 00:21:46.597 "data_size": 65536 00:21:46.597 }, 00:21:46.597 { 00:21:46.597 "name": "BaseBdev2", 00:21:46.597 "uuid": "e65fa997-a578-482c-ba2a-45174c7028fe", 00:21:46.597 "is_configured": true, 00:21:46.597 "data_offset": 0, 00:21:46.597 "data_size": 65536 00:21:46.597 }, 00:21:46.597 { 00:21:46.597 "name": "BaseBdev3", 00:21:46.597 "uuid": "d9f8994f-2d51-49fb-8f97-d977ea7f9627", 00:21:46.597 "is_configured": true, 00:21:46.597 "data_offset": 0, 00:21:46.597 "data_size": 65536 00:21:46.597 } 00:21:46.597 ] 00:21:46.597 }' 00:21:46.597 02:45:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.597 02:45:11 -- common/autotest_common.sh@10 -- # set +x 00:21:47.162 02:45:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:47.162 02:45:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:47.162 02:45:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.162 02:45:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:47.419 02:45:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:47.419 02:45:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.419 02:45:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:47.677 [2024-07-11 02:45:12.589625] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:47.677 [2024-07-11 02:45:12.589848] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:47.677 [2024-07-11 02:45:12.590041] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.677 02:45:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:47.677 02:45:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:47.677 02:45:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.677 02:45:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:47.935 02:45:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:47.935 02:45:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.935 02:45:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:47.935 [2024-07-11 02:45:12.987586] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:47.935 [2024-07-11 02:45:12.987775] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:21:47.935 02:45:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:47.935 02:45:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:47.935 02:45:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.935 02:45:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:48.193 02:45:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:48.193 02:45:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:48.193 02:45:13 -- bdev/bdev_raid.sh@287 -- # killprocess 139866 00:21:48.193 02:45:13 -- common/autotest_common.sh@926 -- # '[' -z 139866 ']' 00:21:48.193 02:45:13 -- common/autotest_common.sh@930 -- # kill -0 139866 00:21:48.193 02:45:13 -- common/autotest_common.sh@931 -- # uname 00:21:48.193 02:45:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.193 02:45:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139866 00:21:48.193 killing process with pid 139866 00:21:48.193 02:45:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:48.193 02:45:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:48.193 02:45:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139866' 00:21:48.193 02:45:13 -- common/autotest_common.sh@945 -- # kill 139866 00:21:48.193 02:45:13 -- common/autotest_common.sh@950 -- # wait 139866 00:21:48.193 [2024-07-11 02:45:13.224228] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:48.193 [2024-07-11 02:45:13.224336] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.451 ************************************ 00:21:48.451 END TEST raid5f_state_function_test 00:21:48.451 ************************************ 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:48.451 00:21:48.451 real 0m11.169s 00:21:48.451 user 0m20.764s 00:21:48.451 sys 0m1.327s 00:21:48.451 02:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.451 02:45:13 -- common/autotest_common.sh@10 -- # set +x 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:48.451 02:45:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:48.451 
02:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:48.451 02:45:13 -- common/autotest_common.sh@10 -- # set +x 00:21:48.451 ************************************ 00:21:48.451 START TEST raid5f_state_function_test_sb 00:21:48.451 ************************************ 00:21:48.451 02:45:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:48.451 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:48.452 Process raid pid: 140248 00:21:48.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=140248 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140248' 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140248 /var/tmp/spdk-raid.sock 00:21:48.452 02:45:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:48.452 02:45:13 -- common/autotest_common.sh@819 -- # '[' -z 140248 ']' 00:21:48.452 02:45:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:48.452 02:45:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.452 02:45:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
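[Editor's note] Every run_test in this log follows the same bring-up: launch the bdev_svc test app on a private Unix-domain RPC socket with bdev_raid debug logging enabled, record its pid (here 140248), and block in waitforlisten until the socket answers before any bdev_raid RPCs are issued. A minimal sketch of that bring-up, using only the bdev_svc and rpc.py invocations visible in the trace; the polling loop is an assumption about waitforlisten's behaviour (the trace only shows its max_retries=100 setup), not its literal implementation:

    # Start the SPDK bdev service app on the raid tests' private socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC socket accepts requests; rpc_get_methods is a
    # lightweight query that succeeds as soon as the app is listening.
    retries=100
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        (( retries-- > 0 )) || { echo "spdk-raid.sock never came up" >&2; exit 1; }
        sleep 0.1
    done
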
00:21:48.452 02:45:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.452 02:45:13 -- common/autotest_common.sh@10 -- # set +x 00:21:48.708 [2024-07-11 02:45:13.554171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:48.708 [2024-07-11 02:45:13.554550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.708 [2024-07-11 02:45:13.688972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.708 [2024-07-11 02:45:13.750692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.964 [2024-07-11 02:45:13.800934] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.528 02:45:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.528 02:45:14 -- common/autotest_common.sh@852 -- # return 0 00:21:49.528 02:45:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:49.786 [2024-07-11 02:45:14.692224] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.786 [2024-07-11 02:45:14.692452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.786 [2024-07-11 02:45:14.692550] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.786 [2024-07-11 02:45:14.692603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.786 [2024-07-11 02:45:14.692688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:49.786 [2024-07-11 02:45:14.692759] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.786 02:45:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.046 02:45:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.046 "name": "Existed_Raid", 00:21:50.046 "uuid": "89647674-e680-4b44-916b-686a07babc8a", 00:21:50.046 "strip_size_kb": 64, 00:21:50.046 "state": "configuring", 00:21:50.046 "raid_level": "raid5f", 00:21:50.046 "superblock": true, 00:21:50.046 "num_base_bdevs": 3, 00:21:50.046 "num_base_bdevs_discovered": 0, 00:21:50.046 "num_base_bdevs_operational": 3, 00:21:50.046 "base_bdevs_list": [ 00:21:50.046 { 00:21:50.046 "name": 
"BaseBdev1", 00:21:50.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.046 "is_configured": false, 00:21:50.046 "data_offset": 0, 00:21:50.046 "data_size": 0 00:21:50.046 }, 00:21:50.046 { 00:21:50.046 "name": "BaseBdev2", 00:21:50.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.046 "is_configured": false, 00:21:50.046 "data_offset": 0, 00:21:50.046 "data_size": 0 00:21:50.046 }, 00:21:50.046 { 00:21:50.046 "name": "BaseBdev3", 00:21:50.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.046 "is_configured": false, 00:21:50.046 "data_offset": 0, 00:21:50.046 "data_size": 0 00:21:50.046 } 00:21:50.046 ] 00:21:50.046 }' 00:21:50.046 02:45:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.046 02:45:14 -- common/autotest_common.sh@10 -- # set +x 00:21:50.613 02:45:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:50.870 [2024-07-11 02:45:15.760269] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.870 [2024-07-11 02:45:15.760469] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:50.870 02:45:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:50.870 [2024-07-11 02:45:15.948336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:50.870 [2024-07-11 02:45:15.948549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:50.870 [2024-07-11 02:45:15.948644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.870 [2024-07-11 02:45:15.948701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.870 [2024-07-11 02:45:15.948784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:50.870 [2024-07-11 02:45:15.948904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:50.870 02:45:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:51.127 [2024-07-11 02:45:16.210981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.127 BaseBdev1 00:21:51.385 02:45:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:51.385 02:45:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:51.385 02:45:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:51.385 02:45:16 -- common/autotest_common.sh@889 -- # local i 00:21:51.385 02:45:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:51.385 02:45:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:51.385 02:45:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:51.385 02:45:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:51.643 [ 00:21:51.643 { 00:21:51.643 "name": "BaseBdev1", 00:21:51.643 "aliases": [ 00:21:51.643 "3f153d2d-d3bb-4280-a618-84a50dbbc405" 00:21:51.643 ], 00:21:51.643 "product_name": "Malloc disk", 00:21:51.643 "block_size": 512, 00:21:51.643 
"num_blocks": 65536, 00:21:51.643 "uuid": "3f153d2d-d3bb-4280-a618-84a50dbbc405", 00:21:51.643 "assigned_rate_limits": { 00:21:51.643 "rw_ios_per_sec": 0, 00:21:51.643 "rw_mbytes_per_sec": 0, 00:21:51.643 "r_mbytes_per_sec": 0, 00:21:51.643 "w_mbytes_per_sec": 0 00:21:51.643 }, 00:21:51.643 "claimed": true, 00:21:51.643 "claim_type": "exclusive_write", 00:21:51.643 "zoned": false, 00:21:51.643 "supported_io_types": { 00:21:51.643 "read": true, 00:21:51.643 "write": true, 00:21:51.643 "unmap": true, 00:21:51.643 "write_zeroes": true, 00:21:51.643 "flush": true, 00:21:51.643 "reset": true, 00:21:51.643 "compare": false, 00:21:51.643 "compare_and_write": false, 00:21:51.643 "abort": true, 00:21:51.643 "nvme_admin": false, 00:21:51.643 "nvme_io": false 00:21:51.643 }, 00:21:51.643 "memory_domains": [ 00:21:51.643 { 00:21:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.643 "dma_device_type": 2 00:21:51.643 } 00:21:51.643 ], 00:21:51.643 "driver_specific": {} 00:21:51.643 } 00:21:51.643 ] 00:21:51.643 02:45:16 -- common/autotest_common.sh@895 -- # return 0 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.643 02:45:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.901 02:45:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.901 "name": "Existed_Raid", 00:21:51.901 "uuid": "ef968454-2095-48a8-8cdd-7475bb74b19f", 00:21:51.901 "strip_size_kb": 64, 00:21:51.901 "state": "configuring", 00:21:51.901 "raid_level": "raid5f", 00:21:51.901 "superblock": true, 00:21:51.901 "num_base_bdevs": 3, 00:21:51.901 "num_base_bdevs_discovered": 1, 00:21:51.901 "num_base_bdevs_operational": 3, 00:21:51.901 "base_bdevs_list": [ 00:21:51.901 { 00:21:51.901 "name": "BaseBdev1", 00:21:51.901 "uuid": "3f153d2d-d3bb-4280-a618-84a50dbbc405", 00:21:51.901 "is_configured": true, 00:21:51.901 "data_offset": 2048, 00:21:51.901 "data_size": 63488 00:21:51.901 }, 00:21:51.901 { 00:21:51.901 "name": "BaseBdev2", 00:21:51.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.901 "is_configured": false, 00:21:51.901 "data_offset": 0, 00:21:51.901 "data_size": 0 00:21:51.901 }, 00:21:51.901 { 00:21:51.901 "name": "BaseBdev3", 00:21:51.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.901 "is_configured": false, 00:21:51.901 "data_offset": 0, 00:21:51.901 "data_size": 0 00:21:51.901 } 00:21:51.901 ] 00:21:51.901 }' 00:21:51.901 02:45:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.901 02:45:16 -- common/autotest_common.sh@10 -- # set +x 00:21:52.465 02:45:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:52.753 [2024-07-11 02:45:17.751305] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.753 [2024-07-11 02:45:17.751554] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:21:52.753 02:45:17 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:52.753 02:45:17 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:53.010 02:45:17 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.269 BaseBdev1 00:21:53.269 02:45:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:53.269 02:45:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:53.269 02:45:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:53.269 02:45:18 -- common/autotest_common.sh@889 -- # local i 00:21:53.269 02:45:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:53.269 02:45:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:53.269 02:45:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.527 02:45:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.785 [ 00:21:53.785 { 00:21:53.785 "name": "BaseBdev1", 00:21:53.785 "aliases": [ 00:21:53.785 "2b14657b-120a-4b3b-be93-f8e4ed3ffc59" 00:21:53.785 ], 00:21:53.785 "product_name": "Malloc disk", 00:21:53.785 "block_size": 512, 00:21:53.785 "num_blocks": 65536, 00:21:53.785 "uuid": "2b14657b-120a-4b3b-be93-f8e4ed3ffc59", 00:21:53.785 "assigned_rate_limits": { 00:21:53.785 "rw_ios_per_sec": 0, 00:21:53.785 "rw_mbytes_per_sec": 0, 00:21:53.785 "r_mbytes_per_sec": 0, 00:21:53.785 "w_mbytes_per_sec": 0 00:21:53.785 }, 00:21:53.785 "claimed": false, 00:21:53.785 "zoned": false, 00:21:53.785 "supported_io_types": { 00:21:53.785 "read": true, 00:21:53.785 "write": true, 00:21:53.785 "unmap": true, 00:21:53.785 "write_zeroes": true, 00:21:53.785 "flush": true, 00:21:53.785 "reset": true, 00:21:53.785 "compare": false, 00:21:53.785 "compare_and_write": false, 00:21:53.785 "abort": true, 00:21:53.785 "nvme_admin": false, 00:21:53.785 "nvme_io": false 00:21:53.785 }, 00:21:53.785 "memory_domains": [ 00:21:53.785 { 00:21:53.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.785 "dma_device_type": 2 00:21:53.785 } 00:21:53.785 ], 00:21:53.785 "driver_specific": {} 00:21:53.785 } 00:21:53.785 ] 00:21:53.785 02:45:18 -- common/autotest_common.sh@895 -- # return 0 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:53.785 [2024-07-11 02:45:18.806223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.785 [2024-07-11 02:45:18.808010] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.785 [2024-07-11 02:45:18.808188] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.785 [2024-07-11 02:45:18.808282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.785 [2024-07-11 
02:45:18.808399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.785 02:45:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.043 02:45:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.043 "name": "Existed_Raid", 00:21:54.043 "uuid": "6c139161-3981-427d-8f37-aa92a43bca6f", 00:21:54.043 "strip_size_kb": 64, 00:21:54.043 "state": "configuring", 00:21:54.043 "raid_level": "raid5f", 00:21:54.043 "superblock": true, 00:21:54.043 "num_base_bdevs": 3, 00:21:54.043 "num_base_bdevs_discovered": 1, 00:21:54.043 "num_base_bdevs_operational": 3, 00:21:54.043 "base_bdevs_list": [ 00:21:54.043 { 00:21:54.043 "name": "BaseBdev1", 00:21:54.043 "uuid": "2b14657b-120a-4b3b-be93-f8e4ed3ffc59", 00:21:54.043 "is_configured": true, 00:21:54.043 "data_offset": 2048, 00:21:54.043 "data_size": 63488 00:21:54.043 }, 00:21:54.043 { 00:21:54.043 "name": "BaseBdev2", 00:21:54.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.043 "is_configured": false, 00:21:54.043 "data_offset": 0, 00:21:54.043 "data_size": 0 00:21:54.043 }, 00:21:54.043 { 00:21:54.043 "name": "BaseBdev3", 00:21:54.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.043 "is_configured": false, 00:21:54.043 "data_offset": 0, 00:21:54.043 "data_size": 0 00:21:54.043 } 00:21:54.043 ] 00:21:54.043 }' 00:21:54.043 02:45:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.043 02:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:54.976 02:45:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:54.976 [2024-07-11 02:45:19.933907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.976 BaseBdev2 00:21:54.976 02:45:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:54.976 02:45:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:54.976 02:45:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:54.976 02:45:19 -- common/autotest_common.sh@889 -- # local i 00:21:54.976 02:45:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:54.976 02:45:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:54.976 02:45:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:55.234 02:45:20 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:55.492 [ 00:21:55.492 { 00:21:55.492 "name": "BaseBdev2", 00:21:55.492 "aliases": [ 00:21:55.492 "3acc8512-57f2-4508-848d-ab092487f08c" 00:21:55.492 ], 00:21:55.492 "product_name": "Malloc disk", 00:21:55.492 "block_size": 512, 00:21:55.492 "num_blocks": 65536, 00:21:55.492 "uuid": "3acc8512-57f2-4508-848d-ab092487f08c", 00:21:55.492 "assigned_rate_limits": { 00:21:55.492 "rw_ios_per_sec": 0, 00:21:55.492 "rw_mbytes_per_sec": 0, 00:21:55.492 "r_mbytes_per_sec": 0, 00:21:55.492 "w_mbytes_per_sec": 0 00:21:55.492 }, 00:21:55.492 "claimed": true, 00:21:55.492 "claim_type": "exclusive_write", 00:21:55.492 "zoned": false, 00:21:55.492 "supported_io_types": { 00:21:55.492 "read": true, 00:21:55.492 "write": true, 00:21:55.492 "unmap": true, 00:21:55.492 "write_zeroes": true, 00:21:55.492 "flush": true, 00:21:55.492 "reset": true, 00:21:55.492 "compare": false, 00:21:55.492 "compare_and_write": false, 00:21:55.492 "abort": true, 00:21:55.492 "nvme_admin": false, 00:21:55.492 "nvme_io": false 00:21:55.492 }, 00:21:55.492 "memory_domains": [ 00:21:55.492 { 00:21:55.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.492 "dma_device_type": 2 00:21:55.492 } 00:21:55.492 ], 00:21:55.492 "driver_specific": {} 00:21:55.492 } 00:21:55.492 ] 00:21:55.492 02:45:20 -- common/autotest_common.sh@895 -- # return 0 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.492 02:45:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.750 02:45:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.750 "name": "Existed_Raid", 00:21:55.750 "uuid": "6c139161-3981-427d-8f37-aa92a43bca6f", 00:21:55.750 "strip_size_kb": 64, 00:21:55.750 "state": "configuring", 00:21:55.750 "raid_level": "raid5f", 00:21:55.750 "superblock": true, 00:21:55.750 "num_base_bdevs": 3, 00:21:55.750 "num_base_bdevs_discovered": 2, 00:21:55.750 "num_base_bdevs_operational": 3, 00:21:55.750 "base_bdevs_list": [ 00:21:55.750 { 00:21:55.750 "name": "BaseBdev1", 00:21:55.750 "uuid": "2b14657b-120a-4b3b-be93-f8e4ed3ffc59", 00:21:55.750 "is_configured": true, 00:21:55.750 "data_offset": 2048, 00:21:55.750 "data_size": 63488 00:21:55.750 }, 00:21:55.750 { 00:21:55.750 "name": "BaseBdev2", 00:21:55.750 "uuid": "3acc8512-57f2-4508-848d-ab092487f08c", 00:21:55.750 "is_configured": true, 00:21:55.750 "data_offset": 2048, 00:21:55.750 
"data_size": 63488 00:21:55.750 }, 00:21:55.750 { 00:21:55.750 "name": "BaseBdev3", 00:21:55.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.750 "is_configured": false, 00:21:55.750 "data_offset": 0, 00:21:55.750 "data_size": 0 00:21:55.750 } 00:21:55.750 ] 00:21:55.750 }' 00:21:55.750 02:45:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.750 02:45:20 -- common/autotest_common.sh@10 -- # set +x 00:21:56.331 02:45:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:56.588 [2024-07-11 02:45:21.506661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.588 [2024-07-11 02:45:21.507197] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:21:56.588 [2024-07-11 02:45:21.507317] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:56.588 BaseBdev3 00:21:56.588 [2024-07-11 02:45:21.507475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:21:56.588 [2024-07-11 02:45:21.508286] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:21:56.588 [2024-07-11 02:45:21.508432] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:21:56.588 [2024-07-11 02:45:21.508694] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.588 02:45:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:56.588 02:45:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:56.588 02:45:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:56.588 02:45:21 -- common/autotest_common.sh@889 -- # local i 00:21:56.588 02:45:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:56.588 02:45:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:56.588 02:45:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.848 02:45:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:56.848 [ 00:21:56.848 { 00:21:56.848 "name": "BaseBdev3", 00:21:56.848 "aliases": [ 00:21:56.848 "40b2bff3-618b-4933-9cd8-08b9cfa610c5" 00:21:56.848 ], 00:21:56.848 "product_name": "Malloc disk", 00:21:56.848 "block_size": 512, 00:21:56.848 "num_blocks": 65536, 00:21:56.848 "uuid": "40b2bff3-618b-4933-9cd8-08b9cfa610c5", 00:21:56.848 "assigned_rate_limits": { 00:21:56.848 "rw_ios_per_sec": 0, 00:21:56.848 "rw_mbytes_per_sec": 0, 00:21:56.848 "r_mbytes_per_sec": 0, 00:21:56.848 "w_mbytes_per_sec": 0 00:21:56.848 }, 00:21:56.848 "claimed": true, 00:21:56.848 "claim_type": "exclusive_write", 00:21:56.848 "zoned": false, 00:21:56.848 "supported_io_types": { 00:21:56.848 "read": true, 00:21:56.848 "write": true, 00:21:56.848 "unmap": true, 00:21:56.848 "write_zeroes": true, 00:21:56.848 "flush": true, 00:21:56.848 "reset": true, 00:21:56.848 "compare": false, 00:21:56.848 "compare_and_write": false, 00:21:56.848 "abort": true, 00:21:56.848 "nvme_admin": false, 00:21:56.848 "nvme_io": false 00:21:56.848 }, 00:21:56.848 "memory_domains": [ 00:21:56.848 { 00:21:56.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.848 "dma_device_type": 2 00:21:56.848 } 00:21:56.848 ], 00:21:56.848 "driver_specific": {} 00:21:56.848 } 00:21:56.848 ] 00:21:56.848 
02:45:21 -- common/autotest_common.sh@895 -- # return 0 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.848 02:45:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.105 02:45:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.105 "name": "Existed_Raid", 00:21:57.105 "uuid": "6c139161-3981-427d-8f37-aa92a43bca6f", 00:21:57.105 "strip_size_kb": 64, 00:21:57.105 "state": "online", 00:21:57.105 "raid_level": "raid5f", 00:21:57.105 "superblock": true, 00:21:57.105 "num_base_bdevs": 3, 00:21:57.105 "num_base_bdevs_discovered": 3, 00:21:57.105 "num_base_bdevs_operational": 3, 00:21:57.105 "base_bdevs_list": [ 00:21:57.105 { 00:21:57.105 "name": "BaseBdev1", 00:21:57.105 "uuid": "2b14657b-120a-4b3b-be93-f8e4ed3ffc59", 00:21:57.105 "is_configured": true, 00:21:57.105 "data_offset": 2048, 00:21:57.105 "data_size": 63488 00:21:57.105 }, 00:21:57.105 { 00:21:57.105 "name": "BaseBdev2", 00:21:57.105 "uuid": "3acc8512-57f2-4508-848d-ab092487f08c", 00:21:57.105 "is_configured": true, 00:21:57.105 "data_offset": 2048, 00:21:57.105 "data_size": 63488 00:21:57.105 }, 00:21:57.105 { 00:21:57.105 "name": "BaseBdev3", 00:21:57.105 "uuid": "40b2bff3-618b-4933-9cd8-08b9cfa610c5", 00:21:57.105 "is_configured": true, 00:21:57.105 "data_offset": 2048, 00:21:57.105 "data_size": 63488 00:21:57.105 } 00:21:57.105 ] 00:21:57.105 }' 00:21:57.105 02:45:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.105 02:45:22 -- common/autotest_common.sh@10 -- # set +x 00:21:58.038 02:45:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:58.038 [2024-07-11 02:45:23.014294] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.038 02:45:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.296 02:45:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.296 "name": "Existed_Raid", 00:21:58.296 "uuid": "6c139161-3981-427d-8f37-aa92a43bca6f", 00:21:58.296 "strip_size_kb": 64, 00:21:58.296 "state": "online", 00:21:58.296 "raid_level": "raid5f", 00:21:58.296 "superblock": true, 00:21:58.296 "num_base_bdevs": 3, 00:21:58.296 "num_base_bdevs_discovered": 2, 00:21:58.296 "num_base_bdevs_operational": 2, 00:21:58.296 "base_bdevs_list": [ 00:21:58.296 { 00:21:58.296 "name": null, 00:21:58.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.296 "is_configured": false, 00:21:58.296 "data_offset": 2048, 00:21:58.296 "data_size": 63488 00:21:58.296 }, 00:21:58.296 { 00:21:58.296 "name": "BaseBdev2", 00:21:58.296 "uuid": "3acc8512-57f2-4508-848d-ab092487f08c", 00:21:58.296 "is_configured": true, 00:21:58.296 "data_offset": 2048, 00:21:58.296 "data_size": 63488 00:21:58.296 }, 00:21:58.296 { 00:21:58.296 "name": "BaseBdev3", 00:21:58.296 "uuid": "40b2bff3-618b-4933-9cd8-08b9cfa610c5", 00:21:58.296 "is_configured": true, 00:21:58.296 "data_offset": 2048, 00:21:58.296 "data_size": 63488 00:21:58.296 } 00:21:58.296 ] 00:21:58.296 }' 00:21:58.297 02:45:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.297 02:45:23 -- common/autotest_common.sh@10 -- # set +x 00:21:58.865 02:45:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:58.865 02:45:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:58.865 02:45:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.865 02:45:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:59.123 02:45:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:59.123 02:45:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.123 02:45:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:59.380 [2024-07-11 02:45:24.247793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.380 [2024-07-11 02:45:24.248070] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.380 [2024-07-11 02:45:24.248257] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.380 02:45:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:59.380 02:45:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:59.380 02:45:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.380 02:45:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:59.638 02:45:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:59.638 02:45:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.638 02:45:24 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:59.896 [2024-07-11 02:45:24.756482] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:59.896 [2024-07-11 02:45:24.756797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:21:59.896 02:45:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:59.896 02:45:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:59.896 02:45:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.896 02:45:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:00.154 02:45:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:00.154 02:45:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:00.154 02:45:25 -- bdev/bdev_raid.sh@287 -- # killprocess 140248 00:22:00.154 02:45:25 -- common/autotest_common.sh@926 -- # '[' -z 140248 ']' 00:22:00.154 02:45:25 -- common/autotest_common.sh@930 -- # kill -0 140248 00:22:00.154 02:45:25 -- common/autotest_common.sh@931 -- # uname 00:22:00.154 02:45:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.154 02:45:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140248 00:22:00.154 killing process with pid 140248 00:22:00.154 02:45:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:00.154 02:45:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:00.154 02:45:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140248' 00:22:00.154 02:45:25 -- common/autotest_common.sh@945 -- # kill 140248 00:22:00.154 02:45:25 -- common/autotest_common.sh@950 -- # wait 140248 00:22:00.154 [2024-07-11 02:45:25.041747] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.154 [2024-07-11 02:45:25.041851] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.412 ************************************ 00:22:00.412 END TEST raid5f_state_function_test_sb 00:22:00.412 ************************************ 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:00.412 00:22:00.412 real 0m11.847s 00:22:00.412 user 0m22.068s 00:22:00.412 sys 0m1.292s 00:22:00.412 02:45:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.412 02:45:25 -- common/autotest_common.sh@10 -- # set +x 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:00.412 02:45:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:00.412 02:45:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.412 02:45:25 -- common/autotest_common.sh@10 -- # set +x 00:22:00.412 ************************************ 00:22:00.412 START TEST raid5f_superblock_test 00:22:00.412 ************************************ 00:22:00.412 02:45:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:00.412 02:45:25 
-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=140651 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 140651 /var/tmp/spdk-raid.sock 00:22:00.412 02:45:25 -- common/autotest_common.sh@819 -- # '[' -z 140651 ']' 00:22:00.412 02:45:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:00.412 02:45:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:00.413 02:45:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.413 02:45:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:00.413 02:45:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.413 02:45:25 -- common/autotest_common.sh@10 -- # set +x 00:22:00.413 [2024-07-11 02:45:25.450206] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:00.413 [2024-07-11 02:45:25.450429] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140651 ] 00:22:00.671 [2024-07-11 02:45:25.592818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.671 [2024-07-11 02:45:25.659068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.671 [2024-07-11 02:45:25.715292] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.605 02:45:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.605 02:45:26 -- common/autotest_common.sh@852 -- # return 0 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:01.605 malloc1 00:22:01.605 02:45:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:01.862 
[2024-07-11 02:45:26.731126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:01.862 [2024-07-11 02:45:26.731227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.862 [2024-07-11 02:45:26.731262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:22:01.862 [2024-07-11 02:45:26.731303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.862 [2024-07-11 02:45:26.733639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.862 [2024-07-11 02:45:26.733711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:01.862 pt1 00:22:01.862 02:45:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:01.862 02:45:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:01.862 02:45:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:01.862 02:45:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:01.863 02:45:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:01.863 02:45:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:01.863 02:45:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:01.863 02:45:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:01.863 02:45:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:02.121 malloc2 00:22:02.121 02:45:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.121 [2024-07-11 02:45:27.157030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.121 [2024-07-11 02:45:27.157107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.121 [2024-07-11 02:45:27.157141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:02.121 [2024-07-11 02:45:27.157178] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.121 [2024-07-11 02:45:27.159272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.121 [2024-07-11 02:45:27.159333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.121 pt2 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:02.121 02:45:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:02.379 malloc3 00:22:02.637 02:45:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:02.896 
[2024-07-11 02:45:27.737171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:02.896 [2024-07-11 02:45:27.737267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.896 [2024-07-11 02:45:27.737307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:02.896 [2024-07-11 02:45:27.737346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.896 [2024-07-11 02:45:27.739344] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.896 [2024-07-11 02:45:27.739392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:02.896 pt3 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:02.896 [2024-07-11 02:45:27.941290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:02.896 [2024-07-11 02:45:27.943114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.896 [2024-07-11 02:45:27.943196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:02.896 [2024-07-11 02:45:27.943415] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:22:02.896 [2024-07-11 02:45:27.943430] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:02.896 [2024-07-11 02:45:27.943600] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:22:02.896 [2024-07-11 02:45:27.944358] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:22:02.896 [2024-07-11 02:45:27.944381] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:22:02.896 [2024-07-11 02:45:27.944525] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.896 02:45:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.154 02:45:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.154 "name": "raid_bdev1", 00:22:03.154 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:03.154 "strip_size_kb": 64, 00:22:03.154 "state": "online", 00:22:03.154 "raid_level": "raid5f", 00:22:03.154 "superblock": true, 00:22:03.154 
"num_base_bdevs": 3, 00:22:03.154 "num_base_bdevs_discovered": 3, 00:22:03.154 "num_base_bdevs_operational": 3, 00:22:03.154 "base_bdevs_list": [ 00:22:03.154 { 00:22:03.154 "name": "pt1", 00:22:03.154 "uuid": "2caa835a-8c44-5c8f-b6f5-70fac8111162", 00:22:03.154 "is_configured": true, 00:22:03.154 "data_offset": 2048, 00:22:03.154 "data_size": 63488 00:22:03.154 }, 00:22:03.154 { 00:22:03.154 "name": "pt2", 00:22:03.154 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:03.154 "is_configured": true, 00:22:03.154 "data_offset": 2048, 00:22:03.154 "data_size": 63488 00:22:03.154 }, 00:22:03.154 { 00:22:03.154 "name": "pt3", 00:22:03.155 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:03.155 "is_configured": true, 00:22:03.155 "data_offset": 2048, 00:22:03.155 "data_size": 63488 00:22:03.155 } 00:22:03.155 ] 00:22:03.155 }' 00:22:03.155 02:45:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.155 02:45:28 -- common/autotest_common.sh@10 -- # set +x 00:22:03.723 02:45:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:03.723 02:45:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:03.981 [2024-07-11 02:45:29.038640] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.981 02:45:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 00:22:03.981 02:45:29 -- bdev/bdev_raid.sh@380 -- # '[' -z ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 ']' 00:22:03.981 02:45:29 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:04.240 [2024-07-11 02:45:29.230488] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.240 [2024-07-11 02:45:29.230511] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.240 [2024-07-11 02:45:29.230622] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.240 [2024-07-11 02:45:29.230761] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.240 [2024-07-11 02:45:29.230773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:22:04.240 02:45:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.240 02:45:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:04.498 02:45:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:04.498 02:45:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:04.498 02:45:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:04.498 02:45:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:04.756 02:45:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:04.756 02:45:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:04.756 02:45:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:04.756 02:45:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:05.015 02:45:30 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:05.015 02:45:30 -- 
bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:05.274 02:45:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:05.274 02:45:30 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.274 02:45:30 -- common/autotest_common.sh@640 -- # local es=0 00:22:05.274 02:45:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.274 02:45:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.274 02:45:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:05.274 02:45:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.274 02:45:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:05.274 02:45:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.274 02:45:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:05.274 02:45:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.274 02:45:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:05.274 02:45:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.533 [2024-07-11 02:45:30.526734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:05.533 [2024-07-11 02:45:30.528400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:05.533 [2024-07-11 02:45:30.528449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:05.533 [2024-07-11 02:45:30.528499] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:05.533 [2024-07-11 02:45:30.528584] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:05.533 [2024-07-11 02:45:30.528616] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:05.533 [2024-07-11 02:45:30.528690] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.533 [2024-07-11 02:45:30.528702] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:22:05.533 request: 00:22:05.533 { 00:22:05.533 "name": "raid_bdev1", 00:22:05.533 "raid_level": "raid5f", 00:22:05.533 "base_bdevs": [ 00:22:05.533 "malloc1", 00:22:05.533 "malloc2", 00:22:05.533 "malloc3" 00:22:05.533 ], 00:22:05.533 "superblock": false, 00:22:05.533 "strip_size_kb": 64, 00:22:05.533 "method": "bdev_raid_create", 00:22:05.533 "req_id": 1 00:22:05.533 } 00:22:05.533 Got JSON-RPC error response 00:22:05.533 response: 00:22:05.533 { 00:22:05.533 "code": -17, 00:22:05.533 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:05.533 } 00:22:05.533 02:45:30 -- common/autotest_common.sh@643 -- # es=1 00:22:05.533 02:45:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:05.533 02:45:30 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:05.533 02:45:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:05.533 02:45:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.533 02:45:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:05.792 02:45:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:05.792 02:45:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:05.792 02:45:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:06.050 [2024-07-11 02:45:30.926775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:06.050 [2024-07-11 02:45:30.926878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.050 [2024-07-11 02:45:30.926916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:06.050 [2024-07-11 02:45:30.926940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.050 [2024-07-11 02:45:30.929003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.051 [2024-07-11 02:45:30.929064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:06.051 [2024-07-11 02:45:30.929162] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:06.051 [2024-07-11 02:45:30.929274] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:06.051 pt1 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.051 02:45:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.309 02:45:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.309 "name": "raid_bdev1", 00:22:06.309 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:06.309 "strip_size_kb": 64, 00:22:06.309 "state": "configuring", 00:22:06.309 "raid_level": "raid5f", 00:22:06.309 "superblock": true, 00:22:06.309 "num_base_bdevs": 3, 00:22:06.309 "num_base_bdevs_discovered": 1, 00:22:06.309 "num_base_bdevs_operational": 3, 00:22:06.309 "base_bdevs_list": [ 00:22:06.309 { 00:22:06.309 "name": "pt1", 00:22:06.309 "uuid": "2caa835a-8c44-5c8f-b6f5-70fac8111162", 00:22:06.309 "is_configured": true, 00:22:06.309 "data_offset": 2048, 00:22:06.309 "data_size": 63488 00:22:06.309 }, 00:22:06.309 { 00:22:06.309 "name": null, 00:22:06.309 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:06.309 "is_configured": false, 00:22:06.309 
"data_offset": 2048, 00:22:06.309 "data_size": 63488 00:22:06.309 }, 00:22:06.309 { 00:22:06.309 "name": null, 00:22:06.309 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:06.309 "is_configured": false, 00:22:06.309 "data_offset": 2048, 00:22:06.309 "data_size": 63488 00:22:06.309 } 00:22:06.309 ] 00:22:06.309 }' 00:22:06.309 02:45:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.309 02:45:31 -- common/autotest_common.sh@10 -- # set +x 00:22:06.876 02:45:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:06.876 02:45:31 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:07.135 [2024-07-11 02:45:31.994963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:07.135 [2024-07-11 02:45:31.995062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.135 [2024-07-11 02:45:31.995113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:07.135 [2024-07-11 02:45:31.995162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.135 [2024-07-11 02:45:31.995708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.135 [2024-07-11 02:45:31.995764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:07.135 [2024-07-11 02:45:31.995892] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:07.135 [2024-07-11 02:45:31.995935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:07.135 pt2 00:22:07.135 02:45:32 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:07.394 [2024-07-11 02:45:32.247043] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.394 02:45:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.652 02:45:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.652 "name": "raid_bdev1", 00:22:07.652 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:07.652 "strip_size_kb": 64, 00:22:07.652 "state": "configuring", 00:22:07.652 "raid_level": "raid5f", 00:22:07.652 "superblock": true, 00:22:07.652 "num_base_bdevs": 3, 00:22:07.653 "num_base_bdevs_discovered": 1, 00:22:07.653 "num_base_bdevs_operational": 3, 00:22:07.653 "base_bdevs_list": [ 00:22:07.653 { 00:22:07.653 "name": "pt1", 00:22:07.653 "uuid": 
"2caa835a-8c44-5c8f-b6f5-70fac8111162", 00:22:07.653 "is_configured": true, 00:22:07.653 "data_offset": 2048, 00:22:07.653 "data_size": 63488 00:22:07.653 }, 00:22:07.653 { 00:22:07.653 "name": null, 00:22:07.653 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:07.653 "is_configured": false, 00:22:07.653 "data_offset": 2048, 00:22:07.653 "data_size": 63488 00:22:07.653 }, 00:22:07.653 { 00:22:07.653 "name": null, 00:22:07.653 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:07.653 "is_configured": false, 00:22:07.653 "data_offset": 2048, 00:22:07.653 "data_size": 63488 00:22:07.653 } 00:22:07.653 ] 00:22:07.653 }' 00:22:07.653 02:45:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.653 02:45:32 -- common/autotest_common.sh@10 -- # set +x 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:08.220 [2024-07-11 02:45:33.287199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:08.220 [2024-07-11 02:45:33.287279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.220 [2024-07-11 02:45:33.287313] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:08.220 [2024-07-11 02:45:33.287338] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.220 [2024-07-11 02:45:33.287758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.220 [2024-07-11 02:45:33.287792] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:08.220 [2024-07-11 02:45:33.287877] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:08.220 [2024-07-11 02:45:33.287904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:08.220 pt2 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.220 02:45:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:08.479 [2024-07-11 02:45:33.527281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:08.479 [2024-07-11 02:45:33.527374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.479 [2024-07-11 02:45:33.527409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:08.479 [2024-07-11 02:45:33.527435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.479 [2024-07-11 02:45:33.527922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.479 [2024-07-11 02:45:33.527960] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:08.479 [2024-07-11 02:45:33.528053] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:08.479 [2024-07-11 02:45:33.528088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:08.479 [2024-07-11 02:45:33.528234] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 
00:22:08.479 [2024-07-11 02:45:33.528257] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:08.479 [2024-07-11 02:45:33.528325] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:08.479 [2024-07-11 02:45:33.529060] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:08.479 [2024-07-11 02:45:33.529085] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:08.479 [2024-07-11 02:45:33.529240] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.479 pt3 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.479 02:45:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.739 02:45:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.739 "name": "raid_bdev1", 00:22:08.739 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:08.739 "strip_size_kb": 64, 00:22:08.739 "state": "online", 00:22:08.739 "raid_level": "raid5f", 00:22:08.739 "superblock": true, 00:22:08.739 "num_base_bdevs": 3, 00:22:08.739 "num_base_bdevs_discovered": 3, 00:22:08.739 "num_base_bdevs_operational": 3, 00:22:08.739 "base_bdevs_list": [ 00:22:08.739 { 00:22:08.739 "name": "pt1", 00:22:08.739 "uuid": "2caa835a-8c44-5c8f-b6f5-70fac8111162", 00:22:08.739 "is_configured": true, 00:22:08.739 "data_offset": 2048, 00:22:08.739 "data_size": 63488 00:22:08.739 }, 00:22:08.739 { 00:22:08.739 "name": "pt2", 00:22:08.739 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:08.739 "is_configured": true, 00:22:08.739 "data_offset": 2048, 00:22:08.739 "data_size": 63488 00:22:08.739 }, 00:22:08.739 { 00:22:08.739 "name": "pt3", 00:22:08.739 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:08.739 "is_configured": true, 00:22:08.739 "data_offset": 2048, 00:22:08.739 "data_size": 63488 00:22:08.739 } 00:22:08.739 ] 00:22:08.739 }' 00:22:08.739 02:45:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.739 02:45:33 -- common/autotest_common.sh@10 -- # set +x 00:22:09.304 02:45:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:09.304 02:45:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:09.563 [2024-07-11 02:45:34.539611] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.563 02:45:34 -- bdev/bdev_raid.sh@430 -- # '[' 
ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 '!=' ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 ']' 00:22:09.563 02:45:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:09.563 02:45:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:09.563 02:45:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:09.563 02:45:34 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:09.820 [2024-07-11 02:45:34.787591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.820 02:45:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.821 02:45:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.079 02:45:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.079 "name": "raid_bdev1", 00:22:10.079 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:10.079 "strip_size_kb": 64, 00:22:10.079 "state": "online", 00:22:10.079 "raid_level": "raid5f", 00:22:10.079 "superblock": true, 00:22:10.079 "num_base_bdevs": 3, 00:22:10.079 "num_base_bdevs_discovered": 2, 00:22:10.079 "num_base_bdevs_operational": 2, 00:22:10.079 "base_bdevs_list": [ 00:22:10.079 { 00:22:10.079 "name": null, 00:22:10.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.079 "is_configured": false, 00:22:10.079 "data_offset": 2048, 00:22:10.079 "data_size": 63488 00:22:10.079 }, 00:22:10.079 { 00:22:10.079 "name": "pt2", 00:22:10.079 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:10.079 "is_configured": true, 00:22:10.079 "data_offset": 2048, 00:22:10.079 "data_size": 63488 00:22:10.079 }, 00:22:10.079 { 00:22:10.079 "name": "pt3", 00:22:10.079 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:10.079 "is_configured": true, 00:22:10.079 "data_offset": 2048, 00:22:10.079 "data_size": 63488 00:22:10.079 } 00:22:10.079 ] 00:22:10.079 }' 00:22:10.079 02:45:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.079 02:45:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.646 02:45:35 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:10.905 [2024-07-11 02:45:35.787797] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:10.905 [2024-07-11 02:45:35.787835] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.905 [2024-07-11 02:45:35.787932] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.905 [2024-07-11 02:45:35.788048] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.905 
[2024-07-11 02:45:35.788059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:10.905 02:45:35 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.905 02:45:35 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:11.163 02:45:36 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:11.163 02:45:36 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:11.163 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:11.163 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:11.163 02:45:36 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:11.421 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:11.421 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:11.421 02:45:36 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:11.680 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:11.680 02:45:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:11.680 02:45:36 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:11.680 02:45:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:11.680 02:45:36 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:11.939 [2024-07-11 02:45:36.803860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:11.939 [2024-07-11 02:45:36.803948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.939 [2024-07-11 02:45:36.803986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:11.939 [2024-07-11 02:45:36.804008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.939 [2024-07-11 02:45:36.806275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.939 [2024-07-11 02:45:36.806345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:11.939 [2024-07-11 02:45:36.806519] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:11.939 [2024-07-11 02:45:36.806566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.939 pt2 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.939 02:45:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:11.939 02:45:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.939 "name": "raid_bdev1", 00:22:11.939 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:11.939 "strip_size_kb": 64, 00:22:11.939 "state": "configuring", 00:22:11.939 "raid_level": "raid5f", 00:22:11.939 "superblock": true, 00:22:11.939 "num_base_bdevs": 3, 00:22:11.939 "num_base_bdevs_discovered": 1, 00:22:11.939 "num_base_bdevs_operational": 2, 00:22:11.939 "base_bdevs_list": [ 00:22:11.939 { 00:22:11.939 "name": null, 00:22:11.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.939 "is_configured": false, 00:22:11.939 "data_offset": 2048, 00:22:11.939 "data_size": 63488 00:22:11.939 }, 00:22:11.939 { 00:22:11.939 "name": "pt2", 00:22:11.939 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:11.939 "is_configured": true, 00:22:11.939 "data_offset": 2048, 00:22:11.939 "data_size": 63488 00:22:11.939 }, 00:22:11.939 { 00:22:11.939 "name": null, 00:22:11.939 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:11.939 "is_configured": false, 00:22:11.939 "data_offset": 2048, 00:22:11.939 "data_size": 63488 00:22:11.939 } 00:22:11.939 ] 00:22:11.939 }' 00:22:11.939 02:45:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.939 02:45:37 -- common/autotest_common.sh@10 -- # set +x 00:22:12.875 02:45:37 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:12.875 02:45:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:12.875 02:45:37 -- bdev/bdev_raid.sh@462 -- # i=2 00:22:12.875 02:45:37 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.875 [2024-07-11 02:45:37.960195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.875 [2024-07-11 02:45:37.960289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.875 [2024-07-11 02:45:37.960332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:12.875 [2024-07-11 02:45:37.960353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.875 [2024-07-11 02:45:37.960848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.875 [2024-07-11 02:45:37.960892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.875 [2024-07-11 02:45:37.961019] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:12.875 [2024-07-11 02:45:37.961049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:12.875 [2024-07-11 02:45:37.961168] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:22:12.875 [2024-07-11 02:45:37.961183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:12.875 [2024-07-11 02:45:37.961265] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:22:12.875 [2024-07-11 02:45:37.962033] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:22:12.875 [2024-07-11 02:45:37.962057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:22:12.875 [2024-07-11 02:45:37.962319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.875 pt3 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@466 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.133 02:45:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.134 02:45:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.134 "name": "raid_bdev1", 00:22:13.134 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:13.134 "strip_size_kb": 64, 00:22:13.134 "state": "online", 00:22:13.134 "raid_level": "raid5f", 00:22:13.134 "superblock": true, 00:22:13.134 "num_base_bdevs": 3, 00:22:13.134 "num_base_bdevs_discovered": 2, 00:22:13.134 "num_base_bdevs_operational": 2, 00:22:13.134 "base_bdevs_list": [ 00:22:13.134 { 00:22:13.134 "name": null, 00:22:13.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.134 "is_configured": false, 00:22:13.134 "data_offset": 2048, 00:22:13.134 "data_size": 63488 00:22:13.134 }, 00:22:13.134 { 00:22:13.134 "name": "pt2", 00:22:13.134 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:13.134 "is_configured": true, 00:22:13.134 "data_offset": 2048, 00:22:13.134 "data_size": 63488 00:22:13.134 }, 00:22:13.134 { 00:22:13.134 "name": "pt3", 00:22:13.134 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:13.134 "is_configured": true, 00:22:13.134 "data_offset": 2048, 00:22:13.134 "data_size": 63488 00:22:13.134 } 00:22:13.134 ] 00:22:13.134 }' 00:22:13.134 02:45:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.134 02:45:38 -- common/autotest_common.sh@10 -- # set +x 00:22:14.068 02:45:38 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:22:14.068 02:45:38 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:14.068 [2024-07-11 02:45:39.030310] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.068 [2024-07-11 02:45:39.030377] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.068 [2024-07-11 02:45:39.030496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.068 [2024-07-11 02:45:39.030580] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.068 [2024-07-11 02:45:39.030593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:22:14.068 02:45:39 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.068 02:45:39 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:14.326 02:45:39 -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:14.326 [2024-07-11 02:45:39.397928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:14.326 [2024-07-11 02:45:39.398005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.326 [2024-07-11 02:45:39.398047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:14.326 [2024-07-11 02:45:39.398067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.326 [2024-07-11 02:45:39.400216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.326 [2024-07-11 02:45:39.400262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:14.326 [2024-07-11 02:45:39.400356] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:14.326 [2024-07-11 02:45:39.400400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:14.326 pt1 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.326 02:45:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.327 02:45:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.327 02:45:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.585 02:45:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.585 "name": "raid_bdev1", 00:22:14.585 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:14.585 "strip_size_kb": 64, 00:22:14.585 "state": "configuring", 00:22:14.585 "raid_level": "raid5f", 00:22:14.585 "superblock": true, 00:22:14.585 "num_base_bdevs": 3, 00:22:14.585 "num_base_bdevs_discovered": 1, 00:22:14.585 "num_base_bdevs_operational": 3, 00:22:14.585 "base_bdevs_list": [ 00:22:14.585 { 00:22:14.585 "name": "pt1", 00:22:14.585 "uuid": "2caa835a-8c44-5c8f-b6f5-70fac8111162", 00:22:14.585 "is_configured": true, 00:22:14.585 "data_offset": 2048, 00:22:14.585 "data_size": 63488 00:22:14.585 }, 00:22:14.585 { 00:22:14.585 "name": null, 00:22:14.585 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:14.585 "is_configured": false, 00:22:14.585 "data_offset": 2048, 00:22:14.585 "data_size": 63488 00:22:14.585 }, 00:22:14.585 { 00:22:14.585 "name": null, 00:22:14.585 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:14.585 "is_configured": false, 00:22:14.585 "data_offset": 2048, 00:22:14.585 "data_size": 63488 00:22:14.585 } 00:22:14.585 ] 00:22:14.585 }' 00:22:14.585 02:45:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.585 02:45:39 -- common/autotest_common.sh@10 -- # set +x 00:22:15.151 02:45:40 -- 
bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:15.151 02:45:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:15.151 02:45:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:15.409 02:45:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:15.409 02:45:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:15.409 02:45:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:15.667 02:45:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:15.667 02:45:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:15.667 02:45:40 -- bdev/bdev_raid.sh@489 -- # i=2 00:22:15.667 02:45:40 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:15.926 [2024-07-11 02:45:40.849911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:15.926 [2024-07-11 02:45:40.850075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.926 [2024-07-11 02:45:40.850112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:15.926 [2024-07-11 02:45:40.850139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.926 [2024-07-11 02:45:40.850618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.926 [2024-07-11 02:45:40.850662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:15.926 [2024-07-11 02:45:40.850755] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:15.926 [2024-07-11 02:45:40.850770] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:15.926 [2024-07-11 02:45:40.850778] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.926 [2024-07-11 02:45:40.850811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:22:15.926 [2024-07-11 02:45:40.850865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:15.926 pt3 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.926 02:45:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.184 02:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.184 "name": "raid_bdev1", 
00:22:16.184 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:16.184 "strip_size_kb": 64, 00:22:16.184 "state": "configuring", 00:22:16.184 "raid_level": "raid5f", 00:22:16.184 "superblock": true, 00:22:16.184 "num_base_bdevs": 3, 00:22:16.184 "num_base_bdevs_discovered": 1, 00:22:16.184 "num_base_bdevs_operational": 2, 00:22:16.184 "base_bdevs_list": [ 00:22:16.184 { 00:22:16.184 "name": null, 00:22:16.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.184 "is_configured": false, 00:22:16.184 "data_offset": 2048, 00:22:16.184 "data_size": 63488 00:22:16.184 }, 00:22:16.184 { 00:22:16.184 "name": null, 00:22:16.184 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:16.184 "is_configured": false, 00:22:16.184 "data_offset": 2048, 00:22:16.184 "data_size": 63488 00:22:16.184 }, 00:22:16.184 { 00:22:16.184 "name": "pt3", 00:22:16.184 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:16.184 "is_configured": true, 00:22:16.184 "data_offset": 2048, 00:22:16.184 "data_size": 63488 00:22:16.184 } 00:22:16.184 ] 00:22:16.184 }' 00:22:16.184 02:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.184 02:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:16.750 02:45:41 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:16.750 02:45:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:16.750 02:45:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.008 [2024-07-11 02:45:41.874249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.008 [2024-07-11 02:45:41.874369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.008 [2024-07-11 02:45:41.874406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:17.008 [2024-07-11 02:45:41.874433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.008 [2024-07-11 02:45:41.874951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.008 [2024-07-11 02:45:41.875026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.008 [2024-07-11 02:45:41.875110] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:17.008 [2024-07-11 02:45:41.875138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.008 [2024-07-11 02:45:41.875272] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:22:17.008 [2024-07-11 02:45:41.875286] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.008 [2024-07-11 02:45:41.875374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:22:17.008 [2024-07-11 02:45:41.876180] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:22:17.008 [2024-07-11 02:45:41.876206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:22:17.008 [2024-07-11 02:45:41.876417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.008 pt2 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.008 02:45:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.008 02:45:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.008 "name": "raid_bdev1", 00:22:17.008 "uuid": "ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8", 00:22:17.008 "strip_size_kb": 64, 00:22:17.008 "state": "online", 00:22:17.008 "raid_level": "raid5f", 00:22:17.008 "superblock": true, 00:22:17.008 "num_base_bdevs": 3, 00:22:17.008 "num_base_bdevs_discovered": 2, 00:22:17.008 "num_base_bdevs_operational": 2, 00:22:17.008 "base_bdevs_list": [ 00:22:17.008 { 00:22:17.008 "name": null, 00:22:17.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.008 "is_configured": false, 00:22:17.008 "data_offset": 2048, 00:22:17.008 "data_size": 63488 00:22:17.008 }, 00:22:17.008 { 00:22:17.008 "name": "pt2", 00:22:17.008 "uuid": "40cbb412-0992-5f96-a1ba-96c8368fe22f", 00:22:17.008 "is_configured": true, 00:22:17.008 "data_offset": 2048, 00:22:17.008 "data_size": 63488 00:22:17.008 }, 00:22:17.008 { 00:22:17.008 "name": "pt3", 00:22:17.008 "uuid": "4f273448-eae2-5922-ace2-66c4e0b17d27", 00:22:17.008 "is_configured": true, 00:22:17.008 "data_offset": 2048, 00:22:17.008 "data_size": 63488 00:22:17.008 } 00:22:17.008 ] 00:22:17.008 }' 00:22:17.008 02:45:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.008 02:45:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.944 02:45:42 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:17.944 02:45:42 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:17.944 [2024-07-11 02:45:42.894666] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.944 02:45:42 -- bdev/bdev_raid.sh@506 -- # '[' ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 '!=' ffaf8f16-e9de-4c88-aec4-1efe7e9cfcc8 ']' 00:22:17.944 02:45:42 -- bdev/bdev_raid.sh@511 -- # killprocess 140651 00:22:17.944 02:45:42 -- common/autotest_common.sh@926 -- # '[' -z 140651 ']' 00:22:17.944 02:45:42 -- common/autotest_common.sh@930 -- # kill -0 140651 00:22:17.944 02:45:42 -- common/autotest_common.sh@931 -- # uname 00:22:17.944 02:45:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.944 02:45:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140651 00:22:17.944 killing process with pid 140651 00:22:17.944 02:45:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:17.944 02:45:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:17.944 02:45:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140651' 00:22:17.944 02:45:42 -- common/autotest_common.sh@945 -- # kill 
140651 00:22:17.944 02:45:42 -- common/autotest_common.sh@950 -- # wait 140651 00:22:17.944 [2024-07-11 02:45:42.930190] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.944 [2024-07-11 02:45:42.930291] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.944 [2024-07-11 02:45:42.930411] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.944 [2024-07-11 02:45:42.930431] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:22:17.944 [2024-07-11 02:45:42.959574] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.203 ************************************ 00:22:18.203 END TEST raid5f_superblock_test 00:22:18.203 ************************************ 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:18.203 00:22:18.203 real 0m17.782s 00:22:18.203 user 0m33.760s 00:22:18.203 sys 0m2.059s 00:22:18.203 02:45:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.203 02:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:22:18.203 02:45:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:18.203 02:45:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:18.203 02:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:18.203 ************************************ 00:22:18.203 START TEST raid5f_rebuild_test 00:22:18.203 ************************************ 00:22:18.203 02:45:43 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:18.203 02:45:43 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@544 -- # raid_pid=141277 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141277 /var/tmp/spdk-raid.sock 00:22:18.203 02:45:43 -- common/autotest_common.sh@819 -- # '[' -z 141277 ']' 00:22:18.203 02:45:43 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:18.203 02:45:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:18.203 02:45:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:18.203 02:45:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:18.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:18.203 02:45:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:18.203 02:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:18.462 [2024-07-11 02:45:43.309711] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:18.462 [2024-07-11 02:45:43.309960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141277 ] 00:22:18.462 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:18.462 Zero copy mechanism will not be used. 
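At this point the harness parks in waitforlisten until the freshly launched bdevperf process answers on /var/tmp/spdk-raid.sock. A minimal sketch of what that wait amounts to, assuming the helper simply polls the RPC socket in a loop (the real common/autotest_common.sh version also honors the max_retries=100 counter and the pid it was handed, both visible in the trace above):

# hypothetical reimplementation for illustration only, not the verbatim helper;
# rpc_get_methods is a cheap built-in SPDK RPC that only succeeds once the
# target app is up and listening on the UNIX domain socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

Every subsequent step in the test (bdev_malloc_create, bdev_raid_create, bdev_raid_get_bdevs, the nbd_* calls) goes through the same rpc.py -s /var/tmp/spdk-raid.sock client once this wait returns.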
00:22:18.462 [2024-07-11 02:45:43.458749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.462 [2024-07-11 02:45:43.527685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.719 [2024-07-11 02:45:43.584050] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.285 02:45:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.285 02:45:44 -- common/autotest_common.sh@852 -- # return 0 00:22:19.285 02:45:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.285 02:45:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:19.285 02:45:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:19.543 BaseBdev1 00:22:19.543 02:45:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.543 02:45:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:19.543 02:45:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:19.543 BaseBdev2 00:22:19.801 02:45:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.801 02:45:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:19.801 02:45:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:19.801 BaseBdev3 00:22:19.802 02:45:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:20.059 spare_malloc 00:22:20.059 02:45:45 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:20.318 spare_delay 00:22:20.318 02:45:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:20.625 [2024-07-11 02:45:45.440604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.625 [2024-07-11 02:45:45.440716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.625 [2024-07-11 02:45:45.440752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:20.625 [2024-07-11 02:45:45.440797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.625 [2024-07-11 02:45:45.442998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.625 [2024-07-11 02:45:45.443061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.625 spare 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:20.625 [2024-07-11 02:45:45.628701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.625 [2024-07-11 02:45:45.630414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.625 [2024-07-11 02:45:45.630463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:20.625 [2024-07-11 02:45:45.630531] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:22:20.625 
[2024-07-11 02:45:45.630542] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:20.625 [2024-07-11 02:45:45.630719] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:20.625 [2024-07-11 02:45:45.631417] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:22:20.625 [2024-07-11 02:45:45.631438] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:22:20.625 [2024-07-11 02:45:45.631633] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.625 02:45:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.912 02:45:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.912 "name": "raid_bdev1", 00:22:20.912 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:20.912 "strip_size_kb": 64, 00:22:20.912 "state": "online", 00:22:20.912 "raid_level": "raid5f", 00:22:20.912 "superblock": false, 00:22:20.912 "num_base_bdevs": 3, 00:22:20.912 "num_base_bdevs_discovered": 3, 00:22:20.912 "num_base_bdevs_operational": 3, 00:22:20.912 "base_bdevs_list": [ 00:22:20.912 { 00:22:20.912 "name": "BaseBdev1", 00:22:20.912 "uuid": "39c5f79b-7f77-474c-9ce7-6de04d627bdd", 00:22:20.912 "is_configured": true, 00:22:20.912 "data_offset": 0, 00:22:20.912 "data_size": 65536 00:22:20.912 }, 00:22:20.912 { 00:22:20.912 "name": "BaseBdev2", 00:22:20.912 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:20.912 "is_configured": true, 00:22:20.912 "data_offset": 0, 00:22:20.912 "data_size": 65536 00:22:20.912 }, 00:22:20.912 { 00:22:20.912 "name": "BaseBdev3", 00:22:20.912 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:20.912 "is_configured": true, 00:22:20.912 "data_offset": 0, 00:22:20.912 "data_size": 65536 00:22:20.912 } 00:22:20.912 ] 00:22:20.912 }' 00:22:20.912 02:45:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.912 02:45:45 -- common/autotest_common.sh@10 -- # set +x 00:22:21.480 02:45:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:21.480 02:45:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:21.739 [2024-07-11 02:45:46.685685] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.739 02:45:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:22:21.739 02:45:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:21.739 02:45:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:21.996 02:45:46 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:21.996 02:45:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:21.996 02:45:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:21.996 02:45:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@12 -- # local i 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.996 02:45:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:22.256 [2024-07-11 02:45:47.117604] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:22.256 /dev/nbd0 00:22:22.256 02:45:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:22.256 02:45:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:22.256 02:45:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:22.256 02:45:47 -- common/autotest_common.sh@857 -- # local i 00:22:22.256 02:45:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:22.256 02:45:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:22.256 02:45:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:22.256 02:45:47 -- common/autotest_common.sh@861 -- # break 00:22:22.256 02:45:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:22.256 02:45:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:22.256 02:45:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.256 1+0 records in 00:22:22.256 1+0 records out 00:22:22.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636062 s, 6.4 MB/s 00:22:22.257 02:45:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.257 02:45:47 -- common/autotest_common.sh@874 -- # size=4096 00:22:22.257 02:45:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.257 02:45:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:22.257 02:45:47 -- common/autotest_common.sh@877 -- # return 0 00:22:22.257 02:45:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:22.257 02:45:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:22.257 02:45:47 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:22.257 02:45:47 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:22.257 02:45:47 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:22.257 02:45:47 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:22.514 512+0 records in 00:22:22.514 512+0 records out 00:22:22.514 67108864 bytes (67 MB, 64 MiB) copied, 0.335468 s, 200 MB/s 00:22:22.514 02:45:47 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:22.514 02:45:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:22.514 02:45:47 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:22.514 02:45:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.514 02:45:47 -- bdev/nbd_common.sh@51 -- # local i 00:22:22.514 02:45:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.514 02:45:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:22.772 [2024-07-11 02:45:47.713730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@41 -- # break 00:22:22.772 02:45:47 -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.773 02:45:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:23.030 [2024-07-11 02:45:48.073313] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.030 02:45:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.289 02:45:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.289 "name": "raid_bdev1", 00:22:23.289 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:23.289 "strip_size_kb": 64, 00:22:23.289 "state": "online", 00:22:23.289 "raid_level": "raid5f", 00:22:23.289 "superblock": false, 00:22:23.289 "num_base_bdevs": 3, 00:22:23.289 "num_base_bdevs_discovered": 2, 00:22:23.289 "num_base_bdevs_operational": 2, 00:22:23.289 "base_bdevs_list": [ 00:22:23.289 { 00:22:23.289 "name": null, 00:22:23.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.289 "is_configured": false, 00:22:23.289 "data_offset": 0, 00:22:23.289 "data_size": 65536 00:22:23.289 }, 00:22:23.289 { 00:22:23.289 "name": "BaseBdev2", 00:22:23.289 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:23.289 "is_configured": true, 00:22:23.289 "data_offset": 0, 00:22:23.289 "data_size": 65536 00:22:23.289 }, 00:22:23.289 
{ 00:22:23.289 "name": "BaseBdev3", 00:22:23.289 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:23.289 "is_configured": true, 00:22:23.289 "data_offset": 0, 00:22:23.289 "data_size": 65536 00:22:23.289 } 00:22:23.289 ] 00:22:23.289 }' 00:22:23.289 02:45:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.289 02:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:24.223 02:45:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.223 [2024-07-11 02:45:49.193504] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:24.223 [2024-07-11 02:45:49.193570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.223 [2024-07-11 02:45:49.198293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029940 00:22:24.223 [2024-07-11 02:45:49.200569] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.223 02:45:49 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.159 02:45:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.417 02:45:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.417 "name": "raid_bdev1", 00:22:25.417 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:25.417 "strip_size_kb": 64, 00:22:25.417 "state": "online", 00:22:25.417 "raid_level": "raid5f", 00:22:25.417 "superblock": false, 00:22:25.417 "num_base_bdevs": 3, 00:22:25.417 "num_base_bdevs_discovered": 3, 00:22:25.417 "num_base_bdevs_operational": 3, 00:22:25.417 "process": { 00:22:25.417 "type": "rebuild", 00:22:25.417 "target": "spare", 00:22:25.417 "progress": { 00:22:25.417 "blocks": 24576, 00:22:25.417 "percent": 18 00:22:25.417 } 00:22:25.417 }, 00:22:25.417 "base_bdevs_list": [ 00:22:25.417 { 00:22:25.417 "name": "spare", 00:22:25.417 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:25.417 "is_configured": true, 00:22:25.417 "data_offset": 0, 00:22:25.417 "data_size": 65536 00:22:25.417 }, 00:22:25.417 { 00:22:25.417 "name": "BaseBdev2", 00:22:25.417 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:25.417 "is_configured": true, 00:22:25.417 "data_offset": 0, 00:22:25.417 "data_size": 65536 00:22:25.417 }, 00:22:25.417 { 00:22:25.417 "name": "BaseBdev3", 00:22:25.417 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:25.417 "is_configured": true, 00:22:25.417 "data_offset": 0, 00:22:25.417 "data_size": 65536 00:22:25.417 } 00:22:25.417 ] 00:22:25.417 }' 00:22:25.417 02:45:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.417 02:45:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.417 02:45:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.676 02:45:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.676 02:45:50 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:25.676 [2024-07-11 02:45:50.758606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:25.934 [2024-07-11 02:45:50.812713] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:25.934 [2024-07-11 02:45:50.812813] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.934 02:45:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.193 02:45:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.193 "name": "raid_bdev1", 00:22:26.193 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:26.193 "strip_size_kb": 64, 00:22:26.193 "state": "online", 00:22:26.193 "raid_level": "raid5f", 00:22:26.193 "superblock": false, 00:22:26.193 "num_base_bdevs": 3, 00:22:26.193 "num_base_bdevs_discovered": 2, 00:22:26.193 "num_base_bdevs_operational": 2, 00:22:26.193 "base_bdevs_list": [ 00:22:26.193 { 00:22:26.193 "name": null, 00:22:26.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.193 "is_configured": false, 00:22:26.193 "data_offset": 0, 00:22:26.193 "data_size": 65536 00:22:26.193 }, 00:22:26.193 { 00:22:26.193 "name": "BaseBdev2", 00:22:26.193 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:26.193 "is_configured": true, 00:22:26.193 "data_offset": 0, 00:22:26.193 "data_size": 65536 00:22:26.193 }, 00:22:26.193 { 00:22:26.193 "name": "BaseBdev3", 00:22:26.193 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:26.193 "is_configured": true, 00:22:26.193 "data_offset": 0, 00:22:26.193 "data_size": 65536 00:22:26.193 } 00:22:26.193 ] 00:22:26.193 }' 00:22:26.193 02:45:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.193 02:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.760 02:45:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.019 02:45:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.019 "name": "raid_bdev1", 00:22:27.019 "uuid": 
"eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:27.019 "strip_size_kb": 64, 00:22:27.019 "state": "online", 00:22:27.019 "raid_level": "raid5f", 00:22:27.019 "superblock": false, 00:22:27.019 "num_base_bdevs": 3, 00:22:27.019 "num_base_bdevs_discovered": 2, 00:22:27.019 "num_base_bdevs_operational": 2, 00:22:27.019 "base_bdevs_list": [ 00:22:27.019 { 00:22:27.019 "name": null, 00:22:27.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.019 "is_configured": false, 00:22:27.019 "data_offset": 0, 00:22:27.019 "data_size": 65536 00:22:27.019 }, 00:22:27.019 { 00:22:27.019 "name": "BaseBdev2", 00:22:27.019 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:27.019 "is_configured": true, 00:22:27.019 "data_offset": 0, 00:22:27.019 "data_size": 65536 00:22:27.019 }, 00:22:27.019 { 00:22:27.019 "name": "BaseBdev3", 00:22:27.019 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:27.019 "is_configured": true, 00:22:27.019 "data_offset": 0, 00:22:27.019 "data_size": 65536 00:22:27.019 } 00:22:27.019 ] 00:22:27.019 }' 00:22:27.019 02:45:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.019 02:45:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.019 02:45:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.019 02:45:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:27.019 02:45:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.278 [2024-07-11 02:45:52.247156] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:27.278 [2024-07-11 02:45:52.247199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.278 [2024-07-11 02:45:52.251580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029ae0 00:22:27.278 [2024-07-11 02:45:52.253706] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.278 02:45:52 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.214 02:45:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.472 02:45:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.472 "name": "raid_bdev1", 00:22:28.472 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:28.472 "strip_size_kb": 64, 00:22:28.472 "state": "online", 00:22:28.472 "raid_level": "raid5f", 00:22:28.472 "superblock": false, 00:22:28.472 "num_base_bdevs": 3, 00:22:28.472 "num_base_bdevs_discovered": 3, 00:22:28.472 "num_base_bdevs_operational": 3, 00:22:28.472 "process": { 00:22:28.472 "type": "rebuild", 00:22:28.472 "target": "spare", 00:22:28.472 "progress": { 00:22:28.472 "blocks": 24576, 00:22:28.472 "percent": 18 00:22:28.472 } 00:22:28.472 }, 00:22:28.472 "base_bdevs_list": [ 00:22:28.472 { 00:22:28.472 "name": "spare", 00:22:28.472 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:28.472 "is_configured": true, 
00:22:28.472 "data_offset": 0, 00:22:28.472 "data_size": 65536 00:22:28.472 }, 00:22:28.472 { 00:22:28.472 "name": "BaseBdev2", 00:22:28.472 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:28.472 "is_configured": true, 00:22:28.472 "data_offset": 0, 00:22:28.472 "data_size": 65536 00:22:28.472 }, 00:22:28.472 { 00:22:28.472 "name": "BaseBdev3", 00:22:28.472 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:28.472 "is_configured": true, 00:22:28.472 "data_offset": 0, 00:22:28.472 "data_size": 65536 00:22:28.472 } 00:22:28.472 ] 00:22:28.472 }' 00:22:28.472 02:45:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.472 02:45:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.472 02:45:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@657 -- # local timeout=567 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.730 "name": "raid_bdev1", 00:22:28.730 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:28.730 "strip_size_kb": 64, 00:22:28.730 "state": "online", 00:22:28.730 "raid_level": "raid5f", 00:22:28.730 "superblock": false, 00:22:28.730 "num_base_bdevs": 3, 00:22:28.730 "num_base_bdevs_discovered": 3, 00:22:28.730 "num_base_bdevs_operational": 3, 00:22:28.730 "process": { 00:22:28.730 "type": "rebuild", 00:22:28.730 "target": "spare", 00:22:28.730 "progress": { 00:22:28.730 "blocks": 30720, 00:22:28.730 "percent": 23 00:22:28.730 } 00:22:28.730 }, 00:22:28.730 "base_bdevs_list": [ 00:22:28.730 { 00:22:28.730 "name": "spare", 00:22:28.730 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:28.730 "is_configured": true, 00:22:28.730 "data_offset": 0, 00:22:28.730 "data_size": 65536 00:22:28.730 }, 00:22:28.730 { 00:22:28.730 "name": "BaseBdev2", 00:22:28.730 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:28.730 "is_configured": true, 00:22:28.730 "data_offset": 0, 00:22:28.730 "data_size": 65536 00:22:28.730 }, 00:22:28.730 { 00:22:28.730 "name": "BaseBdev3", 00:22:28.730 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:28.730 "is_configured": true, 00:22:28.730 "data_offset": 0, 00:22:28.730 "data_size": 65536 00:22:28.730 } 00:22:28.730 ] 00:22:28.730 }' 00:22:28.730 02:45:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.988 02:45:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.988 02:45:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:22:28.988 02:45:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.988 02:45:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.920 02:45:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.177 02:45:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.177 "name": "raid_bdev1", 00:22:30.177 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:30.177 "strip_size_kb": 64, 00:22:30.177 "state": "online", 00:22:30.177 "raid_level": "raid5f", 00:22:30.177 "superblock": false, 00:22:30.177 "num_base_bdevs": 3, 00:22:30.177 "num_base_bdevs_discovered": 3, 00:22:30.177 "num_base_bdevs_operational": 3, 00:22:30.177 "process": { 00:22:30.177 "type": "rebuild", 00:22:30.177 "target": "spare", 00:22:30.177 "progress": { 00:22:30.177 "blocks": 57344, 00:22:30.177 "percent": 43 00:22:30.177 } 00:22:30.177 }, 00:22:30.177 "base_bdevs_list": [ 00:22:30.177 { 00:22:30.177 "name": "spare", 00:22:30.177 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:30.177 "is_configured": true, 00:22:30.177 "data_offset": 0, 00:22:30.177 "data_size": 65536 00:22:30.177 }, 00:22:30.177 { 00:22:30.177 "name": "BaseBdev2", 00:22:30.177 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:30.177 "is_configured": true, 00:22:30.177 "data_offset": 0, 00:22:30.177 "data_size": 65536 00:22:30.177 }, 00:22:30.177 { 00:22:30.177 "name": "BaseBdev3", 00:22:30.177 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:30.177 "is_configured": true, 00:22:30.177 "data_offset": 0, 00:22:30.177 "data_size": 65536 00:22:30.177 } 00:22:30.177 ] 00:22:30.177 }' 00:22:30.177 02:45:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.177 02:45:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.177 02:45:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.436 02:45:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.436 02:45:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.369 02:45:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.626 "name": "raid_bdev1", 00:22:31.626 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:31.626 "strip_size_kb": 64, 
00:22:31.626 "state": "online", 00:22:31.626 "raid_level": "raid5f", 00:22:31.626 "superblock": false, 00:22:31.626 "num_base_bdevs": 3, 00:22:31.626 "num_base_bdevs_discovered": 3, 00:22:31.626 "num_base_bdevs_operational": 3, 00:22:31.626 "process": { 00:22:31.626 "type": "rebuild", 00:22:31.626 "target": "spare", 00:22:31.626 "progress": { 00:22:31.626 "blocks": 86016, 00:22:31.626 "percent": 65 00:22:31.626 } 00:22:31.626 }, 00:22:31.626 "base_bdevs_list": [ 00:22:31.626 { 00:22:31.626 "name": "spare", 00:22:31.626 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:31.626 "is_configured": true, 00:22:31.626 "data_offset": 0, 00:22:31.626 "data_size": 65536 00:22:31.626 }, 00:22:31.626 { 00:22:31.626 "name": "BaseBdev2", 00:22:31.626 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:31.626 "is_configured": true, 00:22:31.626 "data_offset": 0, 00:22:31.626 "data_size": 65536 00:22:31.626 }, 00:22:31.626 { 00:22:31.626 "name": "BaseBdev3", 00:22:31.626 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:31.626 "is_configured": true, 00:22:31.626 "data_offset": 0, 00:22:31.626 "data_size": 65536 00:22:31.626 } 00:22:31.626 ] 00:22:31.626 }' 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.626 02:45:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.559 02:45:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.816 02:45:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.816 "name": "raid_bdev1", 00:22:32.816 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:32.816 "strip_size_kb": 64, 00:22:32.816 "state": "online", 00:22:32.816 "raid_level": "raid5f", 00:22:32.816 "superblock": false, 00:22:32.816 "num_base_bdevs": 3, 00:22:32.816 "num_base_bdevs_discovered": 3, 00:22:32.816 "num_base_bdevs_operational": 3, 00:22:32.816 "process": { 00:22:32.816 "type": "rebuild", 00:22:32.816 "target": "spare", 00:22:32.816 "progress": { 00:22:32.816 "blocks": 112640, 00:22:32.816 "percent": 85 00:22:32.816 } 00:22:32.816 }, 00:22:32.816 "base_bdevs_list": [ 00:22:32.816 { 00:22:32.816 "name": "spare", 00:22:32.816 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:32.816 "is_configured": true, 00:22:32.816 "data_offset": 0, 00:22:32.816 "data_size": 65536 00:22:32.816 }, 00:22:32.816 { 00:22:32.816 "name": "BaseBdev2", 00:22:32.816 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:32.816 "is_configured": true, 00:22:32.816 "data_offset": 0, 00:22:32.816 "data_size": 65536 00:22:32.816 }, 00:22:32.816 { 00:22:32.816 "name": "BaseBdev3", 00:22:32.816 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:32.816 "is_configured": true, 
00:22:32.816 "data_offset": 0, 00:22:32.816 "data_size": 65536 00:22:32.816 } 00:22:32.816 ] 00:22:32.816 }' 00:22:32.816 02:45:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.074 02:45:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.074 02:45:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.074 02:45:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.074 02:45:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:33.664 [2024-07-11 02:45:58.700701] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:33.664 [2024-07-11 02:45:58.700785] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:33.664 [2024-07-11 02:45:58.700868] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.922 02:45:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.922 02:45:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.179 02:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.179 "name": "raid_bdev1", 00:22:34.179 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:34.179 "strip_size_kb": 64, 00:22:34.179 "state": "online", 00:22:34.179 "raid_level": "raid5f", 00:22:34.179 "superblock": false, 00:22:34.179 "num_base_bdevs": 3, 00:22:34.179 "num_base_bdevs_discovered": 3, 00:22:34.179 "num_base_bdevs_operational": 3, 00:22:34.179 "base_bdevs_list": [ 00:22:34.179 { 00:22:34.179 "name": "spare", 00:22:34.180 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:34.180 "is_configured": true, 00:22:34.180 "data_offset": 0, 00:22:34.180 "data_size": 65536 00:22:34.180 }, 00:22:34.180 { 00:22:34.180 "name": "BaseBdev2", 00:22:34.180 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:34.180 "is_configured": true, 00:22:34.180 "data_offset": 0, 00:22:34.180 "data_size": 65536 00:22:34.180 }, 00:22:34.180 { 00:22:34.180 "name": "BaseBdev3", 00:22:34.180 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:34.180 "is_configured": true, 00:22:34.180 "data_offset": 0, 00:22:34.180 "data_size": 65536 00:22:34.180 } 00:22:34.180 ] 00:22:34.180 }' 00:22:34.180 02:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.438 02:45:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.695 "name": "raid_bdev1", 00:22:34.695 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:34.695 "strip_size_kb": 64, 00:22:34.695 "state": "online", 00:22:34.695 "raid_level": "raid5f", 00:22:34.695 "superblock": false, 00:22:34.695 "num_base_bdevs": 3, 00:22:34.695 "num_base_bdevs_discovered": 3, 00:22:34.695 "num_base_bdevs_operational": 3, 00:22:34.695 "base_bdevs_list": [ 00:22:34.695 { 00:22:34.695 "name": "spare", 00:22:34.695 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:34.695 "is_configured": true, 00:22:34.695 "data_offset": 0, 00:22:34.695 "data_size": 65536 00:22:34.695 }, 00:22:34.695 { 00:22:34.695 "name": "BaseBdev2", 00:22:34.695 "uuid": "2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:34.695 "is_configured": true, 00:22:34.695 "data_offset": 0, 00:22:34.695 "data_size": 65536 00:22:34.695 }, 00:22:34.695 { 00:22:34.695 "name": "BaseBdev3", 00:22:34.695 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:34.695 "is_configured": true, 00:22:34.695 "data_offset": 0, 00:22:34.695 "data_size": 65536 00:22:34.695 } 00:22:34.695 ] 00:22:34.695 }' 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.695 02:45:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.953 02:45:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.953 "name": "raid_bdev1", 00:22:34.953 "uuid": "eb60e4e4-8f5a-43f9-bdbe-e6c3f7f6f4ed", 00:22:34.953 "strip_size_kb": 64, 00:22:34.953 "state": "online", 00:22:34.953 "raid_level": "raid5f", 00:22:34.953 "superblock": false, 00:22:34.953 "num_base_bdevs": 3, 00:22:34.953 "num_base_bdevs_discovered": 3, 00:22:34.953 "num_base_bdevs_operational": 3, 00:22:34.953 "base_bdevs_list": [ 00:22:34.953 { 00:22:34.953 "name": "spare", 00:22:34.953 "uuid": "a5c9c046-c350-54ee-8eb1-4dd39a74659c", 00:22:34.953 "is_configured": true, 00:22:34.953 "data_offset": 0, 00:22:34.953 "data_size": 65536 00:22:34.953 }, 00:22:34.953 { 00:22:34.953 "name": "BaseBdev2", 00:22:34.953 "uuid": 
"2dac8ff3-a060-46f3-a618-c6b7ba35bdbe", 00:22:34.953 "is_configured": true, 00:22:34.953 "data_offset": 0, 00:22:34.953 "data_size": 65536 00:22:34.953 }, 00:22:34.953 { 00:22:34.953 "name": "BaseBdev3", 00:22:34.953 "uuid": "24a5af30-e9d4-484e-8850-86423565a91f", 00:22:34.953 "is_configured": true, 00:22:34.953 "data_offset": 0, 00:22:34.953 "data_size": 65536 00:22:34.953 } 00:22:34.953 ] 00:22:34.953 }' 00:22:34.953 02:45:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.953 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:35.887 02:46:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.887 [2024-07-11 02:46:00.901259] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.887 [2024-07-11 02:46:00.901290] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.887 [2024-07-11 02:46:00.901393] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.887 [2024-07-11 02:46:00.901479] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.887 [2024-07-11 02:46:00.901491] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:22:35.887 02:46:00 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.887 02:46:00 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:36.146 02:46:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:36.146 02:46:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:36.146 02:46:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.146 02:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:36.404 /dev/nbd0 00:22:36.404 02:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:36.404 02:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:36.404 02:46:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:36.404 02:46:01 -- common/autotest_common.sh@857 -- # local i 00:22:36.404 02:46:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.404 02:46:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.404 02:46:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:36.404 02:46:01 -- common/autotest_common.sh@861 -- # break 00:22:36.404 02:46:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.404 02:46:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.404 02:46:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.404 1+0 records in 00:22:36.404 1+0 records out 00:22:36.404 4096 bytes (4.1 
kB, 4.0 KiB) copied, 0.000564869 s, 7.3 MB/s 00:22:36.404 02:46:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.404 02:46:01 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.404 02:46:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.404 02:46:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.404 02:46:01 -- common/autotest_common.sh@877 -- # return 0 00:22:36.404 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.404 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.404 02:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:36.663 /dev/nbd1 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.663 02:46:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:36.663 02:46:01 -- common/autotest_common.sh@857 -- # local i 00:22:36.663 02:46:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.663 02:46:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.663 02:46:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:36.663 02:46:01 -- common/autotest_common.sh@861 -- # break 00:22:36.663 02:46:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.663 02:46:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.663 02:46:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.663 1+0 records in 00:22:36.663 1+0 records out 00:22:36.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441927 s, 9.3 MB/s 00:22:36.663 02:46:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.663 02:46:01 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.663 02:46:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.663 02:46:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.663 02:46:01 -- common/autotest_common.sh@877 -- # return 0 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.663 02:46:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:36.663 02:46:01 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.663 02:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.922 02:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.922 
02:46:01 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@41 -- # break 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.180 02:46:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@41 -- # break 00:22:37.438 02:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.438 02:46:02 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:37.438 02:46:02 -- bdev/bdev_raid.sh@709 -- # killprocess 141277 00:22:37.438 02:46:02 -- common/autotest_common.sh@926 -- # '[' -z 141277 ']' 00:22:37.438 02:46:02 -- common/autotest_common.sh@930 -- # kill -0 141277 00:22:37.438 02:46:02 -- common/autotest_common.sh@931 -- # uname 00:22:37.438 02:46:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:37.438 02:46:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141277 00:22:37.438 02:46:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:37.438 killing process with pid 141277 00:22:37.438 Received shutdown signal, test time was about 60.000000 seconds 00:22:37.438 00:22:37.438 Latency(us) 00:22:37.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.438 =================================================================================================================== 00:22:37.438 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.438 02:46:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:37.438 02:46:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141277' 00:22:37.438 02:46:02 -- common/autotest_common.sh@945 -- # kill 141277 00:22:37.438 02:46:02 -- common/autotest_common.sh@950 -- # wait 141277 00:22:37.438 [2024-07-11 02:46:02.417957] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.438 [2024-07-11 02:46:02.452472] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:37.696 00:22:37.696 real 0m19.439s 00:22:37.696 user 0m29.726s 00:22:37.696 sys 0m2.404s 00:22:37.696 02:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.696 02:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:37.696 ************************************ 00:22:37.696 END TEST raid5f_rebuild_test 00:22:37.696 ************************************ 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@749 -- # run_test 
raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:22:37.696 02:46:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:37.696 02:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.696 02:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:37.696 ************************************ 00:22:37.696 START TEST raid5f_rebuild_test_sb 00:22:37.696 ************************************ 00:22:37.696 02:46:02 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:37.696 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@544 -- # raid_pid=141838 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141838 /var/tmp/spdk-raid.sock 00:22:37.697 02:46:02 -- common/autotest_common.sh@819 -- # '[' -z 141838 ']' 00:22:37.697 02:46:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:37.697 02:46:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:37.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:37.697 02:46:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:37.697 02:46:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:37.697 02:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:37.697 02:46:02 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:37.955 [2024-07-11 02:46:02.799284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:37.955 [2024-07-11 02:46:02.799793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141838 ] 00:22:37.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:37.955 Zero copy mechanism will not be used. 00:22:37.955 [2024-07-11 02:46:02.945285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.955 [2024-07-11 02:46:03.004541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.213 [2024-07-11 02:46:03.055071] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.779 02:46:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:38.779 02:46:03 -- common/autotest_common.sh@852 -- # return 0 00:22:38.779 02:46:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:38.779 02:46:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:38.779 02:46:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:39.037 BaseBdev1_malloc 00:22:39.037 02:46:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:39.037 [2024-07-11 02:46:04.114283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:39.037 [2024-07-11 02:46:04.114465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.037 [2024-07-11 02:46:04.114501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:22:39.037 [2024-07-11 02:46:04.114540] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.037 [2024-07-11 02:46:04.116633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.037 [2024-07-11 02:46:04.116679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:39.037 BaseBdev1 00:22:39.037 02:46:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:39.037 02:46:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:39.037 02:46:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:39.295 BaseBdev2_malloc 00:22:39.295 02:46:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:39.554 [2024-07-11 02:46:04.520225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:39.554 [2024-07-11 02:46:04.520340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.554 [2024-07-11 02:46:04.520379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 
00:22:39.554 [2024-07-11 02:46:04.520415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.554 [2024-07-11 02:46:04.522557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.554 [2024-07-11 02:46:04.522601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:39.554 BaseBdev2 00:22:39.554 02:46:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:39.554 02:46:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:39.554 02:46:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:39.811 BaseBdev3_malloc 00:22:39.811 02:46:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:40.070 [2024-07-11 02:46:04.933087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:40.070 [2024-07-11 02:46:04.933171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.070 [2024-07-11 02:46:04.933207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:40.070 [2024-07-11 02:46:04.933244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.070 [2024-07-11 02:46:04.935208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.070 [2024-07-11 02:46:04.935256] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:40.070 BaseBdev3 00:22:40.070 02:46:04 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:40.070 spare_malloc 00:22:40.070 02:46:05 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:40.328 spare_delay 00:22:40.328 02:46:05 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:40.585 [2024-07-11 02:46:05.507232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.585 [2024-07-11 02:46:05.507334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.585 [2024-07-11 02:46:05.507371] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:40.585 [2024-07-11 02:46:05.507410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.585 [2024-07-11 02:46:05.509526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.585 [2024-07-11 02:46:05.509591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.585 spare 00:22:40.585 02:46:05 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:40.843 [2024-07-11 02:46:05.703385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.843 [2024-07-11 02:46:05.705463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.843 [2024-07-11 02:46:05.705552] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.843 [2024-07-11 02:46:05.705813] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:40.843 [2024-07-11 02:46:05.705837] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:40.843 [2024-07-11 02:46:05.706029] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:22:40.843 [2024-07-11 02:46:05.706796] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:40.843 [2024-07-11 02:46:05.706820] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:40.843 [2024-07-11 02:46:05.706967] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.844 02:46:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.102 02:46:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.102 "name": "raid_bdev1", 00:22:41.102 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:41.102 "strip_size_kb": 64, 00:22:41.102 "state": "online", 00:22:41.102 "raid_level": "raid5f", 00:22:41.102 "superblock": true, 00:22:41.102 "num_base_bdevs": 3, 00:22:41.102 "num_base_bdevs_discovered": 3, 00:22:41.102 "num_base_bdevs_operational": 3, 00:22:41.102 "base_bdevs_list": [ 00:22:41.102 { 00:22:41.102 "name": "BaseBdev1", 00:22:41.102 "uuid": "32fc8819-e36b-5eea-a22b-710a4fce9a81", 00:22:41.102 "is_configured": true, 00:22:41.102 "data_offset": 2048, 00:22:41.102 "data_size": 63488 00:22:41.102 }, 00:22:41.102 { 00:22:41.102 "name": "BaseBdev2", 00:22:41.102 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:41.102 "is_configured": true, 00:22:41.102 "data_offset": 2048, 00:22:41.102 "data_size": 63488 00:22:41.102 }, 00:22:41.102 { 00:22:41.102 "name": "BaseBdev3", 00:22:41.102 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:41.102 "is_configured": true, 00:22:41.102 "data_offset": 2048, 00:22:41.102 "data_size": 63488 00:22:41.102 } 00:22:41.102 ] 00:22:41.102 }' 00:22:41.102 02:46:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.102 02:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:41.669 02:46:06 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:41.669 02:46:06 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:41.669 [2024-07-11 02:46:06.708944] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
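The state check just traced (bdev_raid.sh@117-@127) boils down to one RPC plus a jq filter. A condensed sketch reconstructed from the xtrace above — the field comparisons are summarized in a comment rather than reproduced as the script's verbatim assertions:

    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # The JSON dumped above is then compared field by field against the
    # expected values: state == "online", raid_level == "raid5f",
    # strip_size_kb == 64, num_base_bdevs_discovered == 3.
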
00:22:41.669 02:46:06 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:22:41.669 02:46:06 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.669 02:46:06 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:41.928 02:46:06 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:41.928 02:46:06 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:41.928 02:46:06 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:41.928 02:46:06 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@12 -- # local i 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.928 02:46:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:42.186 [2024-07-11 02:46:07.156936] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:22:42.186 /dev/nbd0 00:22:42.186 02:46:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:42.186 02:46:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:42.186 02:46:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:42.186 02:46:07 -- common/autotest_common.sh@857 -- # local i 00:22:42.186 02:46:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:42.186 02:46:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:42.186 02:46:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:42.186 02:46:07 -- common/autotest_common.sh@861 -- # break 00:22:42.186 02:46:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:42.186 02:46:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:42.186 02:46:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.186 1+0 records in 00:22:42.186 1+0 records out 00:22:42.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225834 s, 18.1 MB/s 00:22:42.186 02:46:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.186 02:46:07 -- common/autotest_common.sh@874 -- # size=4096 00:22:42.186 02:46:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.186 02:46:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:42.186 02:46:07 -- common/autotest_common.sh@877 -- # return 0 00:22:42.186 02:46:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.186 02:46:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.186 02:46:07 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:42.186 02:46:07 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:42.186 02:46:07 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:42.186 02:46:07 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:42.445 496+0 records in 00:22:42.445 496+0 records out 00:22:42.445 65011712 bytes (65 MB, 62 MiB) copied, 0.284015 s, 
229 MB/s 00:22:42.445 02:46:07 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@51 -- # local i 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.445 02:46:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.704 02:46:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:42.704 [2024-07-11 02:46:07.701808] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.963 02:46:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:42.963 02:46:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.963 02:46:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.963 02:46:07 -- bdev/nbd_common.sh@41 -- # break 00:22:42.963 02:46:07 -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.963 02:46:07 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:42.963 [2024-07-11 02:46:07.989449] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.963 02:46:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.222 02:46:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.222 "name": "raid_bdev1", 00:22:43.222 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:43.222 "strip_size_kb": 64, 00:22:43.222 "state": "online", 00:22:43.222 "raid_level": "raid5f", 00:22:43.222 "superblock": true, 00:22:43.222 "num_base_bdevs": 3, 00:22:43.222 "num_base_bdevs_discovered": 2, 00:22:43.222 "num_base_bdevs_operational": 2, 00:22:43.222 "base_bdevs_list": [ 00:22:43.222 { 00:22:43.222 "name": null, 00:22:43.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.222 "is_configured": false, 00:22:43.222 "data_offset": 2048, 00:22:43.222 "data_size": 63488 00:22:43.222 }, 
00:22:43.222 { 00:22:43.222 "name": "BaseBdev2", 00:22:43.222 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:43.222 "is_configured": true, 00:22:43.222 "data_offset": 2048, 00:22:43.222 "data_size": 63488 00:22:43.222 }, 00:22:43.222 { 00:22:43.222 "name": "BaseBdev3", 00:22:43.222 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:43.222 "is_configured": true, 00:22:43.222 "data_offset": 2048, 00:22:43.222 "data_size": 63488 00:22:43.222 } 00:22:43.222 ] 00:22:43.222 }' 00:22:43.222 02:46:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.222 02:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:43.789 02:46:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:44.046 [2024-07-11 02:46:09.041692] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:44.046 [2024-07-11 02:46:09.041766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.046 [2024-07-11 02:46:09.046572] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027650 00:22:44.046 [2024-07-11 02:46:09.048979] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:44.046 02:46:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.981 02:46:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.289 02:46:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.289 "name": "raid_bdev1", 00:22:45.289 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:45.289 "strip_size_kb": 64, 00:22:45.289 "state": "online", 00:22:45.289 "raid_level": "raid5f", 00:22:45.289 "superblock": true, 00:22:45.289 "num_base_bdevs": 3, 00:22:45.289 "num_base_bdevs_discovered": 3, 00:22:45.289 "num_base_bdevs_operational": 3, 00:22:45.289 "process": { 00:22:45.289 "type": "rebuild", 00:22:45.289 "target": "spare", 00:22:45.289 "progress": { 00:22:45.289 "blocks": 24576, 00:22:45.289 "percent": 19 00:22:45.289 } 00:22:45.289 }, 00:22:45.289 "base_bdevs_list": [ 00:22:45.289 { 00:22:45.289 "name": "spare", 00:22:45.289 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:45.289 "is_configured": true, 00:22:45.289 "data_offset": 2048, 00:22:45.289 "data_size": 63488 00:22:45.289 }, 00:22:45.289 { 00:22:45.289 "name": "BaseBdev2", 00:22:45.289 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:45.289 "is_configured": true, 00:22:45.289 "data_offset": 2048, 00:22:45.289 "data_size": 63488 00:22:45.289 }, 00:22:45.289 { 00:22:45.289 "name": "BaseBdev3", 00:22:45.290 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:45.290 "is_configured": true, 00:22:45.290 "data_offset": 2048, 00:22:45.290 "data_size": 63488 00:22:45.290 } 00:22:45.290 ] 00:22:45.290 }' 00:22:45.290 02:46:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.290 02:46:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
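The rebuild-process check running at this point (bdev_raid.sh@183-@191, whose target comparison continues just below) amounts to the following helper — a hypothetical condensation of the traced steps, not the script's literal source:

    verify_raid_bdev_process() {
        local raid_bdev_name=$1 process_type=$2 target=$3
        local raid_bdev_info
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Both fields default to "none" when no rebuild is in flight.
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }
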
00:22:45.290 02:46:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.548 02:46:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.548 02:46:10 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:45.895 [2024-07-11 02:46:10.651100] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.895 [2024-07-11 02:46:10.662523] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:45.895 [2024-07-11 02:46:10.662617] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.895 "name": "raid_bdev1", 00:22:45.895 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:45.895 "strip_size_kb": 64, 00:22:45.895 "state": "online", 00:22:45.895 "raid_level": "raid5f", 00:22:45.895 "superblock": true, 00:22:45.895 "num_base_bdevs": 3, 00:22:45.895 "num_base_bdevs_discovered": 2, 00:22:45.895 "num_base_bdevs_operational": 2, 00:22:45.895 "base_bdevs_list": [ 00:22:45.895 { 00:22:45.895 "name": null, 00:22:45.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.895 "is_configured": false, 00:22:45.895 "data_offset": 2048, 00:22:45.895 "data_size": 63488 00:22:45.895 }, 00:22:45.895 { 00:22:45.895 "name": "BaseBdev2", 00:22:45.895 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:45.895 "is_configured": true, 00:22:45.895 "data_offset": 2048, 00:22:45.895 "data_size": 63488 00:22:45.895 }, 00:22:45.895 { 00:22:45.895 "name": "BaseBdev3", 00:22:45.895 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:45.895 "is_configured": true, 00:22:45.895 "data_offset": 2048, 00:22:45.895 "data_size": 63488 00:22:45.895 } 00:22:45.895 ] 00:22:45.895 }' 00:22:45.895 02:46:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.895 02:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.458 02:46:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.714 02:46:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:46.714 "name": "raid_bdev1", 00:22:46.714 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:46.714 "strip_size_kb": 64, 00:22:46.714 "state": "online", 00:22:46.714 "raid_level": "raid5f", 00:22:46.714 "superblock": true, 00:22:46.714 "num_base_bdevs": 3, 00:22:46.714 "num_base_bdevs_discovered": 2, 00:22:46.714 "num_base_bdevs_operational": 2, 00:22:46.714 "base_bdevs_list": [ 00:22:46.715 { 00:22:46.715 "name": null, 00:22:46.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.715 "is_configured": false, 00:22:46.715 "data_offset": 2048, 00:22:46.715 "data_size": 63488 00:22:46.715 }, 00:22:46.715 { 00:22:46.715 "name": "BaseBdev2", 00:22:46.715 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:46.715 "is_configured": true, 00:22:46.715 "data_offset": 2048, 00:22:46.715 "data_size": 63488 00:22:46.715 }, 00:22:46.715 { 00:22:46.715 "name": "BaseBdev3", 00:22:46.715 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:46.715 "is_configured": true, 00:22:46.715 "data_offset": 2048, 00:22:46.715 "data_size": 63488 00:22:46.715 } 00:22:46.715 ] 00:22:46.715 }' 00:22:46.715 02:46:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:46.972 02:46:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:46.972 02:46:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.972 02:46:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:46.972 02:46:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.972 [2024-07-11 02:46:12.043394] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:46.972 [2024-07-11 02:46:12.043456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.972 [2024-07-11 02:46:12.048062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:22:46.972 [2024-07-11 02:46:12.050310] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.972 02:46:12 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.347 02:46:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.347 "name": "raid_bdev1", 00:22:48.347 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:48.347 "strip_size_kb": 64, 00:22:48.347 "state": "online", 00:22:48.347 "raid_level": "raid5f", 00:22:48.347 "superblock": true, 00:22:48.347 "num_base_bdevs": 3, 00:22:48.347 "num_base_bdevs_discovered": 3, 00:22:48.347 "num_base_bdevs_operational": 3, 00:22:48.348 "process": { 00:22:48.348 "type": "rebuild", 00:22:48.348 "target": "spare", 00:22:48.348 
"progress": { 00:22:48.348 "blocks": 22528, 00:22:48.348 "percent": 17 00:22:48.348 } 00:22:48.348 }, 00:22:48.348 "base_bdevs_list": [ 00:22:48.348 { 00:22:48.348 "name": "spare", 00:22:48.348 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:48.348 "is_configured": true, 00:22:48.348 "data_offset": 2048, 00:22:48.348 "data_size": 63488 00:22:48.348 }, 00:22:48.348 { 00:22:48.348 "name": "BaseBdev2", 00:22:48.348 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:48.348 "is_configured": true, 00:22:48.348 "data_offset": 2048, 00:22:48.348 "data_size": 63488 00:22:48.348 }, 00:22:48.348 { 00:22:48.348 "name": "BaseBdev3", 00:22:48.348 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:48.348 "is_configured": true, 00:22:48.348 "data_offset": 2048, 00:22:48.348 "data_size": 63488 00:22:48.348 } 00:22:48.348 ] 00:22:48.348 }' 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:48.348 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@657 -- # local timeout=587 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.348 02:46:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.607 "name": "raid_bdev1", 00:22:48.607 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:48.607 "strip_size_kb": 64, 00:22:48.607 "state": "online", 00:22:48.607 "raid_level": "raid5f", 00:22:48.607 "superblock": true, 00:22:48.607 "num_base_bdevs": 3, 00:22:48.607 "num_base_bdevs_discovered": 3, 00:22:48.607 "num_base_bdevs_operational": 3, 00:22:48.607 "process": { 00:22:48.607 "type": "rebuild", 00:22:48.607 "target": "spare", 00:22:48.607 "progress": { 00:22:48.607 "blocks": 30720, 00:22:48.607 "percent": 24 00:22:48.607 } 00:22:48.607 }, 00:22:48.607 "base_bdevs_list": [ 00:22:48.607 { 00:22:48.607 "name": "spare", 00:22:48.607 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:48.607 "is_configured": true, 00:22:48.607 "data_offset": 2048, 00:22:48.607 "data_size": 63488 00:22:48.607 }, 00:22:48.607 { 00:22:48.607 "name": "BaseBdev2", 00:22:48.607 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:48.607 "is_configured": true, 00:22:48.607 "data_offset": 2048, 00:22:48.607 "data_size": 63488 00:22:48.607 }, 00:22:48.607 { 00:22:48.607 "name": "BaseBdev3", 
00:22:48.607 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:48.607 "is_configured": true, 00:22:48.607 "data_offset": 2048, 00:22:48.607 "data_size": 63488 00:22:48.607 } 00:22:48.607 ] 00:22:48.607 }' 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.607 02:46:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.982 "name": "raid_bdev1", 00:22:49.982 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:49.982 "strip_size_kb": 64, 00:22:49.982 "state": "online", 00:22:49.982 "raid_level": "raid5f", 00:22:49.982 "superblock": true, 00:22:49.982 "num_base_bdevs": 3, 00:22:49.982 "num_base_bdevs_discovered": 3, 00:22:49.982 "num_base_bdevs_operational": 3, 00:22:49.982 "process": { 00:22:49.982 "type": "rebuild", 00:22:49.982 "target": "spare", 00:22:49.982 "progress": { 00:22:49.982 "blocks": 57344, 00:22:49.982 "percent": 45 00:22:49.982 } 00:22:49.982 }, 00:22:49.982 "base_bdevs_list": [ 00:22:49.982 { 00:22:49.982 "name": "spare", 00:22:49.982 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:49.982 "is_configured": true, 00:22:49.982 "data_offset": 2048, 00:22:49.982 "data_size": 63488 00:22:49.982 }, 00:22:49.982 { 00:22:49.982 "name": "BaseBdev2", 00:22:49.982 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:49.982 "is_configured": true, 00:22:49.982 "data_offset": 2048, 00:22:49.982 "data_size": 63488 00:22:49.982 }, 00:22:49.982 { 00:22:49.982 "name": "BaseBdev3", 00:22:49.982 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:49.982 "is_configured": true, 00:22:49.982 "data_offset": 2048, 00:22:49.982 "data_size": 63488 00:22:49.982 } 00:22:49.982 ] 00:22:49.982 }' 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.982 02:46:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.982 02:46:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.982 02:46:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.358 02:46:16 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.358 "name": "raid_bdev1", 00:22:51.358 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:51.358 "strip_size_kb": 64, 00:22:51.358 "state": "online", 00:22:51.358 "raid_level": "raid5f", 00:22:51.358 "superblock": true, 00:22:51.358 "num_base_bdevs": 3, 00:22:51.358 "num_base_bdevs_discovered": 3, 00:22:51.358 "num_base_bdevs_operational": 3, 00:22:51.358 "process": { 00:22:51.358 "type": "rebuild", 00:22:51.358 "target": "spare", 00:22:51.358 "progress": { 00:22:51.358 "blocks": 83968, 00:22:51.358 "percent": 66 00:22:51.358 } 00:22:51.358 }, 00:22:51.358 "base_bdevs_list": [ 00:22:51.358 { 00:22:51.358 "name": "spare", 00:22:51.358 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:51.358 "is_configured": true, 00:22:51.358 "data_offset": 2048, 00:22:51.358 "data_size": 63488 00:22:51.358 }, 00:22:51.358 { 00:22:51.358 "name": "BaseBdev2", 00:22:51.358 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:51.358 "is_configured": true, 00:22:51.358 "data_offset": 2048, 00:22:51.358 "data_size": 63488 00:22:51.358 }, 00:22:51.358 { 00:22:51.358 "name": "BaseBdev3", 00:22:51.358 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:51.358 "is_configured": true, 00:22:51.358 "data_offset": 2048, 00:22:51.358 "data_size": 63488 00:22:51.358 } 00:22:51.358 ] 00:22:51.358 }' 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.358 02:46:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.294 02:46:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.552 02:46:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.552 "name": "raid_bdev1", 00:22:52.552 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:52.552 "strip_size_kb": 64, 00:22:52.552 "state": "online", 00:22:52.552 "raid_level": "raid5f", 00:22:52.552 "superblock": true, 00:22:52.552 "num_base_bdevs": 3, 00:22:52.552 "num_base_bdevs_discovered": 3, 00:22:52.552 "num_base_bdevs_operational": 3, 00:22:52.552 "process": { 00:22:52.552 "type": "rebuild", 00:22:52.552 "target": "spare", 00:22:52.552 "progress": { 00:22:52.552 "blocks": 110592, 00:22:52.552 "percent": 87 00:22:52.552 } 00:22:52.552 }, 00:22:52.552 "base_bdevs_list": [ 00:22:52.552 { 00:22:52.552 "name": "spare", 00:22:52.552 "uuid": 
"24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:52.552 "is_configured": true, 00:22:52.552 "data_offset": 2048, 00:22:52.552 "data_size": 63488 00:22:52.552 }, 00:22:52.552 { 00:22:52.552 "name": "BaseBdev2", 00:22:52.552 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:52.552 "is_configured": true, 00:22:52.552 "data_offset": 2048, 00:22:52.552 "data_size": 63488 00:22:52.552 }, 00:22:52.552 { 00:22:52.552 "name": "BaseBdev3", 00:22:52.552 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:52.552 "is_configured": true, 00:22:52.552 "data_offset": 2048, 00:22:52.552 "data_size": 63488 00:22:52.552 } 00:22:52.552 ] 00:22:52.552 }' 00:22:52.552 02:46:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.552 02:46:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.552 02:46:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.810 02:46:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.810 02:46:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:53.377 [2024-07-11 02:46:18.299655] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:53.377 [2024-07-11 02:46:18.299732] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:53.377 [2024-07-11 02:46:18.299881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.636 02:46:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.894 02:46:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:53.894 "name": "raid_bdev1", 00:22:53.894 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:53.894 "strip_size_kb": 64, 00:22:53.894 "state": "online", 00:22:53.894 "raid_level": "raid5f", 00:22:53.894 "superblock": true, 00:22:53.894 "num_base_bdevs": 3, 00:22:53.894 "num_base_bdevs_discovered": 3, 00:22:53.894 "num_base_bdevs_operational": 3, 00:22:53.894 "base_bdevs_list": [ 00:22:53.894 { 00:22:53.894 "name": "spare", 00:22:53.894 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:53.894 "is_configured": true, 00:22:53.895 "data_offset": 2048, 00:22:53.895 "data_size": 63488 00:22:53.895 }, 00:22:53.895 { 00:22:53.895 "name": "BaseBdev2", 00:22:53.895 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:53.895 "is_configured": true, 00:22:53.895 "data_offset": 2048, 00:22:53.895 "data_size": 63488 00:22:53.895 }, 00:22:53.895 { 00:22:53.895 "name": "BaseBdev3", 00:22:53.895 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:53.895 "is_configured": true, 00:22:53.895 "data_offset": 2048, 00:22:53.895 "data_size": 63488 00:22:53.895 } 00:22:53.895 ] 00:22:53.895 }' 00:22:53.895 02:46:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@660 -- # break 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.153 02:46:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.411 02:46:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.411 "name": "raid_bdev1", 00:22:54.411 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:54.411 "strip_size_kb": 64, 00:22:54.411 "state": "online", 00:22:54.411 "raid_level": "raid5f", 00:22:54.411 "superblock": true, 00:22:54.411 "num_base_bdevs": 3, 00:22:54.411 "num_base_bdevs_discovered": 3, 00:22:54.411 "num_base_bdevs_operational": 3, 00:22:54.411 "base_bdevs_list": [ 00:22:54.411 { 00:22:54.411 "name": "spare", 00:22:54.411 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:54.411 "is_configured": true, 00:22:54.411 "data_offset": 2048, 00:22:54.411 "data_size": 63488 00:22:54.411 }, 00:22:54.411 { 00:22:54.411 "name": "BaseBdev2", 00:22:54.411 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:54.411 "is_configured": true, 00:22:54.412 "data_offset": 2048, 00:22:54.412 "data_size": 63488 00:22:54.412 }, 00:22:54.412 { 00:22:54.412 "name": "BaseBdev3", 00:22:54.412 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:54.412 "is_configured": true, 00:22:54.412 "data_offset": 2048, 00:22:54.412 "data_size": 63488 00:22:54.412 } 00:22:54.412 ] 00:22:54.412 }' 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.412 02:46:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.670 02:46:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.670 "name": "raid_bdev1", 00:22:54.670 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:54.670 "strip_size_kb": 64, 00:22:54.670 
"state": "online", 00:22:54.670 "raid_level": "raid5f", 00:22:54.670 "superblock": true, 00:22:54.670 "num_base_bdevs": 3, 00:22:54.670 "num_base_bdevs_discovered": 3, 00:22:54.670 "num_base_bdevs_operational": 3, 00:22:54.670 "base_bdevs_list": [ 00:22:54.670 { 00:22:54.670 "name": "spare", 00:22:54.670 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:54.670 "is_configured": true, 00:22:54.670 "data_offset": 2048, 00:22:54.670 "data_size": 63488 00:22:54.670 }, 00:22:54.670 { 00:22:54.670 "name": "BaseBdev2", 00:22:54.670 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:54.670 "is_configured": true, 00:22:54.670 "data_offset": 2048, 00:22:54.670 "data_size": 63488 00:22:54.670 }, 00:22:54.670 { 00:22:54.670 "name": "BaseBdev3", 00:22:54.670 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:54.670 "is_configured": true, 00:22:54.670 "data_offset": 2048, 00:22:54.670 "data_size": 63488 00:22:54.670 } 00:22:54.670 ] 00:22:54.670 }' 00:22:54.670 02:46:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.670 02:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:55.607 02:46:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.607 [2024-07-11 02:46:20.505887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.607 [2024-07-11 02:46:20.505922] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.607 [2024-07-11 02:46:20.506016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.607 [2024-07-11 02:46:20.506108] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.607 [2024-07-11 02:46:20.506120] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:55.607 02:46:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.607 02:46:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:55.865 02:46:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:55.865 02:46:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:55.865 02:46:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@12 -- # local i 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.865 02:46:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:56.124 /dev/nbd0 00:22:56.124 02:46:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:56.124 02:46:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:56.124 02:46:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:56.124 02:46:21 -- common/autotest_common.sh@857 -- # local i 00:22:56.124 02:46:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:56.124 02:46:21 -- common/autotest_common.sh@859 -- # (( i <= 20 
)) 00:22:56.124 02:46:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:56.124 02:46:21 -- common/autotest_common.sh@861 -- # break 00:22:56.124 02:46:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:56.124 02:46:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:56.124 02:46:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.124 1+0 records in 00:22:56.124 1+0 records out 00:22:56.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527779 s, 7.8 MB/s 00:22:56.124 02:46:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.124 02:46:21 -- common/autotest_common.sh@874 -- # size=4096 00:22:56.124 02:46:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.124 02:46:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:56.124 02:46:21 -- common/autotest_common.sh@877 -- # return 0 00:22:56.124 02:46:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.124 02:46:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.124 02:46:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:56.383 /dev/nbd1 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:56.383 02:46:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:56.383 02:46:21 -- common/autotest_common.sh@857 -- # local i 00:22:56.383 02:46:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:56.383 02:46:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:56.383 02:46:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:56.383 02:46:21 -- common/autotest_common.sh@861 -- # break 00:22:56.383 02:46:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:56.383 02:46:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:56.383 02:46:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.383 1+0 records in 00:22:56.383 1+0 records out 00:22:56.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508643 s, 8.1 MB/s 00:22:56.383 02:46:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.383 02:46:21 -- common/autotest_common.sh@874 -- # size=4096 00:22:56.383 02:46:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.383 02:46:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:56.383 02:46:21 -- common/autotest_common.sh@877 -- # return 0 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.383 02:46:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:56.383 02:46:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@51 -- # local i 00:22:56.383 02:46:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.383 02:46:21 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@41 -- # break 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.642 02:46:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@41 -- # break 00:22:56.900 02:46:21 -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.900 02:46:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:56.900 02:46:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:56.900 02:46:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:56.900 02:46:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:57.158 02:46:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:57.416 [2024-07-11 02:46:22.317498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:57.416 [2024-07-11 02:46:22.317609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.416 [2024-07-11 02:46:22.317689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:57.416 [2024-07-11 02:46:22.317716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.416 [2024-07-11 02:46:22.319902] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.416 [2024-07-11 02:46:22.319983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:57.416 [2024-07-11 02:46:22.320082] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:57.416 [2024-07-11 02:46:22.320139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:22:57.416 BaseBdev1 00:22:57.416 02:46:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:57.416 02:46:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:57.417 02:46:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:57.683 02:46:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:57.683 [2024-07-11 02:46:22.689574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:57.683 [2024-07-11 02:46:22.689653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.683 [2024-07-11 02:46:22.689687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:57.683 [2024-07-11 02:46:22.689708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.683 [2024-07-11 02:46:22.690098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.683 [2024-07-11 02:46:22.690147] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:57.683 [2024-07-11 02:46:22.690218] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:57.683 [2024-07-11 02:46:22.690231] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:57.683 [2024-07-11 02:46:22.690239] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.683 [2024-07-11 02:46:22.690267] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:22:57.683 [2024-07-11 02:46:22.690310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:57.683 BaseBdev2 00:22:57.683 02:46:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:57.683 02:46:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:57.683 02:46:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:57.967 02:46:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:58.225 [2024-07-11 02:46:23.065667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:58.225 [2024-07-11 02:46:23.065756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.225 [2024-07-11 02:46:23.065791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:58.225 [2024-07-11 02:46:23.065811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.225 [2024-07-11 02:46:23.066234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.225 [2024-07-11 02:46:23.066286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:58.225 [2024-07-11 02:46:23.066382] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:58.225 [2024-07-11 02:46:23.066411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:58.225 BaseBdev3 00:22:58.225 02:46:23 -- 
bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:58.225 02:46:23 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:58.483 [2024-07-11 02:46:23.437752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:58.483 [2024-07-11 02:46:23.437833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.483 [2024-07-11 02:46:23.437868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:58.483 [2024-07-11 02:46:23.437894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.483 [2024-07-11 02:46:23.438351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.483 [2024-07-11 02:46:23.438427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:58.483 [2024-07-11 02:46:23.438517] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:58.483 [2024-07-11 02:46:23.438554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:58.483 spare 00:22:58.483 02:46:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:58.483 02:46:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:58.483 02:46:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:58.483 02:46:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:58.483 02:46:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.484 02:46:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.484 [2024-07-11 02:46:23.538674] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:58.484 [2024-07-11 02:46:23.538693] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:58.484 [2024-07-11 02:46:23.538848] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000484d0 00:22:58.484 [2024-07-11 02:46:23.539552] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:58.484 [2024-07-11 02:46:23.539572] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:58.484 [2024-07-11 02:46:23.539765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.741 02:46:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.741 "name": "raid_bdev1", 00:22:58.741 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:58.741 "strip_size_kb": 64, 00:22:58.741 "state": "online", 00:22:58.741 "raid_level": "raid5f", 00:22:58.741 "superblock": true, 00:22:58.741 "num_base_bdevs": 3, 00:22:58.741 "num_base_bdevs_discovered": 3, 00:22:58.741 "num_base_bdevs_operational": 3, 00:22:58.741 
"base_bdevs_list": [ 00:22:58.741 { 00:22:58.741 "name": "spare", 00:22:58.741 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:58.741 "is_configured": true, 00:22:58.741 "data_offset": 2048, 00:22:58.741 "data_size": 63488 00:22:58.741 }, 00:22:58.741 { 00:22:58.741 "name": "BaseBdev2", 00:22:58.741 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:58.741 "is_configured": true, 00:22:58.741 "data_offset": 2048, 00:22:58.741 "data_size": 63488 00:22:58.741 }, 00:22:58.741 { 00:22:58.741 "name": "BaseBdev3", 00:22:58.741 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:58.741 "is_configured": true, 00:22:58.741 "data_offset": 2048, 00:22:58.741 "data_size": 63488 00:22:58.741 } 00:22:58.741 ] 00:22:58.741 }' 00:22:58.741 02:46:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.741 02:46:23 -- common/autotest_common.sh@10 -- # set +x 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.306 02:46:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.564 02:46:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.564 "name": "raid_bdev1", 00:22:59.564 "uuid": "5d130492-5d14-4cca-b95f-a36cd7a5fd7d", 00:22:59.564 "strip_size_kb": 64, 00:22:59.564 "state": "online", 00:22:59.564 "raid_level": "raid5f", 00:22:59.564 "superblock": true, 00:22:59.564 "num_base_bdevs": 3, 00:22:59.564 "num_base_bdevs_discovered": 3, 00:22:59.564 "num_base_bdevs_operational": 3, 00:22:59.564 "base_bdevs_list": [ 00:22:59.564 { 00:22:59.564 "name": "spare", 00:22:59.564 "uuid": "24bd4820-4f35-5c79-986b-0728ab9d9096", 00:22:59.564 "is_configured": true, 00:22:59.564 "data_offset": 2048, 00:22:59.564 "data_size": 63488 00:22:59.564 }, 00:22:59.564 { 00:22:59.564 "name": "BaseBdev2", 00:22:59.564 "uuid": "8c16db0b-3115-5708-a8d8-8018fa70ec8d", 00:22:59.564 "is_configured": true, 00:22:59.564 "data_offset": 2048, 00:22:59.564 "data_size": 63488 00:22:59.564 }, 00:22:59.564 { 00:22:59.564 "name": "BaseBdev3", 00:22:59.564 "uuid": "ad1b9570-90f2-5e8e-b22b-d2f1b39021a6", 00:22:59.564 "is_configured": true, 00:22:59.564 "data_offset": 2048, 00:22:59.564 "data_size": 63488 00:22:59.564 } 00:22:59.564 ] 00:22:59.564 }' 00:22:59.564 02:46:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.822 02:46:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:59.822 02:46:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.822 02:46:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:59.822 02:46:24 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:59.822 02:46:24 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.080 02:46:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.080 02:46:24 -- bdev/bdev_raid.sh@709 -- # killprocess 141838 00:23:00.080 02:46:24 -- common/autotest_common.sh@926 -- # '[' -z 141838 ']' 00:23:00.080 02:46:24 -- common/autotest_common.sh@930 -- # kill -0 141838 00:23:00.080 
02:46:24 -- common/autotest_common.sh@931 -- # uname 00:23:00.080 02:46:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.080 02:46:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141838 00:23:00.080 killing process with pid 141838 00:23:00.080 Received shutdown signal, test time was about 60.000000 seconds 00:23:00.080 00:23:00.080 Latency(us) 00:23:00.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.080 =================================================================================================================== 00:23:00.080 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.080 02:46:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:00.080 02:46:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:00.080 02:46:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141838' 00:23:00.080 02:46:24 -- common/autotest_common.sh@945 -- # kill 141838 00:23:00.080 02:46:24 -- common/autotest_common.sh@950 -- # wait 141838 00:23:00.080 [2024-07-11 02:46:25.001415] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.080 [2024-07-11 02:46:25.001541] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.080 [2024-07-11 02:46:25.001654] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.080 [2024-07-11 02:46:25.001690] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:00.080 [2024-07-11 02:46:25.035957] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:00.339 ************************************ 00:23:00.339 END TEST raid5f_rebuild_test_sb 00:23:00.339 ************************************ 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:00.339 00:23:00.339 real 0m22.499s 00:23:00.339 user 0m36.107s 00:23:00.339 sys 0m2.391s 00:23:00.339 02:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.339 02:46:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:00.339 02:46:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:00.339 02:46:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:00.339 02:46:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.339 ************************************ 00:23:00.339 START TEST raid5f_state_function_test 00:23:00.339 ************************************ 00:23:00.339 02:46:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:00.339 02:46:25 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=142501 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 142501' 00:23:00.339 Process raid pid: 142501 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:00.339 02:46:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 142501 /var/tmp/spdk-raid.sock 00:23:00.339 02:46:25 -- common/autotest_common.sh@819 -- # '[' -z 142501 ']' 00:23:00.339 02:46:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:00.339 02:46:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.339 02:46:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:00.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:00.339 02:46:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.339 02:46:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.339 [2024-07-11 02:46:25.348141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
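The state checks in this test all go through the helper traced above and again below at bdev_raid.sh@117-@129. Only its locals and the rpc.py | jq query are visible in the trace; the field comparisons run after xtrace is disabled, so the checks in the following sketch are assumptions reconstructed from the five arguments being passed in (rpc_py is assumed to expand to the scripts/rpc.py -s /var/tmp/spdk-raid.sock invocation seen throughout this log, and the extra locals num_base_bdevs, num_base_bdevs_discovered and tmp from @123-@125 are omitted):

    # Minimal sketch of verify_raid_bdev_state; the comparisons are assumptions.
    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        # bdev_raid.sh@127: dump all raid bdevs and keep the one under test
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Hidden behind xtrace_disable in the log; hypothetical checks:
        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]] || return 1
        [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]] || return 1
    }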
00:23:00.339 [2024-07-11 02:46:25.348327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.598 [2024-07-11 02:46:25.482829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.598 [2024-07-11 02:46:25.548739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.598 [2024-07-11 02:46:25.599237] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:01.534 02:46:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.534 02:46:26 -- common/autotest_common.sh@852 -- # return 0 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:01.534 [2024-07-11 02:46:26.499106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.534 [2024-07-11 02:46:26.499206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.534 [2024-07-11 02:46:26.499219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.534 [2024-07-11 02:46:26.499236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.534 [2024-07-11 02:46:26.499243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:01.534 [2024-07-11 02:46:26.499278] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.534 [2024-07-11 02:46:26.499287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:01.534 [2024-07-11 02:46:26.499308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.534 02:46:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.792 02:46:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.792 "name": "Existed_Raid", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "strip_size_kb": 64, 00:23:01.792 "state": "configuring", 00:23:01.792 "raid_level": "raid5f", 00:23:01.792 "superblock": false, 00:23:01.792 "num_base_bdevs": 4, 00:23:01.792 "num_base_bdevs_discovered": 0, 00:23:01.792 "num_base_bdevs_operational": 4, 00:23:01.792 "base_bdevs_list": [ 00:23:01.792 { 00:23:01.792 
"name": "BaseBdev1", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev2", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev3", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev4", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 } 00:23:01.792 ] 00:23:01.792 }' 00:23:01.792 02:46:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.792 02:46:26 -- common/autotest_common.sh@10 -- # set +x 00:23:02.359 02:46:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:02.617 [2024-07-11 02:46:27.483164] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.618 [2024-07-11 02:46:27.483204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:02.618 02:46:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:02.876 [2024-07-11 02:46:27.711218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:02.876 [2024-07-11 02:46:27.711296] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:02.876 [2024-07-11 02:46:27.711323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.876 [2024-07-11 02:46:27.711346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.876 [2024-07-11 02:46:27.711354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.876 [2024-07-11 02:46:27.711370] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.876 [2024-07-11 02:46:27.711377] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:02.876 [2024-07-11 02:46:27.711398] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:02.876 02:46:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:02.876 [2024-07-11 02:46:27.917728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.876 BaseBdev1 00:23:02.876 02:46:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:02.876 02:46:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:02.876 02:46:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:02.876 02:46:27 -- common/autotest_common.sh@889 -- # local i 00:23:02.876 02:46:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:02.876 02:46:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:02.876 02:46:27 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.134 02:46:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:03.393 [ 00:23:03.393 { 00:23:03.393 "name": "BaseBdev1", 00:23:03.393 "aliases": [ 00:23:03.393 "f9030255-9fe9-42f1-ad1d-fbfd82064d7c" 00:23:03.393 ], 00:23:03.393 "product_name": "Malloc disk", 00:23:03.393 "block_size": 512, 00:23:03.393 "num_blocks": 65536, 00:23:03.393 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:03.393 "assigned_rate_limits": { 00:23:03.393 "rw_ios_per_sec": 0, 00:23:03.393 "rw_mbytes_per_sec": 0, 00:23:03.393 "r_mbytes_per_sec": 0, 00:23:03.393 "w_mbytes_per_sec": 0 00:23:03.393 }, 00:23:03.393 "claimed": true, 00:23:03.393 "claim_type": "exclusive_write", 00:23:03.393 "zoned": false, 00:23:03.393 "supported_io_types": { 00:23:03.393 "read": true, 00:23:03.393 "write": true, 00:23:03.393 "unmap": true, 00:23:03.393 "write_zeroes": true, 00:23:03.393 "flush": true, 00:23:03.393 "reset": true, 00:23:03.393 "compare": false, 00:23:03.393 "compare_and_write": false, 00:23:03.393 "abort": true, 00:23:03.393 "nvme_admin": false, 00:23:03.393 "nvme_io": false 00:23:03.393 }, 00:23:03.393 "memory_domains": [ 00:23:03.393 { 00:23:03.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.393 "dma_device_type": 2 00:23:03.393 } 00:23:03.393 ], 00:23:03.393 "driver_specific": {} 00:23:03.393 } 00:23:03.393 ] 00:23:03.393 02:46:28 -- common/autotest_common.sh@895 -- # return 0 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.393 02:46:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.653 02:46:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.653 "name": "Existed_Raid", 00:23:03.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.653 "strip_size_kb": 64, 00:23:03.653 "state": "configuring", 00:23:03.653 "raid_level": "raid5f", 00:23:03.653 "superblock": false, 00:23:03.653 "num_base_bdevs": 4, 00:23:03.653 "num_base_bdevs_discovered": 1, 00:23:03.653 "num_base_bdevs_operational": 4, 00:23:03.653 "base_bdevs_list": [ 00:23:03.653 { 00:23:03.653 "name": "BaseBdev1", 00:23:03.653 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:03.653 "is_configured": true, 00:23:03.653 "data_offset": 0, 00:23:03.653 "data_size": 65536 00:23:03.653 }, 00:23:03.653 { 00:23:03.653 "name": "BaseBdev2", 00:23:03.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.653 "is_configured": false, 00:23:03.653 "data_offset": 0, 00:23:03.653 "data_size": 0 00:23:03.653 }, 
00:23:03.653 { 00:23:03.653 "name": "BaseBdev3", 00:23:03.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.653 "is_configured": false, 00:23:03.653 "data_offset": 0, 00:23:03.653 "data_size": 0 00:23:03.653 }, 00:23:03.653 { 00:23:03.653 "name": "BaseBdev4", 00:23:03.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.653 "is_configured": false, 00:23:03.653 "data_offset": 0, 00:23:03.653 "data_size": 0 00:23:03.653 } 00:23:03.653 ] 00:23:03.653 }' 00:23:03.653 02:46:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.653 02:46:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.220 02:46:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:04.479 [2024-07-11 02:46:29.354091] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.479 [2024-07-11 02:46:29.354160] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:04.479 [2024-07-11 02:46:29.554190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.479 [2024-07-11 02:46:29.556067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.479 [2024-07-11 02:46:29.556156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.479 [2024-07-11 02:46:29.556185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:04.479 [2024-07-11 02:46:29.556208] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.479 [2024-07-11 02:46:29.556216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:04.479 [2024-07-11 02:46:29.556232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.479 02:46:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.738 02:46:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.738 "name": "Existed_Raid", 00:23:04.738 
"uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.738 "strip_size_kb": 64, 00:23:04.738 "state": "configuring", 00:23:04.738 "raid_level": "raid5f", 00:23:04.738 "superblock": false, 00:23:04.738 "num_base_bdevs": 4, 00:23:04.738 "num_base_bdevs_discovered": 1, 00:23:04.738 "num_base_bdevs_operational": 4, 00:23:04.738 "base_bdevs_list": [ 00:23:04.738 { 00:23:04.738 "name": "BaseBdev1", 00:23:04.738 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:04.738 "is_configured": true, 00:23:04.738 "data_offset": 0, 00:23:04.738 "data_size": 65536 00:23:04.738 }, 00:23:04.738 { 00:23:04.738 "name": "BaseBdev2", 00:23:04.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.738 "is_configured": false, 00:23:04.738 "data_offset": 0, 00:23:04.738 "data_size": 0 00:23:04.738 }, 00:23:04.738 { 00:23:04.738 "name": "BaseBdev3", 00:23:04.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.738 "is_configured": false, 00:23:04.738 "data_offset": 0, 00:23:04.738 "data_size": 0 00:23:04.738 }, 00:23:04.738 { 00:23:04.738 "name": "BaseBdev4", 00:23:04.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.738 "is_configured": false, 00:23:04.738 "data_offset": 0, 00:23:04.738 "data_size": 0 00:23:04.738 } 00:23:04.738 ] 00:23:04.738 }' 00:23:04.738 02:46:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.738 02:46:29 -- common/autotest_common.sh@10 -- # set +x 00:23:05.673 02:46:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:05.673 [2024-07-11 02:46:30.709307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.673 BaseBdev2 00:23:05.673 02:46:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:05.673 02:46:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:05.673 02:46:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:05.673 02:46:30 -- common/autotest_common.sh@889 -- # local i 00:23:05.673 02:46:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:05.674 02:46:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:05.674 02:46:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.932 02:46:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:06.191 [ 00:23:06.191 { 00:23:06.191 "name": "BaseBdev2", 00:23:06.191 "aliases": [ 00:23:06.191 "4feae1ba-30fd-46a6-a0e1-98c025685c42" 00:23:06.191 ], 00:23:06.191 "product_name": "Malloc disk", 00:23:06.191 "block_size": 512, 00:23:06.191 "num_blocks": 65536, 00:23:06.191 "uuid": "4feae1ba-30fd-46a6-a0e1-98c025685c42", 00:23:06.191 "assigned_rate_limits": { 00:23:06.191 "rw_ios_per_sec": 0, 00:23:06.191 "rw_mbytes_per_sec": 0, 00:23:06.191 "r_mbytes_per_sec": 0, 00:23:06.191 "w_mbytes_per_sec": 0 00:23:06.191 }, 00:23:06.191 "claimed": true, 00:23:06.191 "claim_type": "exclusive_write", 00:23:06.191 "zoned": false, 00:23:06.191 "supported_io_types": { 00:23:06.191 "read": true, 00:23:06.191 "write": true, 00:23:06.191 "unmap": true, 00:23:06.191 "write_zeroes": true, 00:23:06.191 "flush": true, 00:23:06.191 "reset": true, 00:23:06.191 "compare": false, 00:23:06.191 "compare_and_write": false, 00:23:06.191 "abort": true, 00:23:06.191 "nvme_admin": false, 00:23:06.191 "nvme_io": false 00:23:06.191 }, 00:23:06.191 "memory_domains": [ 
00:23:06.191 { 00:23:06.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.191 "dma_device_type": 2 00:23:06.191 } 00:23:06.191 ], 00:23:06.191 "driver_specific": {} 00:23:06.191 } 00:23:06.191 ] 00:23:06.191 02:46:31 -- common/autotest_common.sh@895 -- # return 0 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.191 02:46:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.450 02:46:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.450 "name": "Existed_Raid", 00:23:06.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.450 "strip_size_kb": 64, 00:23:06.450 "state": "configuring", 00:23:06.450 "raid_level": "raid5f", 00:23:06.450 "superblock": false, 00:23:06.450 "num_base_bdevs": 4, 00:23:06.450 "num_base_bdevs_discovered": 2, 00:23:06.450 "num_base_bdevs_operational": 4, 00:23:06.450 "base_bdevs_list": [ 00:23:06.450 { 00:23:06.450 "name": "BaseBdev1", 00:23:06.450 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:06.450 "is_configured": true, 00:23:06.450 "data_offset": 0, 00:23:06.450 "data_size": 65536 00:23:06.450 }, 00:23:06.450 { 00:23:06.450 "name": "BaseBdev2", 00:23:06.450 "uuid": "4feae1ba-30fd-46a6-a0e1-98c025685c42", 00:23:06.450 "is_configured": true, 00:23:06.450 "data_offset": 0, 00:23:06.450 "data_size": 65536 00:23:06.450 }, 00:23:06.450 { 00:23:06.450 "name": "BaseBdev3", 00:23:06.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.450 "is_configured": false, 00:23:06.450 "data_offset": 0, 00:23:06.450 "data_size": 0 00:23:06.450 }, 00:23:06.450 { 00:23:06.450 "name": "BaseBdev4", 00:23:06.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.450 "is_configured": false, 00:23:06.450 "data_offset": 0, 00:23:06.450 "data_size": 0 00:23:06.450 } 00:23:06.450 ] 00:23:06.450 }' 00:23:06.450 02:46:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.450 02:46:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.017 02:46:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:07.275 [2024-07-11 02:46:32.241419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.275 BaseBdev3 00:23:07.275 02:46:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:07.275 02:46:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:07.275 02:46:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:07.275 
02:46:32 -- common/autotest_common.sh@889 -- # local i 00:23:07.275 02:46:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:07.275 02:46:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:07.275 02:46:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:07.534 02:46:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:07.793 [ 00:23:07.793 { 00:23:07.793 "name": "BaseBdev3", 00:23:07.793 "aliases": [ 00:23:07.793 "d362f137-b76a-4266-8801-6d8737cb368b" 00:23:07.793 ], 00:23:07.793 "product_name": "Malloc disk", 00:23:07.793 "block_size": 512, 00:23:07.793 "num_blocks": 65536, 00:23:07.793 "uuid": "d362f137-b76a-4266-8801-6d8737cb368b", 00:23:07.793 "assigned_rate_limits": { 00:23:07.793 "rw_ios_per_sec": 0, 00:23:07.793 "rw_mbytes_per_sec": 0, 00:23:07.793 "r_mbytes_per_sec": 0, 00:23:07.793 "w_mbytes_per_sec": 0 00:23:07.793 }, 00:23:07.793 "claimed": true, 00:23:07.793 "claim_type": "exclusive_write", 00:23:07.793 "zoned": false, 00:23:07.793 "supported_io_types": { 00:23:07.793 "read": true, 00:23:07.793 "write": true, 00:23:07.793 "unmap": true, 00:23:07.793 "write_zeroes": true, 00:23:07.793 "flush": true, 00:23:07.793 "reset": true, 00:23:07.793 "compare": false, 00:23:07.793 "compare_and_write": false, 00:23:07.793 "abort": true, 00:23:07.793 "nvme_admin": false, 00:23:07.793 "nvme_io": false 00:23:07.793 }, 00:23:07.793 "memory_domains": [ 00:23:07.793 { 00:23:07.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.793 "dma_device_type": 2 00:23:07.793 } 00:23:07.793 ], 00:23:07.793 "driver_specific": {} 00:23:07.793 } 00:23:07.793 ] 00:23:07.793 02:46:32 -- common/autotest_common.sh@895 -- # return 0 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.793 02:46:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.051 02:46:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.051 "name": "Existed_Raid", 00:23:08.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.051 "strip_size_kb": 64, 00:23:08.051 "state": "configuring", 00:23:08.051 "raid_level": "raid5f", 00:23:08.051 "superblock": false, 00:23:08.051 "num_base_bdevs": 4, 00:23:08.051 "num_base_bdevs_discovered": 3, 00:23:08.051 "num_base_bdevs_operational": 4, 00:23:08.051 "base_bdevs_list": [ 00:23:08.051 { 00:23:08.051 "name": 
"BaseBdev1", 00:23:08.052 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:08.052 "is_configured": true, 00:23:08.052 "data_offset": 0, 00:23:08.052 "data_size": 65536 00:23:08.052 }, 00:23:08.052 { 00:23:08.052 "name": "BaseBdev2", 00:23:08.052 "uuid": "4feae1ba-30fd-46a6-a0e1-98c025685c42", 00:23:08.052 "is_configured": true, 00:23:08.052 "data_offset": 0, 00:23:08.052 "data_size": 65536 00:23:08.052 }, 00:23:08.052 { 00:23:08.052 "name": "BaseBdev3", 00:23:08.052 "uuid": "d362f137-b76a-4266-8801-6d8737cb368b", 00:23:08.052 "is_configured": true, 00:23:08.052 "data_offset": 0, 00:23:08.052 "data_size": 65536 00:23:08.052 }, 00:23:08.052 { 00:23:08.052 "name": "BaseBdev4", 00:23:08.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.052 "is_configured": false, 00:23:08.052 "data_offset": 0, 00:23:08.052 "data_size": 0 00:23:08.052 } 00:23:08.052 ] 00:23:08.052 }' 00:23:08.052 02:46:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.052 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:23:08.621 02:46:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:08.881 [2024-07-11 02:46:33.750120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:08.881 [2024-07-11 02:46:33.750187] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006380 00:23:08.881 [2024-07-11 02:46:33.750198] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:08.881 [2024-07-11 02:46:33.750338] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:23:08.881 [2024-07-11 02:46:33.751162] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006380 00:23:08.881 [2024-07-11 02:46:33.751185] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006380 00:23:08.881 [2024-07-11 02:46:33.751504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.881 BaseBdev4 00:23:08.881 02:46:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:08.881 02:46:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:08.881 02:46:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:08.881 02:46:33 -- common/autotest_common.sh@889 -- # local i 00:23:08.881 02:46:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:08.881 02:46:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:08.881 02:46:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.881 02:46:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:09.139 [ 00:23:09.139 { 00:23:09.139 "name": "BaseBdev4", 00:23:09.139 "aliases": [ 00:23:09.139 "363b3a26-9122-46b3-9b6e-6f34c99fff88" 00:23:09.139 ], 00:23:09.139 "product_name": "Malloc disk", 00:23:09.139 "block_size": 512, 00:23:09.139 "num_blocks": 65536, 00:23:09.139 "uuid": "363b3a26-9122-46b3-9b6e-6f34c99fff88", 00:23:09.139 "assigned_rate_limits": { 00:23:09.139 "rw_ios_per_sec": 0, 00:23:09.139 "rw_mbytes_per_sec": 0, 00:23:09.139 "r_mbytes_per_sec": 0, 00:23:09.139 "w_mbytes_per_sec": 0 00:23:09.139 }, 00:23:09.139 "claimed": true, 00:23:09.139 "claim_type": "exclusive_write", 00:23:09.139 "zoned": false, 00:23:09.139 
"supported_io_types": { 00:23:09.139 "read": true, 00:23:09.139 "write": true, 00:23:09.139 "unmap": true, 00:23:09.139 "write_zeroes": true, 00:23:09.139 "flush": true, 00:23:09.139 "reset": true, 00:23:09.139 "compare": false, 00:23:09.139 "compare_and_write": false, 00:23:09.139 "abort": true, 00:23:09.139 "nvme_admin": false, 00:23:09.139 "nvme_io": false 00:23:09.139 }, 00:23:09.139 "memory_domains": [ 00:23:09.139 { 00:23:09.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.139 "dma_device_type": 2 00:23:09.139 } 00:23:09.139 ], 00:23:09.139 "driver_specific": {} 00:23:09.139 } 00:23:09.139 ] 00:23:09.139 02:46:34 -- common/autotest_common.sh@895 -- # return 0 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.139 02:46:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.397 02:46:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.397 "name": "Existed_Raid", 00:23:09.397 "uuid": "1c6e8845-a8b3-4e4d-af90-6cc609ea9047", 00:23:09.397 "strip_size_kb": 64, 00:23:09.397 "state": "online", 00:23:09.397 "raid_level": "raid5f", 00:23:09.397 "superblock": false, 00:23:09.397 "num_base_bdevs": 4, 00:23:09.397 "num_base_bdevs_discovered": 4, 00:23:09.397 "num_base_bdevs_operational": 4, 00:23:09.397 "base_bdevs_list": [ 00:23:09.397 { 00:23:09.397 "name": "BaseBdev1", 00:23:09.397 "uuid": "f9030255-9fe9-42f1-ad1d-fbfd82064d7c", 00:23:09.397 "is_configured": true, 00:23:09.397 "data_offset": 0, 00:23:09.397 "data_size": 65536 00:23:09.397 }, 00:23:09.397 { 00:23:09.397 "name": "BaseBdev2", 00:23:09.397 "uuid": "4feae1ba-30fd-46a6-a0e1-98c025685c42", 00:23:09.397 "is_configured": true, 00:23:09.397 "data_offset": 0, 00:23:09.397 "data_size": 65536 00:23:09.397 }, 00:23:09.397 { 00:23:09.397 "name": "BaseBdev3", 00:23:09.397 "uuid": "d362f137-b76a-4266-8801-6d8737cb368b", 00:23:09.397 "is_configured": true, 00:23:09.397 "data_offset": 0, 00:23:09.397 "data_size": 65536 00:23:09.397 }, 00:23:09.397 { 00:23:09.397 "name": "BaseBdev4", 00:23:09.397 "uuid": "363b3a26-9122-46b3-9b6e-6f34c99fff88", 00:23:09.397 "is_configured": true, 00:23:09.397 "data_offset": 0, 00:23:09.397 "data_size": 65536 00:23:09.397 } 00:23:09.397 ] 00:23:09.397 }' 00:23:09.397 02:46:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.397 02:46:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.009 02:46:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:23:10.267 [2024-07-11 02:46:35.302163] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.267 02:46:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.525 02:46:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.525 "name": "Existed_Raid", 00:23:10.525 "uuid": "1c6e8845-a8b3-4e4d-af90-6cc609ea9047", 00:23:10.525 "strip_size_kb": 64, 00:23:10.525 "state": "online", 00:23:10.525 "raid_level": "raid5f", 00:23:10.525 "superblock": false, 00:23:10.525 "num_base_bdevs": 4, 00:23:10.525 "num_base_bdevs_discovered": 3, 00:23:10.525 "num_base_bdevs_operational": 3, 00:23:10.525 "base_bdevs_list": [ 00:23:10.525 { 00:23:10.525 "name": null, 00:23:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.525 "is_configured": false, 00:23:10.525 "data_offset": 0, 00:23:10.525 "data_size": 65536 00:23:10.525 }, 00:23:10.525 { 00:23:10.525 "name": "BaseBdev2", 00:23:10.525 "uuid": "4feae1ba-30fd-46a6-a0e1-98c025685c42", 00:23:10.525 "is_configured": true, 00:23:10.525 "data_offset": 0, 00:23:10.525 "data_size": 65536 00:23:10.525 }, 00:23:10.525 { 00:23:10.525 "name": "BaseBdev3", 00:23:10.525 "uuid": "d362f137-b76a-4266-8801-6d8737cb368b", 00:23:10.525 "is_configured": true, 00:23:10.525 "data_offset": 0, 00:23:10.525 "data_size": 65536 00:23:10.525 }, 00:23:10.525 { 00:23:10.525 "name": "BaseBdev4", 00:23:10.525 "uuid": "363b3a26-9122-46b3-9b6e-6f34c99fff88", 00:23:10.525 "is_configured": true, 00:23:10.525 "data_offset": 0, 00:23:10.525 "data_size": 65536 00:23:10.525 } 00:23:10.525 ] 00:23:10.525 }' 00:23:10.525 02:46:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.525 02:46:35 -- common/autotest_common.sh@10 -- # set +x 00:23:11.091 02:46:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:11.091 02:46:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:11.091 02:46:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.091 02:46:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:11.349 02:46:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:11.349 02:46:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
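The hot-removal pass traced at bdev_raid.sh@273 through @279 condenses to the loop below. This is a sketch running inside the test function, with num_base_bdevs=4 from the test's locals; the trace only ever shows the @275 test evaluating false and execution continuing, so aborting on mismatch is an assumption:

    # Sketch of the base-bdev hot-removal loop (bdev_raid.sh@273-@279).
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for ((i = 1; i < num_base_bdevs; i++)); do
        # @274: the raid bdev must still be the first raid bdev reported
        raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        # @275: assumed to bail out on mismatch; the log shows the test passing
        [ "$raid_bdev" != 'Existed_Raid' ] && return 1
        # @279: hot-remove one base bdev and let the raid5f bdev degrade
        $rpc_py bdev_malloc_delete "BaseBdev$i"
    done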
00:23:11.349 02:46:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:11.606 [2024-07-11 02:46:36.548029] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:11.606 [2024-07-11 02:46:36.548063] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.606 [2024-07-11 02:46:36.548125] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.606 02:46:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:11.606 02:46:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:11.606 02:46:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.606 02:46:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:11.864 02:46:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:11.864 02:46:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:11.864 02:46:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:12.122 [2024-07-11 02:46:37.022541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:12.122 02:46:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.122 02:46:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.122 02:46:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.122 02:46:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:12.380 02:46:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:12.380 02:46:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.380 02:46:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:12.380 [2024-07-11 02:46:37.453989] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:12.380 [2024-07-11 02:46:37.454050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state offline 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:12.638 02:46:37 -- bdev/bdev_raid.sh@287 -- # killprocess 142501 00:23:12.638 02:46:37 -- common/autotest_common.sh@926 -- # '[' -z 142501 ']' 00:23:12.638 02:46:37 -- common/autotest_common.sh@930 -- # kill -0 142501 00:23:12.638 02:46:37 -- common/autotest_common.sh@931 -- # uname 00:23:12.638 02:46:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:12.638 02:46:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142501 00:23:12.638 killing process with pid 142501 00:23:12.638 02:46:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:12.638 02:46:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:12.638 02:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142501' 00:23:12.638 02:46:37 -- 
common/autotest_common.sh@945 -- # kill 142501 00:23:12.638 02:46:37 -- common/autotest_common.sh@950 -- # wait 142501 00:23:12.638 [2024-07-11 02:46:37.682535] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.638 [2024-07-11 02:46:37.682632] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.897 ************************************ 00:23:12.897 END TEST raid5f_state_function_test 00:23:12.897 ************************************ 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:12.897 00:23:12.897 real 0m12.623s 00:23:12.897 user 0m23.598s 00:23:12.897 sys 0m1.411s 00:23:12.897 02:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.897 02:46:37 -- common/autotest_common.sh@10 -- # set +x 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:12.897 02:46:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:12.897 02:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:12.897 02:46:37 -- common/autotest_common.sh@10 -- # set +x 00:23:12.897 ************************************ 00:23:12.897 START TEST raid5f_state_function_test_sb 00:23:12.897 ************************************ 00:23:12.897 02:46:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:12.897 
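The knob handling traced at bdev_raid.sh@212 through @222 just above (and in the earlier non-superblock run, where @219 tested '[' false = true ']' and @222 left the argument empty) reduces to two conditionals:

    # raid5f (anything but raid1) gets a 64 KiB strip size (bdev_raid.sh@212-@214)
    if [ "$raid_level" != 'raid1' ]; then
        strip_size=64
        strip_size_create_arg="-z $strip_size"
    fi
    # superblock=true becomes the -s flag, otherwise it stays empty (@219-@222)
    if [ "$superblock" = true ]; then
        superblock_create_arg=-s
    else
        superblock_create_arg=
    fi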
02:46:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=142946 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 142946' 00:23:12.897 Process raid pid: 142946 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:12.897 02:46:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 142946 /var/tmp/spdk-raid.sock 00:23:12.897 02:46:37 -- common/autotest_common.sh@819 -- # '[' -z 142946 ']' 00:23:12.897 02:46:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.897 02:46:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:12.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.897 02:46:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.897 02:46:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:12.897 02:46:37 -- common/autotest_common.sh@10 -- # set +x 00:23:13.156 [2024-07-11 02:46:38.034469] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:13.156 [2024-07-11 02:46:38.034734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.156 [2024-07-11 02:46:38.186229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.415 [2024-07-11 02:46:38.280458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.415 [2024-07-11 02:46:38.356327] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.982 02:46:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:13.982 02:46:38 -- common/autotest_common.sh@852 -- # return 0 00:23:13.983 02:46:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:14.241 [2024-07-11 02:46:39.077083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.241 [2024-07-11 02:46:39.077165] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.241 [2024-07-11 02:46:39.077180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.241 [2024-07-11 02:46:39.077202] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.241 [2024-07-11 02:46:39.077210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:14.241 [2024-07-11 02:46:39.077252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.241 [2024-07-11 02:46:39.077261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:14.241 [2024-07-11 02:46:39.077289] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@119 -- # 
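waitforlisten only shows its entry in this trace (autotest_common.sh@819-@828) because the retry loop runs with xtrace disabled; the body below is therefore an assumption about how the pid and the RPC socket are polled, anchored only by the visible max_retries=100 default and the (( i == 0 )) / return 0 exit seen elsewhere in this log:

    # Sketch of waitforlisten; the polling body is an assumption.
    waitforlisten() {
        [ -z "$1" ] && return 1                       # @819: a pid is mandatory
        local rpc_addr=${2:-/var/tmp/spdk-raid.sock}  # @823
        local max_retries=100                         # @824
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$1" 2> /dev/null || return 1     # hypothetical: target must stay alive
            [ -S "$rpc_addr" ] && break               # hypothetical: socket appeared
            sleep 0.1
        done
        (( i == 0 )) && return 1                      # @848: all retries exhausted
        return 0                                      # @852
    }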
local raid_level=raid5f 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.241 02:46:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.242 02:46:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.242 02:46:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.242 02:46:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.500 02:46:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.500 "name": "Existed_Raid", 00:23:14.500 "uuid": "3cbfa231-d201-46fa-b98e-7ffa7ed03001", 00:23:14.500 "strip_size_kb": 64, 00:23:14.500 "state": "configuring", 00:23:14.500 "raid_level": "raid5f", 00:23:14.500 "superblock": true, 00:23:14.500 "num_base_bdevs": 4, 00:23:14.500 "num_base_bdevs_discovered": 0, 00:23:14.500 "num_base_bdevs_operational": 4, 00:23:14.500 "base_bdevs_list": [ 00:23:14.500 { 00:23:14.500 "name": "BaseBdev1", 00:23:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.500 "is_configured": false, 00:23:14.500 "data_offset": 0, 00:23:14.500 "data_size": 0 00:23:14.500 }, 00:23:14.500 { 00:23:14.500 "name": "BaseBdev2", 00:23:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.500 "is_configured": false, 00:23:14.500 "data_offset": 0, 00:23:14.500 "data_size": 0 00:23:14.500 }, 00:23:14.500 { 00:23:14.500 "name": "BaseBdev3", 00:23:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.500 "is_configured": false, 00:23:14.500 "data_offset": 0, 00:23:14.500 "data_size": 0 00:23:14.500 }, 00:23:14.500 { 00:23:14.500 "name": "BaseBdev4", 00:23:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.500 "is_configured": false, 00:23:14.500 "data_offset": 0, 00:23:14.500 "data_size": 0 00:23:14.500 } 00:23:14.500 ] 00:23:14.500 }' 00:23:14.500 02:46:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.500 02:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:15.067 02:46:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:15.067 [2024-07-11 02:46:40.129064] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.067 [2024-07-11 02:46:40.129108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:15.067 02:46:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:15.325 [2024-07-11 02:46:40.317165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.325 [2024-07-11 02:46:40.317222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.325 [2024-07-11 02:46:40.317233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.325 [2024-07-11 02:46:40.317258] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.325 [2024-07-11 02:46:40.317267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.325 
[2024-07-11 02:46:40.317284] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.325 [2024-07-11 02:46:40.317291] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.325 [2024-07-11 02:46:40.317317] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.325 02:46:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:15.584 [2024-07-11 02:46:40.523294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.584 BaseBdev1 00:23:15.584 02:46:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:15.584 02:46:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:15.584 02:46:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:15.584 02:46:40 -- common/autotest_common.sh@889 -- # local i 00:23:15.584 02:46:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:15.584 02:46:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:15.584 02:46:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.843 02:46:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:15.843 [ 00:23:15.843 { 00:23:15.843 "name": "BaseBdev1", 00:23:15.843 "aliases": [ 00:23:15.843 "5e73dbf3-c5ef-40d9-b64b-d9187a000360" 00:23:15.843 ], 00:23:15.843 "product_name": "Malloc disk", 00:23:15.843 "block_size": 512, 00:23:15.843 "num_blocks": 65536, 00:23:15.843 "uuid": "5e73dbf3-c5ef-40d9-b64b-d9187a000360", 00:23:15.843 "assigned_rate_limits": { 00:23:15.843 "rw_ios_per_sec": 0, 00:23:15.843 "rw_mbytes_per_sec": 0, 00:23:15.843 "r_mbytes_per_sec": 0, 00:23:15.843 "w_mbytes_per_sec": 0 00:23:15.843 }, 00:23:15.843 "claimed": true, 00:23:15.843 "claim_type": "exclusive_write", 00:23:15.843 "zoned": false, 00:23:15.843 "supported_io_types": { 00:23:15.843 "read": true, 00:23:15.843 "write": true, 00:23:15.843 "unmap": true, 00:23:15.843 "write_zeroes": true, 00:23:15.843 "flush": true, 00:23:15.843 "reset": true, 00:23:15.843 "compare": false, 00:23:15.843 "compare_and_write": false, 00:23:15.843 "abort": true, 00:23:15.843 "nvme_admin": false, 00:23:15.843 "nvme_io": false 00:23:15.843 }, 00:23:15.843 "memory_domains": [ 00:23:15.843 { 00:23:15.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.843 "dma_device_type": 2 00:23:15.843 } 00:23:15.843 ], 00:23:15.843 "driver_specific": {} 00:23:15.843 } 00:23:15.843 ] 00:23:15.843 02:46:40 -- common/autotest_common.sh@895 -- # return 0 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.843 
02:46:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.843 02:46:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.101 02:46:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.101 "name": "Existed_Raid", 00:23:16.101 "uuid": "b57a1dbf-1407-4956-81f2-95cd01da9cc7", 00:23:16.101 "strip_size_kb": 64, 00:23:16.101 "state": "configuring", 00:23:16.101 "raid_level": "raid5f", 00:23:16.101 "superblock": true, 00:23:16.101 "num_base_bdevs": 4, 00:23:16.101 "num_base_bdevs_discovered": 1, 00:23:16.101 "num_base_bdevs_operational": 4, 00:23:16.101 "base_bdevs_list": [ 00:23:16.101 { 00:23:16.101 "name": "BaseBdev1", 00:23:16.101 "uuid": "5e73dbf3-c5ef-40d9-b64b-d9187a000360", 00:23:16.101 "is_configured": true, 00:23:16.101 "data_offset": 2048, 00:23:16.101 "data_size": 63488 00:23:16.101 }, 00:23:16.101 { 00:23:16.101 "name": "BaseBdev2", 00:23:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.101 "is_configured": false, 00:23:16.101 "data_offset": 0, 00:23:16.101 "data_size": 0 00:23:16.101 }, 00:23:16.101 { 00:23:16.101 "name": "BaseBdev3", 00:23:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.101 "is_configured": false, 00:23:16.101 "data_offset": 0, 00:23:16.101 "data_size": 0 00:23:16.101 }, 00:23:16.101 { 00:23:16.101 "name": "BaseBdev4", 00:23:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.101 "is_configured": false, 00:23:16.101 "data_offset": 0, 00:23:16.101 "data_size": 0 00:23:16.101 } 00:23:16.101 ] 00:23:16.101 }' 00:23:16.101 02:46:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.101 02:46:41 -- common/autotest_common.sh@10 -- # set +x 00:23:16.668 02:46:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:16.926 [2024-07-11 02:46:41.911642] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.926 [2024-07-11 02:46:41.911716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005a80 name Existed_Raid, state configuring 00:23:16.926 02:46:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:16.926 02:46:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:17.188 02:46:42 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:17.446 BaseBdev1 00:23:17.446 02:46:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:17.446 02:46:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:17.446 02:46:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:17.446 02:46:42 -- common/autotest_common.sh@889 -- # local i 00:23:17.446 02:46:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:17.446 02:46:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:17.446 02:46:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.446 02:46:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.703 [ 00:23:17.703 { 00:23:17.703 "name": "BaseBdev1", 00:23:17.703 "aliases": [ 00:23:17.703 
"69f9e870-5183-4952-8c61-7a890eb041bc" 00:23:17.703 ], 00:23:17.703 "product_name": "Malloc disk", 00:23:17.703 "block_size": 512, 00:23:17.703 "num_blocks": 65536, 00:23:17.703 "uuid": "69f9e870-5183-4952-8c61-7a890eb041bc", 00:23:17.703 "assigned_rate_limits": { 00:23:17.703 "rw_ios_per_sec": 0, 00:23:17.703 "rw_mbytes_per_sec": 0, 00:23:17.703 "r_mbytes_per_sec": 0, 00:23:17.703 "w_mbytes_per_sec": 0 00:23:17.703 }, 00:23:17.703 "claimed": false, 00:23:17.703 "zoned": false, 00:23:17.703 "supported_io_types": { 00:23:17.703 "read": true, 00:23:17.703 "write": true, 00:23:17.703 "unmap": true, 00:23:17.703 "write_zeroes": true, 00:23:17.703 "flush": true, 00:23:17.703 "reset": true, 00:23:17.703 "compare": false, 00:23:17.703 "compare_and_write": false, 00:23:17.703 "abort": true, 00:23:17.703 "nvme_admin": false, 00:23:17.703 "nvme_io": false 00:23:17.703 }, 00:23:17.703 "memory_domains": [ 00:23:17.703 { 00:23:17.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.703 "dma_device_type": 2 00:23:17.703 } 00:23:17.703 ], 00:23:17.703 "driver_specific": {} 00:23:17.703 } 00:23:17.703 ] 00:23:17.703 02:46:42 -- common/autotest_common.sh@895 -- # return 0 00:23:17.703 02:46:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:17.962 [2024-07-11 02:46:42.909297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.962 [2024-07-11 02:46:42.911457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.962 [2024-07-11 02:46:42.911543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.962 [2024-07-11 02:46:42.911564] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.962 [2024-07-11 02:46:42.911588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.962 [2024-07-11 02:46:42.911596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:17.962 [2024-07-11 02:46:42.911687] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.962 02:46:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.220 02:46:43 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:23:18.221 "name": "Existed_Raid", 00:23:18.221 "uuid": "1a663a3d-8174-4ced-b236-7ee57a47882d", 00:23:18.221 "strip_size_kb": 64, 00:23:18.221 "state": "configuring", 00:23:18.221 "raid_level": "raid5f", 00:23:18.221 "superblock": true, 00:23:18.221 "num_base_bdevs": 4, 00:23:18.221 "num_base_bdevs_discovered": 1, 00:23:18.221 "num_base_bdevs_operational": 4, 00:23:18.221 "base_bdevs_list": [ 00:23:18.221 { 00:23:18.221 "name": "BaseBdev1", 00:23:18.221 "uuid": "69f9e870-5183-4952-8c61-7a890eb041bc", 00:23:18.221 "is_configured": true, 00:23:18.221 "data_offset": 2048, 00:23:18.221 "data_size": 63488 00:23:18.221 }, 00:23:18.221 { 00:23:18.221 "name": "BaseBdev2", 00:23:18.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.221 "is_configured": false, 00:23:18.221 "data_offset": 0, 00:23:18.221 "data_size": 0 00:23:18.221 }, 00:23:18.221 { 00:23:18.221 "name": "BaseBdev3", 00:23:18.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.221 "is_configured": false, 00:23:18.221 "data_offset": 0, 00:23:18.221 "data_size": 0 00:23:18.221 }, 00:23:18.221 { 00:23:18.221 "name": "BaseBdev4", 00:23:18.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.221 "is_configured": false, 00:23:18.221 "data_offset": 0, 00:23:18.221 "data_size": 0 00:23:18.221 } 00:23:18.221 ] 00:23:18.221 }' 00:23:18.221 02:46:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.221 02:46:43 -- common/autotest_common.sh@10 -- # set +x 00:23:18.785 02:46:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:19.043 [2024-07-11 02:46:43.998706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.043 BaseBdev2 00:23:19.043 02:46:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:19.043 02:46:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:19.043 02:46:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:19.043 02:46:44 -- common/autotest_common.sh@889 -- # local i 00:23:19.043 02:46:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:19.043 02:46:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:19.043 02:46:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:19.301 02:46:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:19.559 [ 00:23:19.559 { 00:23:19.559 "name": "BaseBdev2", 00:23:19.559 "aliases": [ 00:23:19.559 "bcf3b151-1e51-425e-a4ed-e24b1f04c910" 00:23:19.559 ], 00:23:19.559 "product_name": "Malloc disk", 00:23:19.559 "block_size": 512, 00:23:19.559 "num_blocks": 65536, 00:23:19.559 "uuid": "bcf3b151-1e51-425e-a4ed-e24b1f04c910", 00:23:19.559 "assigned_rate_limits": { 00:23:19.559 "rw_ios_per_sec": 0, 00:23:19.559 "rw_mbytes_per_sec": 0, 00:23:19.559 "r_mbytes_per_sec": 0, 00:23:19.559 "w_mbytes_per_sec": 0 00:23:19.559 }, 00:23:19.559 "claimed": true, 00:23:19.559 "claim_type": "exclusive_write", 00:23:19.559 "zoned": false, 00:23:19.559 "supported_io_types": { 00:23:19.559 "read": true, 00:23:19.559 "write": true, 00:23:19.559 "unmap": true, 00:23:19.559 "write_zeroes": true, 00:23:19.559 "flush": true, 00:23:19.559 "reset": true, 00:23:19.559 "compare": false, 00:23:19.559 "compare_and_write": false, 00:23:19.559 "abort": true, 00:23:19.559 "nvme_admin": false, 00:23:19.559 
"nvme_io": false 00:23:19.559 }, 00:23:19.559 "memory_domains": [ 00:23:19.559 { 00:23:19.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.559 "dma_device_type": 2 00:23:19.559 } 00:23:19.559 ], 00:23:19.559 "driver_specific": {} 00:23:19.559 } 00:23:19.559 ] 00:23:19.559 02:46:44 -- common/autotest_common.sh@895 -- # return 0 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.559 02:46:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.817 02:46:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.817 "name": "Existed_Raid", 00:23:19.817 "uuid": "1a663a3d-8174-4ced-b236-7ee57a47882d", 00:23:19.817 "strip_size_kb": 64, 00:23:19.817 "state": "configuring", 00:23:19.817 "raid_level": "raid5f", 00:23:19.817 "superblock": true, 00:23:19.817 "num_base_bdevs": 4, 00:23:19.817 "num_base_bdevs_discovered": 2, 00:23:19.817 "num_base_bdevs_operational": 4, 00:23:19.817 "base_bdevs_list": [ 00:23:19.817 { 00:23:19.817 "name": "BaseBdev1", 00:23:19.817 "uuid": "69f9e870-5183-4952-8c61-7a890eb041bc", 00:23:19.817 "is_configured": true, 00:23:19.817 "data_offset": 2048, 00:23:19.817 "data_size": 63488 00:23:19.817 }, 00:23:19.817 { 00:23:19.817 "name": "BaseBdev2", 00:23:19.817 "uuid": "bcf3b151-1e51-425e-a4ed-e24b1f04c910", 00:23:19.817 "is_configured": true, 00:23:19.817 "data_offset": 2048, 00:23:19.817 "data_size": 63488 00:23:19.817 }, 00:23:19.817 { 00:23:19.817 "name": "BaseBdev3", 00:23:19.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.817 "is_configured": false, 00:23:19.817 "data_offset": 0, 00:23:19.817 "data_size": 0 00:23:19.817 }, 00:23:19.817 { 00:23:19.817 "name": "BaseBdev4", 00:23:19.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.818 "is_configured": false, 00:23:19.818 "data_offset": 0, 00:23:19.818 "data_size": 0 00:23:19.818 } 00:23:19.818 ] 00:23:19.818 }' 00:23:19.818 02:46:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.818 02:46:44 -- common/autotest_common.sh@10 -- # set +x 00:23:20.384 02:46:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:20.642 [2024-07-11 02:46:45.551126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.642 BaseBdev3 00:23:20.642 02:46:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:20.642 02:46:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:20.642 02:46:45 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:20.642 02:46:45 -- common/autotest_common.sh@889 -- # local i 00:23:20.642 02:46:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:20.642 02:46:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:20.642 02:46:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.900 02:46:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:20.900 [ 00:23:20.900 { 00:23:20.900 "name": "BaseBdev3", 00:23:20.900 "aliases": [ 00:23:20.900 "d59463e4-22ce-41f4-a310-e3c063948e37" 00:23:20.900 ], 00:23:20.900 "product_name": "Malloc disk", 00:23:20.900 "block_size": 512, 00:23:20.900 "num_blocks": 65536, 00:23:20.900 "uuid": "d59463e4-22ce-41f4-a310-e3c063948e37", 00:23:20.901 "assigned_rate_limits": { 00:23:20.901 "rw_ios_per_sec": 0, 00:23:20.901 "rw_mbytes_per_sec": 0, 00:23:20.901 "r_mbytes_per_sec": 0, 00:23:20.901 "w_mbytes_per_sec": 0 00:23:20.901 }, 00:23:20.901 "claimed": true, 00:23:20.901 "claim_type": "exclusive_write", 00:23:20.901 "zoned": false, 00:23:20.901 "supported_io_types": { 00:23:20.901 "read": true, 00:23:20.901 "write": true, 00:23:20.901 "unmap": true, 00:23:20.901 "write_zeroes": true, 00:23:20.901 "flush": true, 00:23:20.901 "reset": true, 00:23:20.901 "compare": false, 00:23:20.901 "compare_and_write": false, 00:23:20.901 "abort": true, 00:23:20.901 "nvme_admin": false, 00:23:20.901 "nvme_io": false 00:23:20.901 }, 00:23:20.901 "memory_domains": [ 00:23:20.901 { 00:23:20.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.901 "dma_device_type": 2 00:23:20.901 } 00:23:20.901 ], 00:23:20.901 "driver_specific": {} 00:23:20.901 } 00:23:20.901 ] 00:23:20.901 02:46:45 -- common/autotest_common.sh@895 -- # return 0 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.901 02:46:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.160 02:46:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.160 "name": "Existed_Raid", 00:23:21.160 "uuid": "1a663a3d-8174-4ced-b236-7ee57a47882d", 00:23:21.160 "strip_size_kb": 64, 00:23:21.160 "state": "configuring", 00:23:21.160 "raid_level": "raid5f", 00:23:21.160 "superblock": true, 00:23:21.160 "num_base_bdevs": 4, 00:23:21.160 "num_base_bdevs_discovered": 3, 00:23:21.160 "num_base_bdevs_operational": 4, 
00:23:21.160 "base_bdevs_list": [ 00:23:21.160 { 00:23:21.160 "name": "BaseBdev1", 00:23:21.160 "uuid": "69f9e870-5183-4952-8c61-7a890eb041bc", 00:23:21.160 "is_configured": true, 00:23:21.160 "data_offset": 2048, 00:23:21.160 "data_size": 63488 00:23:21.160 }, 00:23:21.160 { 00:23:21.160 "name": "BaseBdev2", 00:23:21.160 "uuid": "bcf3b151-1e51-425e-a4ed-e24b1f04c910", 00:23:21.160 "is_configured": true, 00:23:21.160 "data_offset": 2048, 00:23:21.160 "data_size": 63488 00:23:21.160 }, 00:23:21.160 { 00:23:21.160 "name": "BaseBdev3", 00:23:21.160 "uuid": "d59463e4-22ce-41f4-a310-e3c063948e37", 00:23:21.160 "is_configured": true, 00:23:21.160 "data_offset": 2048, 00:23:21.160 "data_size": 63488 00:23:21.160 }, 00:23:21.160 { 00:23:21.160 "name": "BaseBdev4", 00:23:21.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.160 "is_configured": false, 00:23:21.160 "data_offset": 0, 00:23:21.160 "data_size": 0 00:23:21.160 } 00:23:21.160 ] 00:23:21.160 }' 00:23:21.160 02:46:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.160 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:23:21.728 02:46:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:21.985 [2024-07-11 02:46:46.920332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.985 [2024-07-11 02:46:46.920658] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006980 00:23:21.985 [2024-07-11 02:46:46.920681] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:21.985 BaseBdev4 00:23:21.985 [2024-07-11 02:46:46.920822] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:21.985 [2024-07-11 02:46:46.921620] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006980 00:23:21.985 [2024-07-11 02:46:46.921661] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006980 00:23:21.985 [2024-07-11 02:46:46.921816] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.985 02:46:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:21.985 02:46:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:21.985 02:46:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:21.985 02:46:46 -- common/autotest_common.sh@889 -- # local i 00:23:21.985 02:46:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:21.985 02:46:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:21.985 02:46:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:22.243 02:46:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:22.243 [ 00:23:22.243 { 00:23:22.243 "name": "BaseBdev4", 00:23:22.243 "aliases": [ 00:23:22.243 "469f7389-d46e-465a-8df4-6204a0864e19" 00:23:22.243 ], 00:23:22.243 "product_name": "Malloc disk", 00:23:22.243 "block_size": 512, 00:23:22.243 "num_blocks": 65536, 00:23:22.243 "uuid": "469f7389-d46e-465a-8df4-6204a0864e19", 00:23:22.243 "assigned_rate_limits": { 00:23:22.243 "rw_ios_per_sec": 0, 00:23:22.243 "rw_mbytes_per_sec": 0, 00:23:22.243 "r_mbytes_per_sec": 0, 00:23:22.243 "w_mbytes_per_sec": 0 00:23:22.243 }, 00:23:22.243 "claimed": true, 00:23:22.243 "claim_type": 
"exclusive_write", 00:23:22.243 "zoned": false, 00:23:22.243 "supported_io_types": { 00:23:22.243 "read": true, 00:23:22.243 "write": true, 00:23:22.243 "unmap": true, 00:23:22.243 "write_zeroes": true, 00:23:22.243 "flush": true, 00:23:22.243 "reset": true, 00:23:22.243 "compare": false, 00:23:22.243 "compare_and_write": false, 00:23:22.243 "abort": true, 00:23:22.243 "nvme_admin": false, 00:23:22.243 "nvme_io": false 00:23:22.243 }, 00:23:22.243 "memory_domains": [ 00:23:22.243 { 00:23:22.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.243 "dma_device_type": 2 00:23:22.243 } 00:23:22.243 ], 00:23:22.243 "driver_specific": {} 00:23:22.243 } 00:23:22.243 ] 00:23:22.243 02:46:47 -- common/autotest_common.sh@895 -- # return 0 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.243 02:46:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.521 02:46:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.521 "name": "Existed_Raid", 00:23:22.521 "uuid": "1a663a3d-8174-4ced-b236-7ee57a47882d", 00:23:22.521 "strip_size_kb": 64, 00:23:22.521 "state": "online", 00:23:22.521 "raid_level": "raid5f", 00:23:22.521 "superblock": true, 00:23:22.521 "num_base_bdevs": 4, 00:23:22.521 "num_base_bdevs_discovered": 4, 00:23:22.521 "num_base_bdevs_operational": 4, 00:23:22.521 "base_bdevs_list": [ 00:23:22.521 { 00:23:22.521 "name": "BaseBdev1", 00:23:22.521 "uuid": "69f9e870-5183-4952-8c61-7a890eb041bc", 00:23:22.521 "is_configured": true, 00:23:22.521 "data_offset": 2048, 00:23:22.521 "data_size": 63488 00:23:22.521 }, 00:23:22.521 { 00:23:22.521 "name": "BaseBdev2", 00:23:22.521 "uuid": "bcf3b151-1e51-425e-a4ed-e24b1f04c910", 00:23:22.521 "is_configured": true, 00:23:22.521 "data_offset": 2048, 00:23:22.521 "data_size": 63488 00:23:22.521 }, 00:23:22.521 { 00:23:22.521 "name": "BaseBdev3", 00:23:22.521 "uuid": "d59463e4-22ce-41f4-a310-e3c063948e37", 00:23:22.521 "is_configured": true, 00:23:22.521 "data_offset": 2048, 00:23:22.521 "data_size": 63488 00:23:22.521 }, 00:23:22.521 { 00:23:22.521 "name": "BaseBdev4", 00:23:22.521 "uuid": "469f7389-d46e-465a-8df4-6204a0864e19", 00:23:22.521 "is_configured": true, 00:23:22.521 "data_offset": 2048, 00:23:22.521 "data_size": 63488 00:23:22.521 } 00:23:22.521 ] 00:23:22.521 }' 00:23:22.521 02:46:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.521 02:46:47 -- common/autotest_common.sh@10 -- # set +x 00:23:23.088 02:46:48 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:23.347 [2024-07-11 02:46:48.389160] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.347 02:46:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.606 02:46:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.606 "name": "Existed_Raid", 00:23:23.606 "uuid": "1a663a3d-8174-4ced-b236-7ee57a47882d", 00:23:23.606 "strip_size_kb": 64, 00:23:23.606 "state": "online", 00:23:23.606 "raid_level": "raid5f", 00:23:23.606 "superblock": true, 00:23:23.606 "num_base_bdevs": 4, 00:23:23.606 "num_base_bdevs_discovered": 3, 00:23:23.606 "num_base_bdevs_operational": 3, 00:23:23.606 "base_bdevs_list": [ 00:23:23.606 { 00:23:23.606 "name": null, 00:23:23.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.606 "is_configured": false, 00:23:23.606 "data_offset": 2048, 00:23:23.606 "data_size": 63488 00:23:23.606 }, 00:23:23.606 { 00:23:23.606 "name": "BaseBdev2", 00:23:23.606 "uuid": "bcf3b151-1e51-425e-a4ed-e24b1f04c910", 00:23:23.606 "is_configured": true, 00:23:23.606 "data_offset": 2048, 00:23:23.606 "data_size": 63488 00:23:23.606 }, 00:23:23.606 { 00:23:23.606 "name": "BaseBdev3", 00:23:23.606 "uuid": "d59463e4-22ce-41f4-a310-e3c063948e37", 00:23:23.606 "is_configured": true, 00:23:23.606 "data_offset": 2048, 00:23:23.606 "data_size": 63488 00:23:23.606 }, 00:23:23.606 { 00:23:23.606 "name": "BaseBdev4", 00:23:23.606 "uuid": "469f7389-d46e-465a-8df4-6204a0864e19", 00:23:23.606 "is_configured": true, 00:23:23.606 "data_offset": 2048, 00:23:23.606 "data_size": 63488 00:23:23.606 } 00:23:23.606 ] 00:23:23.606 }' 00:23:23.606 02:46:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.606 02:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.541 02:46:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:24.800 [2024-07-11 02:46:49.731662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:24.800 [2024-07-11 02:46:49.731697] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.800 [2024-07-11 02:46:49.731796] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.800 02:46:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:24.800 02:46:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.800 02:46:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.800 02:46:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.059 02:46:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.059 02:46:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.059 02:46:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:25.318 [2024-07-11 02:46:50.249438] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:25.318 02:46:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.318 02:46:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.318 02:46:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.318 02:46:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.577 02:46:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.577 02:46:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.577 02:46:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:25.835 [2024-07-11 02:46:50.694964] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:25.835 [2024-07-11 02:46:50.695043] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state offline 00:23:25.835 02:46:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.835 02:46:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.835 02:46:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.835 02:46:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:26.094 02:46:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:26.094 02:46:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:26.094 02:46:50 -- bdev/bdev_raid.sh@287 -- # killprocess 142946 00:23:26.094 02:46:50 -- common/autotest_common.sh@926 -- # '[' -z 142946 ']' 00:23:26.094 02:46:50 -- common/autotest_common.sh@930 -- # kill -0 142946 00:23:26.094 02:46:50 -- common/autotest_common.sh@931 -- # uname 00:23:26.094 02:46:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:26.094 02:46:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142946 00:23:26.095 killing process with pid 142946 00:23:26.095 02:46:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:26.095 02:46:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:26.095 02:46:50 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 142946' 00:23:26.095 02:46:50 -- common/autotest_common.sh@945 -- # kill 142946 00:23:26.095 02:46:50 -- common/autotest_common.sh@950 -- # wait 142946 00:23:26.095 [2024-07-11 02:46:50.971848] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.095 [2024-07-11 02:46:50.971988] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.357 ************************************ 00:23:26.357 END TEST raid5f_state_function_test_sb 00:23:26.357 ************************************ 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:26.357 00:23:26.357 real 0m13.230s 00:23:26.357 user 0m24.641s 00:23:26.357 sys 0m1.629s 00:23:26.357 02:46:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.357 02:46:51 -- common/autotest_common.sh@10 -- # set +x 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:26.357 02:46:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:26.357 02:46:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:26.357 02:46:51 -- common/autotest_common.sh@10 -- # set +x 00:23:26.357 ************************************ 00:23:26.357 START TEST raid5f_superblock_test 00:23:26.357 ************************************ 00:23:26.357 02:46:51 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=143413 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 143413 /var/tmp/spdk-raid.sock 00:23:26.357 02:46:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:26.357 02:46:51 -- common/autotest_common.sh@819 -- # '[' -z 143413 ']' 00:23:26.357 02:46:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.357 02:46:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:26.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.357 02:46:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
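For readers reproducing this step outside the harness: a minimal sketch of how the trace above brings up the RPC target before each test, assuming the same repo layout (/home/vagrant/spdk_repo/spdk) and socket path. The bdev_svc binary, its -r and -L flags, and rpc.py are taken verbatim from the log; the rpc_get_methods polling loop is an illustrative stand-in for the harness's waitforlisten helper.

  #!/usr/bin/env bash
  # Sketch: start bdev_svc on a private RPC socket, as the harness does above.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock

  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
  raid_pid=$!
  echo "Process raid pid: $raid_pid"

  # Illustrative stand-in for waitforlisten: poll until the UNIX socket
  # answers a trivial RPC (rpc_get_methods as probe is an assumption,
  # not taken from the trace).
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done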
00:23:26.357 02:46:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:26.357 02:46:51 -- common/autotest_common.sh@10 -- # set +x 00:23:26.357 [2024-07-11 02:46:51.309492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:26.357 [2024-07-11 02:46:51.309774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143413 ] 00:23:26.357 [2024-07-11 02:46:51.444012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.616 [2024-07-11 02:46:51.507834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.616 [2024-07-11 02:46:51.559294] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.183 02:46:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:27.183 02:46:52 -- common/autotest_common.sh@852 -- # return 0 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.183 02:46:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:27.441 malloc1 00:23:27.441 02:46:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.700 [2024-07-11 02:46:52.645903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.700 [2024-07-11 02:46:52.646236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.700 [2024-07-11 02:46:52.646408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:23:27.700 [2024-07-11 02:46:52.646569] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.700 [2024-07-11 02:46:52.649194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.700 [2024-07-11 02:46:52.649388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:27.700 pt1 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.700 02:46:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:27.958 malloc2 00:23:27.958 02:46:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.216 [2024-07-11 02:46:53.084129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.216 [2024-07-11 02:46:53.084393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.216 [2024-07-11 02:46:53.084548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:28.216 [2024-07-11 02:46:53.084715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.216 [2024-07-11 02:46:53.086822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.216 [2024-07-11 02:46:53.087001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.216 pt2 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.216 02:46:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:28.216 malloc3 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.474 [2024-07-11 02:46:53.502412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.474 [2024-07-11 02:46:53.502678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.474 [2024-07-11 02:46:53.502761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:28.474 [2024-07-11 02:46:53.503029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.474 [2024-07-11 02:46:53.505236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.474 [2024-07-11 02:46:53.505422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.474 pt3 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:23:28.474 02:46:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:28.475 02:46:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.475 02:46:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.475 02:46:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.475 02:46:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:28.732 malloc4 00:23:28.732 02:46:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:28.990 [2024-07-11 02:46:53.888637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:28.990 [2024-07-11 02:46:53.888880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.990 [2024-07-11 02:46:53.889030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:28.990 [2024-07-11 02:46:53.889177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.990 [2024-07-11 02:46:53.891304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.990 [2024-07-11 02:46:53.891479] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:28.990 pt4 00:23:28.990 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.990 02:46:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.990 02:46:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:28.990 [2024-07-11 02:46:54.080780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:29.248 [2024-07-11 02:46:54.082870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.248 [2024-07-11 02:46:54.083052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:29.248 [2024-07-11 02:46:54.083145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:29.248 [2024-07-11 02:46:54.083485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:23:29.248 [2024-07-11 02:46:54.083589] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:29.248 [2024-07-11 02:46:54.083839] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:23:29.248 [2024-07-11 02:46:54.084781] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:23:29.248 [2024-07-11 02:46:54.084920] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:23:29.248 [2024-07-11 02:46:54.085250] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
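The setup the superblock test just walked through condenses into a few RPC calls; this sketch uses only commands that appear verbatim in the trace (32 MB malloc bdevs with 512-byte blocks, matching num_blocks 65536 in the bdev dumps above). The $RPC shorthand is an assumption for brevity.

  # Shorthand for the rpc.py invocation used throughout the trace (assumption).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3 4; do
    # 32 MB malloc bdev with 512-byte blocks
    $RPC bdev_malloc_create 32 512 -b malloc$i
    # passthru wrapper with the fixed per-bdev UUID seen in the trace
    $RPC bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
  done

  # raid5f across the passthru bdevs: 64 KiB strip (-z 64), superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s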
00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.248 02:46:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.248 "name": "raid_bdev1", 00:23:29.248 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:29.248 "strip_size_kb": 64, 00:23:29.248 "state": "online", 00:23:29.248 "raid_level": "raid5f", 00:23:29.248 "superblock": true, 00:23:29.248 "num_base_bdevs": 4, 00:23:29.248 "num_base_bdevs_discovered": 4, 00:23:29.248 "num_base_bdevs_operational": 4, 00:23:29.248 "base_bdevs_list": [ 00:23:29.248 { 00:23:29.248 "name": "pt1", 00:23:29.248 "uuid": "a1ffa8b5-3cb0-582e-9f7d-0e8be347c5b8", 00:23:29.248 "is_configured": true, 00:23:29.248 "data_offset": 2048, 00:23:29.248 "data_size": 63488 00:23:29.248 }, 00:23:29.248 { 00:23:29.248 "name": "pt2", 00:23:29.248 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:29.248 "is_configured": true, 00:23:29.248 "data_offset": 2048, 00:23:29.248 "data_size": 63488 00:23:29.248 }, 00:23:29.248 { 00:23:29.248 "name": "pt3", 00:23:29.248 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:29.248 "is_configured": true, 00:23:29.248 "data_offset": 2048, 00:23:29.248 "data_size": 63488 00:23:29.248 }, 00:23:29.248 { 00:23:29.248 "name": "pt4", 00:23:29.248 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:29.248 "is_configured": true, 00:23:29.248 "data_offset": 2048, 00:23:29.248 "data_size": 63488 00:23:29.248 } 00:23:29.249 ] 00:23:29.249 }' 00:23:29.249 02:46:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.249 02:46:54 -- common/autotest_common.sh@10 -- # set +x 00:23:30.183 02:46:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.183 02:46:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:30.183 [2024-07-11 02:46:55.113589] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.183 02:46:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3a69b0f5-e834-4882-aef8-76138114fb25 00:23:30.183 02:46:55 -- bdev/bdev_raid.sh@380 -- # '[' -z 3a69b0f5-e834-4882-aef8-76138114fb25 ']' 00:23:30.183 02:46:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.440 [2024-07-11 02:46:55.345435] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.440 [2024-07-11 02:46:55.345606] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.440 [2024-07-11 02:46:55.345890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.440 [2024-07-11 02:46:55.346152] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.440 [2024-07-11 02:46:55.346270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:23:30.440 02:46:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:30.440 02:46:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
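The state check above boils down to two queries; a sketch of what verify_raid_bdev_state and the uuid capture do, with both jq filters copied verbatim from the trace and the $RPC shorthand again an assumption.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Same query verify_raid_bdev_state runs: dump all raid bdevs, filter by name
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # Capture the array uuid the way the test does before tearing it down
  raid_bdev_uuid=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  echo "raid_bdev_uuid=$raid_bdev_uuid"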
00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:30.697 02:46:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:30.955 02:46:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:30.955 02:46:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:31.212 02:46:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.213 02:46:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:31.471 02:46:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:31.471 02:46:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:31.471 02:46:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:31.471 02:46:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:31.471 02:46:56 -- common/autotest_common.sh@640 -- # local es=0 00:23:31.471 02:46:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:31.471 02:46:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.471 02:46:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.471 02:46:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.471 02:46:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.471 02:46:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.471 02:46:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.471 02:46:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.471 02:46:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:31.471 02:46:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:31.729 [2024-07-11 02:46:56.681664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:31.729 [2024-07-11 02:46:56.683450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:31.729 [2024-07-11 02:46:56.683645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:31.729 [2024-07-11 02:46:56.683728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:31.729 [2024-07-11 02:46:56.683880] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:31.729 [2024-07-11 02:46:56.684078] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:31.729 [2024-07-11 02:46:56.684223] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:31.729 
[2024-07-11 02:46:56.684374] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:23:31.729 [2024-07-11 02:46:56.684512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.729 [2024-07-11 02:46:56.684608] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:23:31.729 request: 00:23:31.729 { 00:23:31.729 "name": "raid_bdev1", 00:23:31.729 "raid_level": "raid5f", 00:23:31.729 "base_bdevs": [ 00:23:31.729 "malloc1", 00:23:31.730 "malloc2", 00:23:31.730 "malloc3", 00:23:31.730 "malloc4" 00:23:31.730 ], 00:23:31.730 "superblock": false, 00:23:31.730 "strip_size_kb": 64, 00:23:31.730 "method": "bdev_raid_create", 00:23:31.730 "req_id": 1 00:23:31.730 } 00:23:31.730 Got JSON-RPC error response 00:23:31.730 response: 00:23:31.730 { 00:23:31.730 "code": -17, 00:23:31.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:31.730 } 00:23:31.730 02:46:56 -- common/autotest_common.sh@643 -- # es=1 00:23:31.730 02:46:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:31.730 02:46:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:31.730 02:46:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:31.730 02:46:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:31.730 02:46:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.988 02:46:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:31.988 02:46:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:31.988 02:46:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.988 [2024-07-11 02:46:57.057704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.988 [2024-07-11 02:46:57.057930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.988 [2024-07-11 02:46:57.058024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:31.988 [2024-07-11 02:46:57.058183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.988 [2024-07-11 02:46:57.060306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.989 [2024-07-11 02:46:57.060501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.989 [2024-07-11 02:46:57.060713] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:31.989 [2024-07-11 02:46:57.060870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.989 pt1 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.989 02:46:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.247 02:46:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.247 "name": "raid_bdev1", 00:23:32.247 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:32.247 "strip_size_kb": 64, 00:23:32.247 "state": "configuring", 00:23:32.247 "raid_level": "raid5f", 00:23:32.247 "superblock": true, 00:23:32.247 "num_base_bdevs": 4, 00:23:32.247 "num_base_bdevs_discovered": 1, 00:23:32.247 "num_base_bdevs_operational": 4, 00:23:32.247 "base_bdevs_list": [ 00:23:32.247 { 00:23:32.247 "name": "pt1", 00:23:32.247 "uuid": "a1ffa8b5-3cb0-582e-9f7d-0e8be347c5b8", 00:23:32.247 "is_configured": true, 00:23:32.247 "data_offset": 2048, 00:23:32.247 "data_size": 63488 00:23:32.247 }, 00:23:32.247 { 00:23:32.247 "name": null, 00:23:32.247 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:32.247 "is_configured": false, 00:23:32.247 "data_offset": 2048, 00:23:32.247 "data_size": 63488 00:23:32.247 }, 00:23:32.247 { 00:23:32.247 "name": null, 00:23:32.247 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:32.247 "is_configured": false, 00:23:32.247 "data_offset": 2048, 00:23:32.247 "data_size": 63488 00:23:32.247 }, 00:23:32.247 { 00:23:32.247 "name": null, 00:23:32.247 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:32.247 "is_configured": false, 00:23:32.247 "data_offset": 2048, 00:23:32.247 "data_size": 63488 00:23:32.247 } 00:23:32.247 ] 00:23:32.247 }' 00:23:32.247 02:46:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.247 02:46:57 -- common/autotest_common.sh@10 -- # set +x 00:23:32.814 02:46:57 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:23:32.814 02:46:57 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.073 [2024-07-11 02:46:58.101962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.073 [2024-07-11 02:46:58.102236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.073 [2024-07-11 02:46:58.102322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:33.073 [2024-07-11 02:46:58.102517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.073 [2024-07-11 02:46:58.103031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.073 [2024-07-11 02:46:58.103206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.073 [2024-07-11 02:46:58.103402] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:33.073 [2024-07-11 02:46:58.103541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.073 pt2 00:23:33.073 02:46:58 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:33.331 [2024-07-11 02:46:58.298059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.331 02:46:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.628 02:46:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.628 "name": "raid_bdev1", 00:23:33.628 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:33.628 "strip_size_kb": 64, 00:23:33.628 "state": "configuring", 00:23:33.628 "raid_level": "raid5f", 00:23:33.628 "superblock": true, 00:23:33.628 "num_base_bdevs": 4, 00:23:33.628 "num_base_bdevs_discovered": 1, 00:23:33.628 "num_base_bdevs_operational": 4, 00:23:33.628 "base_bdevs_list": [ 00:23:33.628 { 00:23:33.628 "name": "pt1", 00:23:33.628 "uuid": "a1ffa8b5-3cb0-582e-9f7d-0e8be347c5b8", 00:23:33.628 "is_configured": true, 00:23:33.628 "data_offset": 2048, 00:23:33.628 "data_size": 63488 00:23:33.628 }, 00:23:33.628 { 00:23:33.628 "name": null, 00:23:33.628 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:33.628 "is_configured": false, 00:23:33.628 "data_offset": 2048, 00:23:33.628 "data_size": 63488 00:23:33.628 }, 00:23:33.628 { 00:23:33.628 "name": null, 00:23:33.628 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:33.628 "is_configured": false, 00:23:33.628 "data_offset": 2048, 00:23:33.628 "data_size": 63488 00:23:33.628 }, 00:23:33.628 { 00:23:33.628 "name": null, 00:23:33.628 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:33.628 "is_configured": false, 00:23:33.628 "data_offset": 2048, 00:23:33.628 "data_size": 63488 00:23:33.628 } 00:23:33.628 ] 00:23:33.628 }' 00:23:33.628 02:46:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.628 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:23:34.199 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:34.199 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.199 02:46:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:34.457 [2024-07-11 02:46:59.330250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:34.457 [2024-07-11 02:46:59.330520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.457 [2024-07-11 02:46:59.330679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:34.457 [2024-07-11 02:46:59.330805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.457 [2024-07-11 02:46:59.331307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.457 [2024-07-11 02:46:59.331499] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:34.457 [2024-07-11 02:46:59.331736] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:23:34.457 [2024-07-11 02:46:59.331881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.457 pt2 00:23:34.457 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:34.457 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.457 02:46:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:34.716 [2024-07-11 02:46:59.550353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:34.716 [2024-07-11 02:46:59.550591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.716 [2024-07-11 02:46:59.550752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:34.716 [2024-07-11 02:46:59.550901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.716 [2024-07-11 02:46:59.551478] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.716 [2024-07-11 02:46:59.551696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:34.716 [2024-07-11 02:46:59.551909] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:34.716 [2024-07-11 02:46:59.552042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:34.716 pt3 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:34.716 [2024-07-11 02:46:59.758377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:34.716 [2024-07-11 02:46:59.758575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.716 [2024-07-11 02:46:59.758643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:34.716 [2024-07-11 02:46:59.758851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.716 [2024-07-11 02:46:59.759368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.716 [2024-07-11 02:46:59.759577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:34.716 [2024-07-11 02:46:59.759751] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:34.716 [2024-07-11 02:46:59.759870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:34.716 [2024-07-11 02:46:59.760262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:23:34.716 [2024-07-11 02:46:59.760504] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:34.716 [2024-07-11 02:46:59.760863] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:23:34.716 [2024-07-11 02:46:59.762534] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:23:34.716 [2024-07-11 02:46:59.762757] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:23:34.716 [2024-07-11 02:46:59.763233] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:23:34.716 pt4 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.716 02:46:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.717 02:46:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.975 02:46:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.975 "name": "raid_bdev1", 00:23:34.975 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:34.975 "strip_size_kb": 64, 00:23:34.975 "state": "online", 00:23:34.975 "raid_level": "raid5f", 00:23:34.975 "superblock": true, 00:23:34.975 "num_base_bdevs": 4, 00:23:34.975 "num_base_bdevs_discovered": 4, 00:23:34.975 "num_base_bdevs_operational": 4, 00:23:34.975 "base_bdevs_list": [ 00:23:34.975 { 00:23:34.975 "name": "pt1", 00:23:34.975 "uuid": "a1ffa8b5-3cb0-582e-9f7d-0e8be347c5b8", 00:23:34.976 "is_configured": true, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 }, 00:23:34.976 { 00:23:34.976 "name": "pt2", 00:23:34.976 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:34.976 "is_configured": true, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 }, 00:23:34.976 { 00:23:34.976 "name": "pt3", 00:23:34.976 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:34.976 "is_configured": true, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 }, 00:23:34.976 { 00:23:34.976 "name": "pt4", 00:23:34.976 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:34.976 "is_configured": true, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 } 00:23:34.976 ] 00:23:34.976 }' 00:23:34.976 02:46:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.976 02:46:59 -- common/autotest_common.sh@10 -- # set +x 00:23:35.542 02:47:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:35.542 02:47:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:35.801 [2024-07-11 02:47:00.763397] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.801 02:47:00 -- bdev/bdev_raid.sh@430 -- # '[' 3a69b0f5-e834-4882-aef8-76138114fb25 '!=' 3a69b0f5-e834-4882-aef8-76138114fb25 ']' 00:23:35.801 02:47:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:35.801 02:47:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:35.801 02:47:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:35.801 02:47:00 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:23:36.060 [2024-07-11 02:47:00.955338] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.060 02:47:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.319 02:47:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.319 "name": "raid_bdev1", 00:23:36.319 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:36.319 "strip_size_kb": 64, 00:23:36.319 "state": "online", 00:23:36.319 "raid_level": "raid5f", 00:23:36.319 "superblock": true, 00:23:36.319 "num_base_bdevs": 4, 00:23:36.319 "num_base_bdevs_discovered": 3, 00:23:36.319 "num_base_bdevs_operational": 3, 00:23:36.319 "base_bdevs_list": [ 00:23:36.319 { 00:23:36.319 "name": null, 00:23:36.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.319 "is_configured": false, 00:23:36.319 "data_offset": 2048, 00:23:36.319 "data_size": 63488 00:23:36.319 }, 00:23:36.319 { 00:23:36.319 "name": "pt2", 00:23:36.319 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:36.319 "is_configured": true, 00:23:36.319 "data_offset": 2048, 00:23:36.319 "data_size": 63488 00:23:36.319 }, 00:23:36.319 { 00:23:36.319 "name": "pt3", 00:23:36.319 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:36.319 "is_configured": true, 00:23:36.319 "data_offset": 2048, 00:23:36.319 "data_size": 63488 00:23:36.319 }, 00:23:36.319 { 00:23:36.319 "name": "pt4", 00:23:36.319 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:36.319 "is_configured": true, 00:23:36.319 "data_offset": 2048, 00:23:36.319 "data_size": 63488 00:23:36.319 } 00:23:36.319 ] 00:23:36.319 }' 00:23:36.319 02:47:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.319 02:47:01 -- common/autotest_common.sh@10 -- # set +x 00:23:36.887 02:47:01 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:37.145 [2024-07-11 02:47:02.003508] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:37.145 [2024-07-11 02:47:02.003697] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:37.145 [2024-07-11 02:47:02.003879] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.145 [2024-07-11 02:47:02.004084] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.145 [2024-07-11 02:47:02.004188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.145 02:47:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:37.404 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:37.404 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.404 02:47:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:37.662 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:37.662 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.662 02:47:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:37.921 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:37.921 02:47:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.921 02:47:02 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:37.921 02:47:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:37.921 02:47:02 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:37.921 [2024-07-11 02:47:03.011634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:37.921 [2024-07-11 02:47:03.011899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.921 [2024-07-11 02:47:03.011971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:37.921 [2024-07-11 02:47:03.012218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.180 [2024-07-11 02:47:03.014369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.180 [2024-07-11 02:47:03.014553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:38.180 [2024-07-11 02:47:03.014778] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:38.180 [2024-07-11 02:47:03.014927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:38.180 pt2 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.180 02:47:03 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.180 "name": "raid_bdev1", 00:23:38.180 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:38.180 "strip_size_kb": 64, 00:23:38.180 "state": "configuring", 00:23:38.180 "raid_level": "raid5f", 00:23:38.180 "superblock": true, 00:23:38.180 "num_base_bdevs": 4, 00:23:38.180 "num_base_bdevs_discovered": 1, 00:23:38.180 "num_base_bdevs_operational": 3, 00:23:38.180 "base_bdevs_list": [ 00:23:38.180 { 00:23:38.180 "name": null, 00:23:38.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.180 "is_configured": false, 00:23:38.180 "data_offset": 2048, 00:23:38.180 "data_size": 63488 00:23:38.180 }, 00:23:38.180 { 00:23:38.180 "name": "pt2", 00:23:38.180 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:38.180 "is_configured": true, 00:23:38.180 "data_offset": 2048, 00:23:38.180 "data_size": 63488 00:23:38.180 }, 00:23:38.180 { 00:23:38.180 "name": null, 00:23:38.180 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:38.180 "is_configured": false, 00:23:38.180 "data_offset": 2048, 00:23:38.180 "data_size": 63488 00:23:38.180 }, 00:23:38.180 { 00:23:38.180 "name": null, 00:23:38.180 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:38.180 "is_configured": false, 00:23:38.180 "data_offset": 2048, 00:23:38.180 "data_size": 63488 00:23:38.180 } 00:23:38.180 ] 00:23:38.180 }' 00:23:38.180 02:47:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.180 02:47:03 -- common/autotest_common.sh@10 -- # set +x 00:23:38.745 02:47:03 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:38.745 02:47:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:38.745 02:47:03 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:39.002 [2024-07-11 02:47:04.075894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:39.002 [2024-07-11 02:47:04.076122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.002 [2024-07-11 02:47:04.076288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:39.002 [2024-07-11 02:47:04.076401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.002 [2024-07-11 02:47:04.076946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.002 [2024-07-11 02:47:04.077131] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:39.002 [2024-07-11 02:47:04.077318] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:39.002 [2024-07-11 02:47:04.077435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:39.002 pt3 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.261 "name": "raid_bdev1", 00:23:39.261 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:39.261 "strip_size_kb": 64, 00:23:39.261 "state": "configuring", 00:23:39.261 "raid_level": "raid5f", 00:23:39.261 "superblock": true, 00:23:39.261 "num_base_bdevs": 4, 00:23:39.261 "num_base_bdevs_discovered": 2, 00:23:39.261 "num_base_bdevs_operational": 3, 00:23:39.261 "base_bdevs_list": [ 00:23:39.261 { 00:23:39.261 "name": null, 00:23:39.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.261 "is_configured": false, 00:23:39.261 "data_offset": 2048, 00:23:39.261 "data_size": 63488 00:23:39.261 }, 00:23:39.261 { 00:23:39.261 "name": "pt2", 00:23:39.261 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:39.261 "is_configured": true, 00:23:39.261 "data_offset": 2048, 00:23:39.261 "data_size": 63488 00:23:39.261 }, 00:23:39.261 { 00:23:39.261 "name": "pt3", 00:23:39.261 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:39.261 "is_configured": true, 00:23:39.261 "data_offset": 2048, 00:23:39.261 "data_size": 63488 00:23:39.261 }, 00:23:39.261 { 00:23:39.261 "name": null, 00:23:39.261 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:39.261 "is_configured": false, 00:23:39.261 "data_offset": 2048, 00:23:39.261 "data_size": 63488 00:23:39.261 } 00:23:39.261 ] 00:23:39.261 }' 00:23:39.261 02:47:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.261 02:47:04 -- common/autotest_common.sh@10 -- # set +x 00:23:40.193 02:47:04 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:40.193 02:47:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:40.193 02:47:04 -- bdev/bdev_raid.sh@462 -- # i=3 00:23:40.193 02:47:04 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:40.193 [2024-07-11 02:47:05.204103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:40.193 [2024-07-11 02:47:05.204313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.193 [2024-07-11 02:47:05.204385] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:40.193 [2024-07-11 02:47:05.204623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.193 [2024-07-11 02:47:05.205095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.193 [2024-07-11 02:47:05.205255] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:40.193 [2024-07-11 02:47:05.205426] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:40.193 [2024-07-11 02:47:05.205540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:40.193 [2024-07-11 02:47:05.205753] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:40.193 [2024-07-11 02:47:05.205856] 
bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:40.193 [2024-07-11 02:47:05.206057] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:23:40.193 [2024-07-11 02:47:05.206922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:40.193 [2024-07-11 02:47:05.207040] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:40.193 [2024-07-11 02:47:05.207356] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.193 pt4 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.193 02:47:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.450 02:47:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.450 "name": "raid_bdev1", 00:23:40.450 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:40.450 "strip_size_kb": 64, 00:23:40.450 "state": "online", 00:23:40.450 "raid_level": "raid5f", 00:23:40.450 "superblock": true, 00:23:40.450 "num_base_bdevs": 4, 00:23:40.450 "num_base_bdevs_discovered": 3, 00:23:40.450 "num_base_bdevs_operational": 3, 00:23:40.450 "base_bdevs_list": [ 00:23:40.450 { 00:23:40.450 "name": null, 00:23:40.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.450 "is_configured": false, 00:23:40.450 "data_offset": 2048, 00:23:40.450 "data_size": 63488 00:23:40.450 }, 00:23:40.450 { 00:23:40.450 "name": "pt2", 00:23:40.450 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:40.450 "is_configured": true, 00:23:40.450 "data_offset": 2048, 00:23:40.450 "data_size": 63488 00:23:40.450 }, 00:23:40.450 { 00:23:40.450 "name": "pt3", 00:23:40.450 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:40.450 "is_configured": true, 00:23:40.450 "data_offset": 2048, 00:23:40.450 "data_size": 63488 00:23:40.450 }, 00:23:40.450 { 00:23:40.450 "name": "pt4", 00:23:40.450 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:40.450 "is_configured": true, 00:23:40.450 "data_offset": 2048, 00:23:40.450 "data_size": 63488 00:23:40.450 } 00:23:40.450 ] 00:23:40.450 }' 00:23:40.450 02:47:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.450 02:47:05 -- common/autotest_common.sh@10 -- # set +x 00:23:41.015 02:47:06 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:23:41.015 02:47:06 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:41.272 [2024-07-11 02:47:06.337435] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:41.272 [2024-07-11 02:47:06.338130] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:41.272 [2024-07-11 02:47:06.338305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:41.272 [2024-07-11 02:47:06.338535] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:41.272 [2024-07-11 02:47:06.338644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:41.272 02:47:06 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.272 02:47:06 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:41.529 02:47:06 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:41.529 02:47:06 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:41.529 02:47:06 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:41.787 [2024-07-11 02:47:06.741498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:41.787 [2024-07-11 02:47:06.741585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.787 [2024-07-11 02:47:06.741626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:41.787 [2024-07-11 02:47:06.741659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.787 [2024-07-11 02:47:06.743828] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.787 [2024-07-11 02:47:06.743914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:41.787 [2024-07-11 02:47:06.743988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:41.787 [2024-07-11 02:47:06.744038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:41.787 pt1 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.787 02:47:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.044 02:47:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.044 "name": "raid_bdev1", 00:23:42.044 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:42.044 "strip_size_kb": 64, 00:23:42.044 "state": "configuring", 00:23:42.044 "raid_level": "raid5f", 00:23:42.044 "superblock": true, 00:23:42.044 "num_base_bdevs": 4, 00:23:42.044 "num_base_bdevs_discovered": 1, 00:23:42.044 "num_base_bdevs_operational": 4, 00:23:42.044 
"base_bdevs_list": [ 00:23:42.044 { 00:23:42.044 "name": "pt1", 00:23:42.044 "uuid": "a1ffa8b5-3cb0-582e-9f7d-0e8be347c5b8", 00:23:42.044 "is_configured": true, 00:23:42.044 "data_offset": 2048, 00:23:42.044 "data_size": 63488 00:23:42.044 }, 00:23:42.044 { 00:23:42.044 "name": null, 00:23:42.044 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:42.044 "is_configured": false, 00:23:42.044 "data_offset": 2048, 00:23:42.044 "data_size": 63488 00:23:42.044 }, 00:23:42.044 { 00:23:42.045 "name": null, 00:23:42.045 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:42.045 "is_configured": false, 00:23:42.045 "data_offset": 2048, 00:23:42.045 "data_size": 63488 00:23:42.045 }, 00:23:42.045 { 00:23:42.045 "name": null, 00:23:42.045 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:42.045 "is_configured": false, 00:23:42.045 "data_offset": 2048, 00:23:42.045 "data_size": 63488 00:23:42.045 } 00:23:42.045 ] 00:23:42.045 }' 00:23:42.045 02:47:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.045 02:47:06 -- common/autotest_common.sh@10 -- # set +x 00:23:42.610 02:47:07 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:42.610 02:47:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:42.610 02:47:07 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:42.868 02:47:07 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:42.868 02:47:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:42.868 02:47:07 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:43.126 02:47:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:43.126 02:47:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:43.126 02:47:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:43.384 02:47:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:43.384 02:47:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:43.384 02:47:08 -- bdev/bdev_raid.sh@489 -- # i=3 00:23:43.384 02:47:08 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:43.641 [2024-07-11 02:47:08.506180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:43.641 [2024-07-11 02:47:08.506305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.641 [2024-07-11 02:47:08.506338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:43.641 [2024-07-11 02:47:08.506364] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.641 [2024-07-11 02:47:08.507061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.641 [2024-07-11 02:47:08.507126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:43.641 [2024-07-11 02:47:08.507197] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:43.641 [2024-07-11 02:47:08.507211] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:43.641 [2024-07-11 02:47:08.507217] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:43.641 [2024-07-11 02:47:08.507251] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:23:43.642 [2024-07-11 02:47:08.507569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:43.642 pt4 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.642 "name": "raid_bdev1", 00:23:43.642 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:43.642 "strip_size_kb": 64, 00:23:43.642 "state": "configuring", 00:23:43.642 "raid_level": "raid5f", 00:23:43.642 "superblock": true, 00:23:43.642 "num_base_bdevs": 4, 00:23:43.642 "num_base_bdevs_discovered": 1, 00:23:43.642 "num_base_bdevs_operational": 3, 00:23:43.642 "base_bdevs_list": [ 00:23:43.642 { 00:23:43.642 "name": null, 00:23:43.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.642 "is_configured": false, 00:23:43.642 "data_offset": 2048, 00:23:43.642 "data_size": 63488 00:23:43.642 }, 00:23:43.642 { 00:23:43.642 "name": null, 00:23:43.642 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:43.642 "is_configured": false, 00:23:43.642 "data_offset": 2048, 00:23:43.642 "data_size": 63488 00:23:43.642 }, 00:23:43.642 { 00:23:43.642 "name": null, 00:23:43.642 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:43.642 "is_configured": false, 00:23:43.642 "data_offset": 2048, 00:23:43.642 "data_size": 63488 00:23:43.642 }, 00:23:43.642 { 00:23:43.642 "name": "pt4", 00:23:43.642 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:43.642 "is_configured": true, 00:23:43.642 "data_offset": 2048, 00:23:43.642 "data_size": 63488 00:23:43.642 } 00:23:43.642 ] 00:23:43.642 }' 00:23:43.642 02:47:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.642 02:47:08 -- common/autotest_common.sh@10 -- # set +x 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:44.576 [2024-07-11 02:47:09.625830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:44.576 [2024-07-11 02:47:09.625960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.576 [2024-07-11 02:47:09.625998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:44.576 [2024-07-11 
02:47:09.626026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.576 [2024-07-11 02:47:09.626768] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.576 [2024-07-11 02:47:09.626843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:44.576 [2024-07-11 02:47:09.626925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:44.576 [2024-07-11 02:47:09.626951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:44.576 pt2 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:44.576 02:47:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:44.834 [2024-07-11 02:47:09.870782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:44.834 [2024-07-11 02:47:09.871188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.834 [2024-07-11 02:47:09.871496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:44.834 [2024-07-11 02:47:09.871826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.834 [2024-07-11 02:47:09.872610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.834 [2024-07-11 02:47:09.872807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:44.834 [2024-07-11 02:47:09.873035] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:44.834 [2024-07-11 02:47:09.873089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:44.834 [2024-07-11 02:47:09.873257] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:23:44.834 [2024-07-11 02:47:09.873286] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:44.834 [2024-07-11 02:47:09.873391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002fc0 00:23:44.834 [2024-07-11 02:47:09.874408] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:23:44.834 [2024-07-11 02:47:09.874443] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:23:44.834 [2024-07-11 02:47:09.874728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.834 pt3 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@124 -- 
# local num_base_bdevs_discovered 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.834 02:47:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.093 02:47:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.093 "name": "raid_bdev1", 00:23:45.093 "uuid": "3a69b0f5-e834-4882-aef8-76138114fb25", 00:23:45.093 "strip_size_kb": 64, 00:23:45.093 "state": "online", 00:23:45.093 "raid_level": "raid5f", 00:23:45.093 "superblock": true, 00:23:45.093 "num_base_bdevs": 4, 00:23:45.093 "num_base_bdevs_discovered": 3, 00:23:45.093 "num_base_bdevs_operational": 3, 00:23:45.093 "base_bdevs_list": [ 00:23:45.093 { 00:23:45.093 "name": null, 00:23:45.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.093 "is_configured": false, 00:23:45.093 "data_offset": 2048, 00:23:45.093 "data_size": 63488 00:23:45.093 }, 00:23:45.093 { 00:23:45.093 "name": "pt2", 00:23:45.093 "uuid": "3643691f-3844-57a8-bfff-4908da753ff7", 00:23:45.093 "is_configured": true, 00:23:45.093 "data_offset": 2048, 00:23:45.093 "data_size": 63488 00:23:45.093 }, 00:23:45.093 { 00:23:45.093 "name": "pt3", 00:23:45.093 "uuid": "ea43d666-6f7a-529b-b939-f113fccecb75", 00:23:45.093 "is_configured": true, 00:23:45.093 "data_offset": 2048, 00:23:45.093 "data_size": 63488 00:23:45.093 }, 00:23:45.093 { 00:23:45.093 "name": "pt4", 00:23:45.093 "uuid": "411e342f-4e6b-5b49-b78e-788c309ada20", 00:23:45.093 "is_configured": true, 00:23:45.093 "data_offset": 2048, 00:23:45.093 "data_size": 63488 00:23:45.093 } 00:23:45.093 ] 00:23:45.093 }' 00:23:45.093 02:47:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.093 02:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:45.660 02:47:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:45.660 02:47:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:45.972 [2024-07-11 02:47:10.965940] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.972 02:47:10 -- bdev/bdev_raid.sh@506 -- # '[' 3a69b0f5-e834-4882-aef8-76138114fb25 '!=' 3a69b0f5-e834-4882-aef8-76138114fb25 ']' 00:23:45.972 02:47:10 -- bdev/bdev_raid.sh@511 -- # killprocess 143413 00:23:45.972 02:47:10 -- common/autotest_common.sh@926 -- # '[' -z 143413 ']' 00:23:45.972 02:47:10 -- common/autotest_common.sh@930 -- # kill -0 143413 00:23:45.972 02:47:10 -- common/autotest_common.sh@931 -- # uname 00:23:45.972 02:47:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:45.972 02:47:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143413 00:23:45.972 killing process with pid 143413 00:23:45.972 02:47:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:45.972 02:47:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:45.972 02:47:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143413' 00:23:45.972 02:47:11 -- common/autotest_common.sh@945 -- # kill 143413 00:23:45.972 02:47:11 -- common/autotest_common.sh@950 -- # wait 143413 00:23:45.972 [2024-07-11 02:47:11.002693] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:45.972 [2024-07-11 02:47:11.002842] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:45.972 [2024-07-11 02:47:11.002997] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:45.972 [2024-07-11 02:47:11.003029] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:23:45.972 [2024-07-11 02:47:11.047406] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.260 ************************************ 00:23:46.260 END TEST raid5f_superblock_test 00:23:46.260 ************************************ 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:46.260 00:23:46.260 real 0m20.015s 00:23:46.260 user 0m37.874s 00:23:46.260 sys 0m2.337s 00:23:46.260 02:47:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.260 02:47:11 -- common/autotest_common.sh@10 -- # set +x 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:23:46.260 02:47:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:46.260 02:47:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:46.260 02:47:11 -- common/autotest_common.sh@10 -- # set +x 00:23:46.260 ************************************ 00:23:46.260 START TEST raid5f_rebuild_test 00:23:46.260 ************************************ 00:23:46.260 02:47:11 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:46.260 02:47:11 -- 
bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=144098 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 144098 /var/tmp/spdk-raid.sock 00:23:46.260 02:47:11 -- common/autotest_common.sh@819 -- # '[' -z 144098 ']' 00:23:46.260 02:47:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:46.260 02:47:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:46.260 02:47:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:46.260 02:47:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:46.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:46.260 02:47:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:46.260 02:47:11 -- common/autotest_common.sh@10 -- # set +x 00:23:46.519 [2024-07-11 02:47:11.397182] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:46.519 [2024-07-11 02:47:11.397547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144098 ] 00:23:46.519 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:46.519 Zero copy mechanism will not be used. 00:23:46.519 [2024-07-11 02:47:11.537247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.519 [2024-07-11 02:47:11.598219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.776 [2024-07-11 02:47:11.651763] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.343 02:47:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:47.343 02:47:12 -- common/autotest_common.sh@852 -- # return 0 00:23:47.343 02:47:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:47.343 02:47:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:47.343 02:47:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:47.602 BaseBdev1 00:23:47.602 02:47:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:47.602 02:47:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:47.602 02:47:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:47.860 BaseBdev2 00:23:47.860 02:47:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:47.860 02:47:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:47.860 02:47:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:48.119 BaseBdev3 00:23:48.119 02:47:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:48.119 02:47:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:48.119 02:47:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:48.119 BaseBdev4 00:23:48.119 02:47:13 -- 
bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:48.377 spare_malloc 00:23:48.377 02:47:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:48.646 spare_delay 00:23:48.646 02:47:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:48.905 [2024-07-11 02:47:13.757254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:48.905 [2024-07-11 02:47:13.757377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.905 [2024-07-11 02:47:13.757416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:48.905 [2024-07-11 02:47:13.757459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.905 [2024-07-11 02:47:13.760016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.905 [2024-07-11 02:47:13.760105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:48.905 spare 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:48.905 [2024-07-11 02:47:13.953338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.905 [2024-07-11 02:47:13.955246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:48.905 [2024-07-11 02:47:13.955329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:48.905 [2024-07-11 02:47:13.955401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:48.905 [2024-07-11 02:47:13.955533] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:23:48.905 [2024-07-11 02:47:13.955549] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:48.905 [2024-07-11 02:47:13.955767] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:23:48.905 [2024-07-11 02:47:13.956647] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:23:48.905 [2024-07-11 02:47:13.956673] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:23:48.905 [2024-07-11 02:47:13.956905] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.905 
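(annotation) The "spare" target that later rejoins the array is not a bare malloc bdev: the run above stacks a delay bdev and a passthru bdev on top of it, so rebuild traffic can be slowed deterministically while the top-level name stays a stable "spare". A minimal sketch of that same RPC sequence, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock; the $RPC helper is illustrative, all names, sizes, and flags are taken from the traces above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MB backing store with 512-byte blocks -> 65536 blocks, matching data_size above
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    # delay bdev: no added read latency, 100000 (us) average/p99 write latency,
    # which keeps the later rebuild slow enough to observe in flight
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    # passthru bdev gives the rebuild target its final, stable name
    $RPC bdev_passthru_create -b spare_delay -p spare
    # assemble the raid5f array (strip size 64) from the four base bdevs
    $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
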
02:47:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.905 02:47:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.164 02:47:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.164 "name": "raid_bdev1", 00:23:49.164 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:49.164 "strip_size_kb": 64, 00:23:49.164 "state": "online", 00:23:49.164 "raid_level": "raid5f", 00:23:49.164 "superblock": false, 00:23:49.164 "num_base_bdevs": 4, 00:23:49.164 "num_base_bdevs_discovered": 4, 00:23:49.164 "num_base_bdevs_operational": 4, 00:23:49.164 "base_bdevs_list": [ 00:23:49.164 { 00:23:49.164 "name": "BaseBdev1", 00:23:49.164 "uuid": "07cfd0e0-3e8b-4e2d-bd7f-47e4f67f52c5", 00:23:49.164 "is_configured": true, 00:23:49.164 "data_offset": 0, 00:23:49.164 "data_size": 65536 00:23:49.164 }, 00:23:49.164 { 00:23:49.164 "name": "BaseBdev2", 00:23:49.164 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:49.164 "is_configured": true, 00:23:49.164 "data_offset": 0, 00:23:49.164 "data_size": 65536 00:23:49.164 }, 00:23:49.164 { 00:23:49.164 "name": "BaseBdev3", 00:23:49.164 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:49.164 "is_configured": true, 00:23:49.164 "data_offset": 0, 00:23:49.164 "data_size": 65536 00:23:49.164 }, 00:23:49.164 { 00:23:49.164 "name": "BaseBdev4", 00:23:49.164 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:49.164 "is_configured": true, 00:23:49.164 "data_offset": 0, 00:23:49.164 "data_size": 65536 00:23:49.164 } 00:23:49.164 ] 00:23:49.164 }' 00:23:49.164 02:47:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.164 02:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:50.099 02:47:14 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:50.099 02:47:14 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:50.099 [2024-07-11 02:47:15.015401] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.099 02:47:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:23:50.099 02:47:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.099 02:47:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:50.468 02:47:15 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:50.468 02:47:15 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:50.468 02:47:15 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:50.468 02:47:15 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@12 -- # local i 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.468 02:47:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:50.468 [2024-07-11 
02:47:15.519436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:23:50.726 /dev/nbd0 00:23:50.726 02:47:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:50.726 02:47:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:50.726 02:47:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:50.726 02:47:15 -- common/autotest_common.sh@857 -- # local i 00:23:50.726 02:47:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:50.726 02:47:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:50.726 02:47:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:50.726 02:47:15 -- common/autotest_common.sh@861 -- # break 00:23:50.726 02:47:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:50.726 02:47:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:50.726 02:47:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:50.726 1+0 records in 00:23:50.726 1+0 records out 00:23:50.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214319 s, 19.1 MB/s 00:23:50.726 02:47:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.726 02:47:15 -- common/autotest_common.sh@874 -- # size=4096 00:23:50.726 02:47:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.726 02:47:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:50.726 02:47:15 -- common/autotest_common.sh@877 -- # return 0 00:23:50.726 02:47:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:50.726 02:47:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.726 02:47:15 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:50.726 02:47:15 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:23:50.726 02:47:15 -- bdev/bdev_raid.sh@582 -- # echo 192 00:23:50.726 02:47:15 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:50.985 512+0 records in 00:23:50.985 512+0 records out 00:23:50.985 100663296 bytes (101 MB, 96 MiB) copied, 0.431645 s, 233 MB/s 00:23:50.985 02:47:16 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@51 -- # local i 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:50.985 02:47:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:51.244 02:47:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:51.244 [2024-07-11 02:47:16.275320] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.502 02:47:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:51.502 02:47:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:51.502 02:47:16 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:51.502 02:47:16 -- bdev/nbd_common.sh@41 -- # break 00:23:51.502 02:47:16 -- bdev/nbd_common.sh@45 -- # return 0 00:23:51.502 02:47:16 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:51.762 [2024-07-11 02:47:16.606913] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.762 02:47:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.020 02:47:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.020 "name": "raid_bdev1", 00:23:52.020 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:52.020 "strip_size_kb": 64, 00:23:52.020 "state": "online", 00:23:52.020 "raid_level": "raid5f", 00:23:52.020 "superblock": false, 00:23:52.020 "num_base_bdevs": 4, 00:23:52.020 "num_base_bdevs_discovered": 3, 00:23:52.020 "num_base_bdevs_operational": 3, 00:23:52.020 "base_bdevs_list": [ 00:23:52.020 { 00:23:52.020 "name": null, 00:23:52.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.020 "is_configured": false, 00:23:52.020 "data_offset": 0, 00:23:52.020 "data_size": 65536 00:23:52.020 }, 00:23:52.020 { 00:23:52.020 "name": "BaseBdev2", 00:23:52.020 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:52.020 "is_configured": true, 00:23:52.020 "data_offset": 0, 00:23:52.020 "data_size": 65536 00:23:52.020 }, 00:23:52.020 { 00:23:52.020 "name": "BaseBdev3", 00:23:52.020 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:52.020 "is_configured": true, 00:23:52.020 "data_offset": 0, 00:23:52.020 "data_size": 65536 00:23:52.020 }, 00:23:52.020 { 00:23:52.020 "name": "BaseBdev4", 00:23:52.020 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:52.020 "is_configured": true, 00:23:52.020 "data_offset": 0, 00:23:52.020 "data_size": 65536 00:23:52.020 } 00:23:52.020 ] 00:23:52.020 }' 00:23:52.020 02:47:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.020 02:47:16 -- common/autotest_common.sh@10 -- # set +x 00:23:52.607 02:47:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:52.866 [2024-07-11 02:47:17.735140] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:52.866 [2024-07-11 02:47:17.735215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:52.866 [2024-07-11 02:47:17.739579] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029bb0 00:23:52.866 
[2024-07-11 02:47:17.742005] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:52.866 02:47:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.799 02:47:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.059 "name": "raid_bdev1", 00:23:54.059 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:54.059 "strip_size_kb": 64, 00:23:54.059 "state": "online", 00:23:54.059 "raid_level": "raid5f", 00:23:54.059 "superblock": false, 00:23:54.059 "num_base_bdevs": 4, 00:23:54.059 "num_base_bdevs_discovered": 4, 00:23:54.059 "num_base_bdevs_operational": 4, 00:23:54.059 "process": { 00:23:54.059 "type": "rebuild", 00:23:54.059 "target": "spare", 00:23:54.059 "progress": { 00:23:54.059 "blocks": 23040, 00:23:54.059 "percent": 11 00:23:54.059 } 00:23:54.059 }, 00:23:54.059 "base_bdevs_list": [ 00:23:54.059 { 00:23:54.059 "name": "spare", 00:23:54.059 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:23:54.059 "is_configured": true, 00:23:54.059 "data_offset": 0, 00:23:54.059 "data_size": 65536 00:23:54.059 }, 00:23:54.059 { 00:23:54.059 "name": "BaseBdev2", 00:23:54.059 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:54.059 "is_configured": true, 00:23:54.059 "data_offset": 0, 00:23:54.059 "data_size": 65536 00:23:54.059 }, 00:23:54.059 { 00:23:54.059 "name": "BaseBdev3", 00:23:54.059 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:54.059 "is_configured": true, 00:23:54.059 "data_offset": 0, 00:23:54.059 "data_size": 65536 00:23:54.059 }, 00:23:54.059 { 00:23:54.059 "name": "BaseBdev4", 00:23:54.059 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:54.059 "is_configured": true, 00:23:54.059 "data_offset": 0, 00:23:54.059 "data_size": 65536 00:23:54.059 } 00:23:54.059 ] 00:23:54.059 }' 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.059 02:47:19 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:54.318 [2024-07-11 02:47:19.304664] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:54.318 [2024-07-11 02:47:19.352704] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:54.318 [2024-07-11 02:47:19.353337] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.318 02:47:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.577 02:47:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.577 "name": "raid_bdev1", 00:23:54.577 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:54.577 "strip_size_kb": 64, 00:23:54.577 "state": "online", 00:23:54.577 "raid_level": "raid5f", 00:23:54.577 "superblock": false, 00:23:54.577 "num_base_bdevs": 4, 00:23:54.577 "num_base_bdevs_discovered": 3, 00:23:54.577 "num_base_bdevs_operational": 3, 00:23:54.577 "base_bdevs_list": [ 00:23:54.577 { 00:23:54.577 "name": null, 00:23:54.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.577 "is_configured": false, 00:23:54.577 "data_offset": 0, 00:23:54.577 "data_size": 65536 00:23:54.577 }, 00:23:54.577 { 00:23:54.577 "name": "BaseBdev2", 00:23:54.577 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:54.577 "is_configured": true, 00:23:54.577 "data_offset": 0, 00:23:54.577 "data_size": 65536 00:23:54.577 }, 00:23:54.577 { 00:23:54.577 "name": "BaseBdev3", 00:23:54.577 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:54.577 "is_configured": true, 00:23:54.577 "data_offset": 0, 00:23:54.577 "data_size": 65536 00:23:54.577 }, 00:23:54.577 { 00:23:54.577 "name": "BaseBdev4", 00:23:54.577 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:54.577 "is_configured": true, 00:23:54.577 "data_offset": 0, 00:23:54.577 "data_size": 65536 00:23:54.577 } 00:23:54.577 ] 00:23:54.577 }' 00:23:54.577 02:47:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.577 02:47:19 -- common/autotest_common.sh@10 -- # set +x 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.513 "name": "raid_bdev1", 00:23:55.513 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:55.513 "strip_size_kb": 64, 00:23:55.513 "state": "online", 00:23:55.513 "raid_level": "raid5f", 00:23:55.513 "superblock": false, 00:23:55.513 "num_base_bdevs": 4, 00:23:55.513 "num_base_bdevs_discovered": 3, 00:23:55.513 "num_base_bdevs_operational": 3, 00:23:55.513 "base_bdevs_list": [ 00:23:55.513 { 00:23:55.513 "name": null, 00:23:55.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.513 "is_configured": 
false, 00:23:55.513 "data_offset": 0, 00:23:55.513 "data_size": 65536 00:23:55.513 }, 00:23:55.513 { 00:23:55.513 "name": "BaseBdev2", 00:23:55.513 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:55.513 "is_configured": true, 00:23:55.513 "data_offset": 0, 00:23:55.513 "data_size": 65536 00:23:55.513 }, 00:23:55.513 { 00:23:55.513 "name": "BaseBdev3", 00:23:55.513 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:55.513 "is_configured": true, 00:23:55.513 "data_offset": 0, 00:23:55.513 "data_size": 65536 00:23:55.513 }, 00:23:55.513 { 00:23:55.513 "name": "BaseBdev4", 00:23:55.513 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:55.513 "is_configured": true, 00:23:55.513 "data_offset": 0, 00:23:55.513 "data_size": 65536 00:23:55.513 } 00:23:55.513 ] 00:23:55.513 }' 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.513 02:47:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:55.514 02:47:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:55.772 [2024-07-11 02:47:20.802431] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:55.772 [2024-07-11 02:47:20.802529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:55.772 [2024-07-11 02:47:20.806665] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029d50 00:23:55.772 [2024-07-11 02:47:20.808968] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:55.772 02:47:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.148 02:47:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.148 "name": "raid_bdev1", 00:23:57.148 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:57.148 "strip_size_kb": 64, 00:23:57.148 "state": "online", 00:23:57.148 "raid_level": "raid5f", 00:23:57.148 "superblock": false, 00:23:57.148 "num_base_bdevs": 4, 00:23:57.148 "num_base_bdevs_discovered": 4, 00:23:57.148 "num_base_bdevs_operational": 4, 00:23:57.148 "process": { 00:23:57.148 "type": "rebuild", 00:23:57.148 "target": "spare", 00:23:57.148 "progress": { 00:23:57.148 "blocks": 23040, 00:23:57.148 "percent": 11 00:23:57.148 } 00:23:57.148 }, 00:23:57.148 "base_bdevs_list": [ 00:23:57.148 { 00:23:57.148 "name": "spare", 00:23:57.148 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:23:57.148 "is_configured": true, 00:23:57.148 "data_offset": 0, 00:23:57.148 "data_size": 65536 00:23:57.148 }, 00:23:57.148 { 00:23:57.148 "name": "BaseBdev2", 00:23:57.148 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:57.148 "is_configured": true, 00:23:57.148 
"data_offset": 0, 00:23:57.148 "data_size": 65536 00:23:57.148 }, 00:23:57.148 { 00:23:57.148 "name": "BaseBdev3", 00:23:57.148 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:57.148 "is_configured": true, 00:23:57.148 "data_offset": 0, 00:23:57.148 "data_size": 65536 00:23:57.148 }, 00:23:57.148 { 00:23:57.148 "name": "BaseBdev4", 00:23:57.148 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:57.148 "is_configured": true, 00:23:57.148 "data_offset": 0, 00:23:57.148 "data_size": 65536 00:23:57.148 } 00:23:57.148 ] 00:23:57.148 }' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@657 -- # local timeout=656 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.148 02:47:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.405 "name": "raid_bdev1", 00:23:57.405 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:57.405 "strip_size_kb": 64, 00:23:57.405 "state": "online", 00:23:57.405 "raid_level": "raid5f", 00:23:57.405 "superblock": false, 00:23:57.405 "num_base_bdevs": 4, 00:23:57.405 "num_base_bdevs_discovered": 4, 00:23:57.405 "num_base_bdevs_operational": 4, 00:23:57.405 "process": { 00:23:57.405 "type": "rebuild", 00:23:57.405 "target": "spare", 00:23:57.405 "progress": { 00:23:57.405 "blocks": 28800, 00:23:57.405 "percent": 14 00:23:57.405 } 00:23:57.405 }, 00:23:57.405 "base_bdevs_list": [ 00:23:57.405 { 00:23:57.405 "name": "spare", 00:23:57.405 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:23:57.405 "is_configured": true, 00:23:57.405 "data_offset": 0, 00:23:57.405 "data_size": 65536 00:23:57.405 }, 00:23:57.405 { 00:23:57.405 "name": "BaseBdev2", 00:23:57.405 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:57.405 "is_configured": true, 00:23:57.405 "data_offset": 0, 00:23:57.405 "data_size": 65536 00:23:57.405 }, 00:23:57.405 { 00:23:57.405 "name": "BaseBdev3", 00:23:57.405 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:57.405 "is_configured": true, 00:23:57.405 "data_offset": 0, 00:23:57.405 "data_size": 65536 00:23:57.405 }, 00:23:57.405 { 00:23:57.405 "name": "BaseBdev4", 00:23:57.405 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:57.405 "is_configured": true, 00:23:57.405 "data_offset": 0, 00:23:57.405 "data_size": 65536 00:23:57.405 } 00:23:57.405 ] 00:23:57.405 }' 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.type // "none"' 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.405 02:47:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.779 "name": "raid_bdev1", 00:23:58.779 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:23:58.779 "strip_size_kb": 64, 00:23:58.779 "state": "online", 00:23:58.779 "raid_level": "raid5f", 00:23:58.779 "superblock": false, 00:23:58.779 "num_base_bdevs": 4, 00:23:58.779 "num_base_bdevs_discovered": 4, 00:23:58.779 "num_base_bdevs_operational": 4, 00:23:58.779 "process": { 00:23:58.779 "type": "rebuild", 00:23:58.779 "target": "spare", 00:23:58.779 "progress": { 00:23:58.779 "blocks": 55680, 00:23:58.779 "percent": 28 00:23:58.779 } 00:23:58.779 }, 00:23:58.779 "base_bdevs_list": [ 00:23:58.779 { 00:23:58.779 "name": "spare", 00:23:58.779 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:23:58.779 "is_configured": true, 00:23:58.779 "data_offset": 0, 00:23:58.779 "data_size": 65536 00:23:58.779 }, 00:23:58.779 { 00:23:58.779 "name": "BaseBdev2", 00:23:58.779 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:23:58.779 "is_configured": true, 00:23:58.779 "data_offset": 0, 00:23:58.779 "data_size": 65536 00:23:58.779 }, 00:23:58.779 { 00:23:58.779 "name": "BaseBdev3", 00:23:58.779 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:23:58.779 "is_configured": true, 00:23:58.779 "data_offset": 0, 00:23:58.779 "data_size": 65536 00:23:58.779 }, 00:23:58.779 { 00:23:58.779 "name": "BaseBdev4", 00:23:58.779 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:23:58.779 "is_configured": true, 00:23:58.779 "data_offset": 0, 00:23:58.779 "data_size": 65536 00:23:58.779 } 00:23:58.779 ] 00:23:58.779 }' 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.779 02:47:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.153 
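(annotation) The progress snapshots in this stretch come from a 1 Hz polling loop: @657 computes a deadline (timeout=656 in this run), and @658-@662 re-query bdev_raid_get_bdevs, assert that a "rebuild" process targeting "spare" is still reported, and sleep. A condensed approximation of that loop, reusing the $RPC helper from the earlier sketch (not the literal script, which routes the checks through verify_raid_bdev_process):

    timeout=656                      # deadline in seconds, as set at @657
    while (( SECONDS < timeout )); do
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # once the rebuild completes, .process disappears and type reads "none"
        [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare   ]] || break
        jq -r '.process.progress.blocks' <<< "$info"    # e.g. 23040, 28800, 55680, ...
        sleep 1
    done
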
02:47:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.153 02:47:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.153 02:47:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.153 "name": "raid_bdev1", 00:24:00.153 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:00.153 "strip_size_kb": 64, 00:24:00.153 "state": "online", 00:24:00.153 "raid_level": "raid5f", 00:24:00.153 "superblock": false, 00:24:00.153 "num_base_bdevs": 4, 00:24:00.153 "num_base_bdevs_discovered": 4, 00:24:00.153 "num_base_bdevs_operational": 4, 00:24:00.153 "process": { 00:24:00.153 "type": "rebuild", 00:24:00.153 "target": "spare", 00:24:00.153 "progress": { 00:24:00.153 "blocks": 80640, 00:24:00.153 "percent": 41 00:24:00.153 } 00:24:00.153 }, 00:24:00.153 "base_bdevs_list": [ 00:24:00.153 { 00:24:00.153 "name": "spare", 00:24:00.153 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:00.153 "is_configured": true, 00:24:00.153 "data_offset": 0, 00:24:00.153 "data_size": 65536 00:24:00.153 }, 00:24:00.153 { 00:24:00.153 "name": "BaseBdev2", 00:24:00.153 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:00.153 "is_configured": true, 00:24:00.153 "data_offset": 0, 00:24:00.153 "data_size": 65536 00:24:00.153 }, 00:24:00.153 { 00:24:00.153 "name": "BaseBdev3", 00:24:00.153 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:00.154 "is_configured": true, 00:24:00.154 "data_offset": 0, 00:24:00.154 "data_size": 65536 00:24:00.154 }, 00:24:00.154 { 00:24:00.154 "name": "BaseBdev4", 00:24:00.154 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:00.154 "is_configured": true, 00:24:00.154 "data_offset": 0, 00:24:00.154 "data_size": 65536 00:24:00.154 } 00:24:00.154 ] 00:24:00.154 }' 00:24:00.154 02:47:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.154 02:47:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.154 02:47:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.154 02:47:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.154 02:47:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.530 "name": "raid_bdev1", 00:24:01.530 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:01.530 "strip_size_kb": 64, 00:24:01.530 "state": "online", 00:24:01.530 "raid_level": "raid5f", 00:24:01.530 "superblock": false, 00:24:01.530 "num_base_bdevs": 4, 00:24:01.530 "num_base_bdevs_discovered": 4, 00:24:01.530 "num_base_bdevs_operational": 4, 00:24:01.530 "process": { 00:24:01.530 "type": "rebuild", 00:24:01.530 "target": "spare", 00:24:01.530 "progress": { 00:24:01.530 "blocks": 107520, 00:24:01.530 "percent": 54 
00:24:01.530 } 00:24:01.530 }, 00:24:01.530 "base_bdevs_list": [ 00:24:01.530 { 00:24:01.530 "name": "spare", 00:24:01.530 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:01.530 "is_configured": true, 00:24:01.530 "data_offset": 0, 00:24:01.530 "data_size": 65536 00:24:01.530 }, 00:24:01.530 { 00:24:01.530 "name": "BaseBdev2", 00:24:01.530 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:01.530 "is_configured": true, 00:24:01.530 "data_offset": 0, 00:24:01.530 "data_size": 65536 00:24:01.530 }, 00:24:01.530 { 00:24:01.530 "name": "BaseBdev3", 00:24:01.530 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:01.530 "is_configured": true, 00:24:01.530 "data_offset": 0, 00:24:01.530 "data_size": 65536 00:24:01.530 }, 00:24:01.530 { 00:24:01.530 "name": "BaseBdev4", 00:24:01.530 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:01.530 "is_configured": true, 00:24:01.530 "data_offset": 0, 00:24:01.530 "data_size": 65536 00:24:01.530 } 00:24:01.530 ] 00:24:01.530 }' 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.530 02:47:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.906 "name": "raid_bdev1", 00:24:02.906 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:02.906 "strip_size_kb": 64, 00:24:02.906 "state": "online", 00:24:02.906 "raid_level": "raid5f", 00:24:02.906 "superblock": false, 00:24:02.906 "num_base_bdevs": 4, 00:24:02.906 "num_base_bdevs_discovered": 4, 00:24:02.906 "num_base_bdevs_operational": 4, 00:24:02.906 "process": { 00:24:02.906 "type": "rebuild", 00:24:02.906 "target": "spare", 00:24:02.906 "progress": { 00:24:02.906 "blocks": 134400, 00:24:02.906 "percent": 68 00:24:02.906 } 00:24:02.906 }, 00:24:02.906 "base_bdevs_list": [ 00:24:02.906 { 00:24:02.906 "name": "spare", 00:24:02.906 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:02.906 "is_configured": true, 00:24:02.906 "data_offset": 0, 00:24:02.906 "data_size": 65536 00:24:02.906 }, 00:24:02.906 { 00:24:02.906 "name": "BaseBdev2", 00:24:02.906 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:02.906 "is_configured": true, 00:24:02.906 "data_offset": 0, 00:24:02.906 "data_size": 65536 00:24:02.906 }, 00:24:02.906 { 00:24:02.906 "name": "BaseBdev3", 00:24:02.906 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:02.906 "is_configured": true, 00:24:02.906 "data_offset": 0, 00:24:02.906 "data_size": 65536 00:24:02.906 }, 00:24:02.906 { 00:24:02.906 "name": "BaseBdev4", 00:24:02.906 "uuid": 
"8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:02.906 "is_configured": true, 00:24:02.906 "data_offset": 0, 00:24:02.906 "data_size": 65536 00:24:02.906 } 00:24:02.906 ] 00:24:02.906 }' 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.906 02:47:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.164 02:47:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.164 02:47:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.099 02:47:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.358 "name": "raid_bdev1", 00:24:04.358 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:04.358 "strip_size_kb": 64, 00:24:04.358 "state": "online", 00:24:04.358 "raid_level": "raid5f", 00:24:04.358 "superblock": false, 00:24:04.358 "num_base_bdevs": 4, 00:24:04.358 "num_base_bdevs_discovered": 4, 00:24:04.358 "num_base_bdevs_operational": 4, 00:24:04.358 "process": { 00:24:04.358 "type": "rebuild", 00:24:04.358 "target": "spare", 00:24:04.358 "progress": { 00:24:04.358 "blocks": 159360, 00:24:04.358 "percent": 81 00:24:04.358 } 00:24:04.358 }, 00:24:04.358 "base_bdevs_list": [ 00:24:04.358 { 00:24:04.358 "name": "spare", 00:24:04.358 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:04.358 "is_configured": true, 00:24:04.358 "data_offset": 0, 00:24:04.358 "data_size": 65536 00:24:04.358 }, 00:24:04.358 { 00:24:04.358 "name": "BaseBdev2", 00:24:04.358 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:04.358 "is_configured": true, 00:24:04.358 "data_offset": 0, 00:24:04.358 "data_size": 65536 00:24:04.358 }, 00:24:04.358 { 00:24:04.358 "name": "BaseBdev3", 00:24:04.358 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:04.358 "is_configured": true, 00:24:04.358 "data_offset": 0, 00:24:04.358 "data_size": 65536 00:24:04.358 }, 00:24:04.358 { 00:24:04.358 "name": "BaseBdev4", 00:24:04.358 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:04.358 "is_configured": true, 00:24:04.358 "data_offset": 0, 00:24:04.358 "data_size": 65536 00:24:04.358 } 00:24:04.358 ] 00:24:04.358 }' 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.358 02:47:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.290 02:47:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.548 02:47:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.548 "name": "raid_bdev1", 00:24:05.548 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:05.548 "strip_size_kb": 64, 00:24:05.548 "state": "online", 00:24:05.548 "raid_level": "raid5f", 00:24:05.548 "superblock": false, 00:24:05.548 "num_base_bdevs": 4, 00:24:05.548 "num_base_bdevs_discovered": 4, 00:24:05.548 "num_base_bdevs_operational": 4, 00:24:05.548 "process": { 00:24:05.548 "type": "rebuild", 00:24:05.548 "target": "spare", 00:24:05.548 "progress": { 00:24:05.548 "blocks": 186240, 00:24:05.548 "percent": 94 00:24:05.548 } 00:24:05.548 }, 00:24:05.548 "base_bdevs_list": [ 00:24:05.548 { 00:24:05.548 "name": "spare", 00:24:05.548 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:05.548 "is_configured": true, 00:24:05.548 "data_offset": 0, 00:24:05.548 "data_size": 65536 00:24:05.548 }, 00:24:05.548 { 00:24:05.548 "name": "BaseBdev2", 00:24:05.548 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:05.548 "is_configured": true, 00:24:05.548 "data_offset": 0, 00:24:05.548 "data_size": 65536 00:24:05.548 }, 00:24:05.548 { 00:24:05.548 "name": "BaseBdev3", 00:24:05.548 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:05.548 "is_configured": true, 00:24:05.548 "data_offset": 0, 00:24:05.548 "data_size": 65536 00:24:05.548 }, 00:24:05.548 { 00:24:05.548 "name": "BaseBdev4", 00:24:05.548 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:05.548 "is_configured": true, 00:24:05.548 "data_offset": 0, 00:24:05.548 "data_size": 65536 00:24:05.548 } 00:24:05.548 ] 00:24:05.548 }' 00:24:05.548 02:47:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.806 02:47:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.806 02:47:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.806 02:47:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.806 02:47:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:06.372 [2024-07-11 02:47:31.172870] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:06.372 [2024-07-11 02:47:31.172944] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:06.372 [2024-07-11 02:47:31.173037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.630 02:47:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.889 02:47:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:06.889 02:47:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.889 "name": "raid_bdev1", 00:24:06.889 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:06.889 "strip_size_kb": 64, 00:24:06.889 "state": "online", 00:24:06.889 "raid_level": "raid5f", 00:24:06.889 "superblock": false, 00:24:06.889 "num_base_bdevs": 4, 00:24:06.889 "num_base_bdevs_discovered": 4, 00:24:06.889 "num_base_bdevs_operational": 4, 00:24:06.889 "base_bdevs_list": [ 00:24:06.889 { 00:24:06.889 "name": "spare", 00:24:06.889 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:06.889 "is_configured": true, 00:24:06.889 "data_offset": 0, 00:24:06.889 "data_size": 65536 00:24:06.889 }, 00:24:06.889 { 00:24:06.889 "name": "BaseBdev2", 00:24:06.889 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:06.889 "is_configured": true, 00:24:06.889 "data_offset": 0, 00:24:06.889 "data_size": 65536 00:24:06.889 }, 00:24:06.889 { 00:24:06.889 "name": "BaseBdev3", 00:24:06.889 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:06.889 "is_configured": true, 00:24:06.889 "data_offset": 0, 00:24:06.889 "data_size": 65536 00:24:06.889 }, 00:24:06.889 { 00:24:06.889 "name": "BaseBdev4", 00:24:06.889 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:06.889 "is_configured": true, 00:24:06.889 "data_offset": 0, 00:24:06.889 "data_size": 65536 00:24:06.889 } 00:24:06.889 ] 00:24:06.889 }' 00:24:06.889 02:47:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@660 -- # break 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.147 02:47:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.405 02:47:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.405 "name": "raid_bdev1", 00:24:07.405 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:07.405 "strip_size_kb": 64, 00:24:07.405 "state": "online", 00:24:07.405 "raid_level": "raid5f", 00:24:07.405 "superblock": false, 00:24:07.405 "num_base_bdevs": 4, 00:24:07.405 "num_base_bdevs_discovered": 4, 00:24:07.405 "num_base_bdevs_operational": 4, 00:24:07.405 "base_bdevs_list": [ 00:24:07.406 { 00:24:07.406 "name": "spare", 00:24:07.406 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:07.406 "is_configured": true, 00:24:07.406 "data_offset": 0, 00:24:07.406 "data_size": 65536 00:24:07.406 }, 00:24:07.406 { 00:24:07.406 "name": "BaseBdev2", 00:24:07.406 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:07.406 "is_configured": true, 00:24:07.406 "data_offset": 0, 00:24:07.406 "data_size": 65536 00:24:07.406 }, 00:24:07.406 { 00:24:07.406 "name": "BaseBdev3", 00:24:07.406 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:07.406 "is_configured": true, 00:24:07.406 "data_offset": 0, 00:24:07.406 "data_size": 65536 
00:24:07.406 }, 00:24:07.406 { 00:24:07.406 "name": "BaseBdev4", 00:24:07.406 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:07.406 "is_configured": true, 00:24:07.406 "data_offset": 0, 00:24:07.406 "data_size": 65536 00:24:07.406 } 00:24:07.406 ] 00:24:07.406 }' 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.406 02:47:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.663 02:47:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.663 "name": "raid_bdev1", 00:24:07.663 "uuid": "c78e9457-1e3d-4d3a-a4ea-55da71fda69e", 00:24:07.663 "strip_size_kb": 64, 00:24:07.663 "state": "online", 00:24:07.663 "raid_level": "raid5f", 00:24:07.663 "superblock": false, 00:24:07.663 "num_base_bdevs": 4, 00:24:07.663 "num_base_bdevs_discovered": 4, 00:24:07.663 "num_base_bdevs_operational": 4, 00:24:07.663 "base_bdevs_list": [ 00:24:07.663 { 00:24:07.663 "name": "spare", 00:24:07.663 "uuid": "4efaf22f-424a-566e-a6b1-1c57a91d547b", 00:24:07.663 "is_configured": true, 00:24:07.663 "data_offset": 0, 00:24:07.663 "data_size": 65536 00:24:07.663 }, 00:24:07.663 { 00:24:07.663 "name": "BaseBdev2", 00:24:07.663 "uuid": "54243702-1a99-40fa-be38-0b41125bd6bd", 00:24:07.663 "is_configured": true, 00:24:07.663 "data_offset": 0, 00:24:07.663 "data_size": 65536 00:24:07.663 }, 00:24:07.663 { 00:24:07.663 "name": "BaseBdev3", 00:24:07.663 "uuid": "9ede9002-f207-483f-b266-c640a3c908b8", 00:24:07.663 "is_configured": true, 00:24:07.663 "data_offset": 0, 00:24:07.663 "data_size": 65536 00:24:07.663 }, 00:24:07.663 { 00:24:07.663 "name": "BaseBdev4", 00:24:07.663 "uuid": "8d66cfc7-aeb6-4aad-bad3-7d60b779362b", 00:24:07.663 "is_configured": true, 00:24:07.663 "data_offset": 0, 00:24:07.663 "data_size": 65536 00:24:07.663 } 00:24:07.663 ] 00:24:07.663 }' 00:24:07.663 02:47:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.663 02:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.229 02:47:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:08.487 [2024-07-11 02:47:33.539081] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:08.487 [2024-07-11 02:47:33.539138] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:24:08.487 [2024-07-11 02:47:33.539259] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.487 [2024-07-11 02:47:33.539375] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:08.487 [2024-07-11 02:47:33.539424] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:24:08.487 02:47:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.487 02:47:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:08.745 02:47:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:08.745 02:47:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:08.745 02:47:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@12 -- # local i 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:08.745 02:47:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:09.003 /dev/nbd0 00:24:09.003 02:47:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:09.003 02:47:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:09.003 02:47:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:09.003 02:47:34 -- common/autotest_common.sh@857 -- # local i 00:24:09.003 02:47:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.003 02:47:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.003 02:47:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:09.003 02:47:34 -- common/autotest_common.sh@861 -- # break 00:24:09.003 02:47:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.003 02:47:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.003 02:47:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.003 1+0 records in 00:24:09.003 1+0 records out 00:24:09.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472736 s, 8.7 MB/s 00:24:09.003 02:47:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.003 02:47:34 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.003 02:47:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.003 02:47:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.003 02:47:34 -- common/autotest_common.sh@877 -- # return 0 00:24:09.003 02:47:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.003 02:47:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.003 02:47:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:09.261 /dev/nbd1 00:24:09.261 02:47:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:09.261 02:47:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 
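(annotation) With the rebuild finished and the raid bdev deleted, the data check that follows exports the original BaseBdev1 and the rebuilt spare over NBD and compares them byte for byte; because this is the superblock=false variant, data starts at offset 0 on both devices, hence the cmp -i 0 in the traces below. The same check in isolation, again with the illustrative $RPC helper:

    $RPC nbd_start_disk BaseBdev1 /dev/nbd0
    $RPC nbd_start_disk spare /dev/nbd1
    # data_offset is 0 without a superblock, so compare from the first byte;
    # cmp exits non-zero on the first mismatching byte
    cmp -i 0 /dev/nbd0 /dev/nbd1
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
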
00:24:09.261 02:47:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:09.261 02:47:34 -- common/autotest_common.sh@857 -- # local i 00:24:09.261 02:47:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.261 02:47:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.261 02:47:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:09.520 02:47:34 -- common/autotest_common.sh@861 -- # break 00:24:09.520 02:47:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.520 02:47:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.520 02:47:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.520 1+0 records in 00:24:09.520 1+0 records out 00:24:09.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646835 s, 6.3 MB/s 00:24:09.520 02:47:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.520 02:47:34 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.520 02:47:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.520 02:47:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.520 02:47:34 -- common/autotest_common.sh@877 -- # return 0 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.520 02:47:34 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:09.520 02:47:34 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@51 -- # local i 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.520 02:47:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@41 -- # break 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.779 02:47:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:10.037 02:47:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@41 -- # break 00:24:10.296 02:47:35 -- bdev/nbd_common.sh@45 -- # return 0 00:24:10.296 02:47:35 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:10.296 02:47:35 -- bdev/bdev_raid.sh@709 -- # killprocess 144098 00:24:10.296 02:47:35 -- common/autotest_common.sh@926 -- # '[' -z 144098 ']' 00:24:10.296 02:47:35 -- common/autotest_common.sh@930 -- # kill -0 144098 00:24:10.296 02:47:35 -- common/autotest_common.sh@931 -- # uname 00:24:10.296 02:47:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:10.296 02:47:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144098 00:24:10.296 02:47:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:10.296 02:47:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:10.296 killing process with pid 144098 00:24:10.296 02:47:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144098' 00:24:10.296 Received shutdown signal, test time was about 60.000000 seconds 00:24:10.296 00:24:10.296 Latency(us) 00:24:10.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.296 =================================================================================================================== 00:24:10.296 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:10.297 02:47:35 -- common/autotest_common.sh@945 -- # kill 144098 00:24:10.297 02:47:35 -- common/autotest_common.sh@950 -- # wait 144098 00:24:10.297 [2024-07-11 02:47:35.262803] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:10.297 [2024-07-11 02:47:35.306002] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.556 ************************************ 00:24:10.556 END TEST raid5f_rebuild_test 00:24:10.556 ************************************ 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:10.556 00:24:10.556 real 0m24.194s 00:24:10.556 user 0m36.331s 00:24:10.556 sys 0m2.416s 00:24:10.556 02:47:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.556 02:47:35 -- common/autotest_common.sh@10 -- # set +x 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:24:10.556 02:47:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:10.556 02:47:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.556 02:47:35 -- common/autotest_common.sh@10 -- # set +x 00:24:10.556 ************************************ 00:24:10.556 START TEST raid5f_rebuild_test_sb 00:24:10.556 ************************************ 00:24:10.556 02:47:35 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; 
done)) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:10.556 02:47:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=144754 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 144754 /var/tmp/spdk-raid.sock 00:24:10.557 02:47:35 -- common/autotest_common.sh@819 -- # '[' -z 144754 ']' 00:24:10.557 02:47:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:10.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:10.557 02:47:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.557 02:47:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:10.557 02:47:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.557 02:47:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:10.557 02:47:35 -- common/autotest_common.sh@10 -- # set +x 00:24:10.557 [2024-07-11 02:47:35.642357] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:10.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:10.557 Zero copy mechanism will not be used. 
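[Annotation] The base_bdevs expansion traced a little earlier (bdev_raid.sh@521, the for-loop closing with "done))" above) builds the list of base bdev names that the test later assembles into raid_bdev1. Reduced to a standalone snippet it is roughly the following — a minimal sketch lifted from the xtrace, not the verbatim test script, assuming only bash:

  # Derive the base bdev names (BaseBdev1..BaseBdev4) used throughout
  # raid5f_rebuild_test_sb; num_base_bdevs comes from the test arguments.
  num_base_bdevs=4
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
  printf '%s\n' "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4

Each of these names is then backed by a malloc bdev wrapped in a passthru bdev, as the bdev_malloc_create / bdev_passthru_create RPC calls in the trace that follows show.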
00:24:10.557 [2024-07-11 02:47:35.642564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144754 ] 00:24:10.816 [2024-07-11 02:47:35.788289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.816 [2024-07-11 02:47:35.862746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.074 [2024-07-11 02:47:35.915626] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.642 02:47:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:11.642 02:47:36 -- common/autotest_common.sh@852 -- # return 0 00:24:11.642 02:47:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:11.642 02:47:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:11.642 02:47:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:11.901 BaseBdev1_malloc 00:24:11.901 02:47:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:12.203 [2024-07-11 02:47:37.003840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:12.203 [2024-07-11 02:47:37.003956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.203 [2024-07-11 02:47:37.003998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:24:12.203 [2024-07-11 02:47:37.004058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.203 [2024-07-11 02:47:37.006559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.203 [2024-07-11 02:47:37.006626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:12.203 BaseBdev1 00:24:12.203 02:47:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:12.203 02:47:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:12.203 02:47:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:12.203 BaseBdev2_malloc 00:24:12.203 02:47:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:12.491 [2024-07-11 02:47:37.402085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:12.491 [2024-07-11 02:47:37.402182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.491 [2024-07-11 02:47:37.402225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:12.491 [2024-07-11 02:47:37.402269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.491 [2024-07-11 02:47:37.404312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.491 [2024-07-11 02:47:37.404376] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.491 BaseBdev2 00:24:12.491 02:47:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:12.491 02:47:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:12.491 02:47:37 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.793 BaseBdev3_malloc 00:24:12.793 02:47:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.793 [2024-07-11 02:47:37.827837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.793 [2024-07-11 02:47:37.827934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.793 [2024-07-11 02:47:37.827978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:12.793 [2024-07-11 02:47:37.828024] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.793 [2024-07-11 02:47:37.830160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.793 [2024-07-11 02:47:37.830229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.793 BaseBdev3 00:24:12.793 02:47:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:12.793 02:47:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:12.793 02:47:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:13.070 BaseBdev4_malloc 00:24:13.070 02:47:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:13.329 [2024-07-11 02:47:38.238988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:13.329 [2024-07-11 02:47:38.239098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.329 [2024-07-11 02:47:38.239139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:13.329 [2024-07-11 02:47:38.239183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.329 [2024-07-11 02:47:38.241324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.329 [2024-07-11 02:47:38.241390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:13.329 BaseBdev4 00:24:13.329 02:47:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:13.587 spare_malloc 00:24:13.587 02:47:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:13.587 spare_delay 00:24:13.588 02:47:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.846 [2024-07-11 02:47:38.837371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.846 [2024-07-11 02:47:38.837467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.846 [2024-07-11 02:47:38.837502] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:13.846 [2024-07-11 02:47:38.837543] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.846 [2024-07-11 02:47:38.839949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:13.846 [2024-07-11 02:47:38.840041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.846 spare 00:24:13.846 02:47:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:14.105 [2024-07-11 02:47:39.033503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:14.105 [2024-07-11 02:47:39.035191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.105 [2024-07-11 02:47:39.035266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:14.105 [2024-07-11 02:47:39.035330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:14.105 [2024-07-11 02:47:39.035581] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:24:14.105 [2024-07-11 02:47:39.035597] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:14.105 [2024-07-11 02:47:39.035749] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:24:14.105 [2024-07-11 02:47:39.036531] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:24:14.105 [2024-07-11 02:47:39.036556] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:24:14.105 [2024-07-11 02:47:39.036727] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.105 02:47:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.364 02:47:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.364 "name": "raid_bdev1", 00:24:14.364 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:14.364 "strip_size_kb": 64, 00:24:14.364 "state": "online", 00:24:14.364 "raid_level": "raid5f", 00:24:14.364 "superblock": true, 00:24:14.364 "num_base_bdevs": 4, 00:24:14.364 "num_base_bdevs_discovered": 4, 00:24:14.364 "num_base_bdevs_operational": 4, 00:24:14.364 "base_bdevs_list": [ 00:24:14.364 { 00:24:14.364 "name": "BaseBdev1", 00:24:14.364 "uuid": "319cf1cf-4ebe-5bc0-933a-45eb27209db9", 00:24:14.364 "is_configured": true, 00:24:14.364 "data_offset": 2048, 00:24:14.364 "data_size": 63488 00:24:14.364 }, 00:24:14.364 { 00:24:14.364 "name": "BaseBdev2", 00:24:14.364 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:14.364 "is_configured": true, 00:24:14.364 
"data_offset": 2048, 00:24:14.364 "data_size": 63488 00:24:14.364 }, 00:24:14.364 { 00:24:14.364 "name": "BaseBdev3", 00:24:14.364 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:14.364 "is_configured": true, 00:24:14.364 "data_offset": 2048, 00:24:14.364 "data_size": 63488 00:24:14.364 }, 00:24:14.364 { 00:24:14.364 "name": "BaseBdev4", 00:24:14.364 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:14.364 "is_configured": true, 00:24:14.364 "data_offset": 2048, 00:24:14.364 "data_size": 63488 00:24:14.364 } 00:24:14.364 ] 00:24:14.364 }' 00:24:14.364 02:47:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.364 02:47:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.931 02:47:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:14.931 02:47:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:15.190 [2024-07-11 02:47:40.123197] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.190 02:47:40 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:24:15.190 02:47:40 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.190 02:47:40 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:15.449 02:47:40 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:15.449 02:47:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:15.449 02:47:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:15.449 02:47:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@12 -- # local i 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:15.449 02:47:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:15.708 [2024-07-11 02:47:40.559227] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:24:15.708 /dev/nbd0 00:24:15.708 02:47:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:15.708 02:47:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:15.708 02:47:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:15.708 02:47:40 -- common/autotest_common.sh@857 -- # local i 00:24:15.708 02:47:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:15.708 02:47:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:15.708 02:47:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:15.708 02:47:40 -- common/autotest_common.sh@861 -- # break 00:24:15.708 02:47:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:15.708 02:47:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:15.708 02:47:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:15.708 1+0 records in 00:24:15.708 1+0 records out 00:24:15.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627359 s, 6.5 
MB/s 00:24:15.708 02:47:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.708 02:47:40 -- common/autotest_common.sh@874 -- # size=4096 00:24:15.708 02:47:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.708 02:47:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:15.708 02:47:40 -- common/autotest_common.sh@877 -- # return 0 00:24:15.708 02:47:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:15.708 02:47:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:15.708 02:47:40 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:15.708 02:47:40 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:15.708 02:47:40 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:15.708 02:47:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:15.967 496+0 records in 00:24:15.967 496+0 records out 00:24:15.967 97517568 bytes (98 MB, 93 MiB) copied, 0.39687 s, 246 MB/s 00:24:15.967 02:47:41 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@51 -- # local i 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:15.967 02:47:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:16.225 [2024-07-11 02:47:41.275196] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@41 -- # break 00:24:16.225 02:47:41 -- bdev/nbd_common.sh@45 -- # return 0 00:24:16.225 02:47:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:16.485 [2024-07-11 02:47:41.506721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.485 02:47:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.485 02:47:41 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.744 02:47:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.744 "name": "raid_bdev1", 00:24:16.744 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:16.744 "strip_size_kb": 64, 00:24:16.744 "state": "online", 00:24:16.744 "raid_level": "raid5f", 00:24:16.744 "superblock": true, 00:24:16.744 "num_base_bdevs": 4, 00:24:16.744 "num_base_bdevs_discovered": 3, 00:24:16.744 "num_base_bdevs_operational": 3, 00:24:16.744 "base_bdevs_list": [ 00:24:16.744 { 00:24:16.744 "name": null, 00:24:16.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.744 "is_configured": false, 00:24:16.744 "data_offset": 2048, 00:24:16.744 "data_size": 63488 00:24:16.744 }, 00:24:16.744 { 00:24:16.744 "name": "BaseBdev2", 00:24:16.744 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:16.744 "is_configured": true, 00:24:16.745 "data_offset": 2048, 00:24:16.745 "data_size": 63488 00:24:16.745 }, 00:24:16.745 { 00:24:16.745 "name": "BaseBdev3", 00:24:16.745 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:16.745 "is_configured": true, 00:24:16.745 "data_offset": 2048, 00:24:16.745 "data_size": 63488 00:24:16.745 }, 00:24:16.745 { 00:24:16.745 "name": "BaseBdev4", 00:24:16.745 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:16.745 "is_configured": true, 00:24:16.745 "data_offset": 2048, 00:24:16.745 "data_size": 63488 00:24:16.745 } 00:24:16.745 ] 00:24:16.745 }' 00:24:16.745 02:47:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.745 02:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.313 02:47:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:17.572 [2024-07-11 02:47:42.598946] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:17.572 [2024-07-11 02:47:42.599002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.572 [2024-07-11 02:47:42.603167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000291f0 00:24:17.572 [2024-07-11 02:47:42.605461] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.572 02:47:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.947 02:47:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.947 "name": "raid_bdev1", 00:24:18.947 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:18.947 "strip_size_kb": 64, 00:24:18.947 "state": "online", 00:24:18.947 "raid_level": "raid5f", 00:24:18.947 "superblock": true, 00:24:18.947 "num_base_bdevs": 4, 00:24:18.947 "num_base_bdevs_discovered": 4, 00:24:18.947 "num_base_bdevs_operational": 4, 00:24:18.947 "process": { 00:24:18.947 "type": "rebuild", 00:24:18.947 "target": "spare", 00:24:18.947 "progress": { 00:24:18.947 "blocks": 
23040, 00:24:18.947 "percent": 12 00:24:18.947 } 00:24:18.947 }, 00:24:18.947 "base_bdevs_list": [ 00:24:18.947 { 00:24:18.947 "name": "spare", 00:24:18.947 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:18.947 "is_configured": true, 00:24:18.947 "data_offset": 2048, 00:24:18.947 "data_size": 63488 00:24:18.947 }, 00:24:18.947 { 00:24:18.947 "name": "BaseBdev2", 00:24:18.947 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:18.948 "is_configured": true, 00:24:18.948 "data_offset": 2048, 00:24:18.948 "data_size": 63488 00:24:18.948 }, 00:24:18.948 { 00:24:18.948 "name": "BaseBdev3", 00:24:18.948 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:18.948 "is_configured": true, 00:24:18.948 "data_offset": 2048, 00:24:18.948 "data_size": 63488 00:24:18.948 }, 00:24:18.948 { 00:24:18.948 "name": "BaseBdev4", 00:24:18.948 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:18.948 "is_configured": true, 00:24:18.948 "data_offset": 2048, 00:24:18.948 "data_size": 63488 00:24:18.948 } 00:24:18.948 ] 00:24:18.948 }' 00:24:18.948 02:47:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.948 02:47:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.948 02:47:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.948 02:47:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.948 02:47:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:19.205 [2024-07-11 02:47:44.148579] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.205 [2024-07-11 02:47:44.216759] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:19.205 [2024-07-11 02:47:44.217309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.205 02:47:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.464 02:47:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.464 "name": "raid_bdev1", 00:24:19.464 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:19.464 "strip_size_kb": 64, 00:24:19.464 "state": "online", 00:24:19.464 "raid_level": "raid5f", 00:24:19.464 "superblock": true, 00:24:19.464 "num_base_bdevs": 4, 00:24:19.464 "num_base_bdevs_discovered": 3, 00:24:19.464 "num_base_bdevs_operational": 3, 00:24:19.464 "base_bdevs_list": [ 00:24:19.464 { 00:24:19.464 "name": null, 00:24:19.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.464 "is_configured": false, 00:24:19.464 
"data_offset": 2048, 00:24:19.464 "data_size": 63488 00:24:19.464 }, 00:24:19.464 { 00:24:19.464 "name": "BaseBdev2", 00:24:19.464 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:19.464 "is_configured": true, 00:24:19.464 "data_offset": 2048, 00:24:19.464 "data_size": 63488 00:24:19.464 }, 00:24:19.464 { 00:24:19.464 "name": "BaseBdev3", 00:24:19.464 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:19.464 "is_configured": true, 00:24:19.464 "data_offset": 2048, 00:24:19.464 "data_size": 63488 00:24:19.464 }, 00:24:19.464 { 00:24:19.464 "name": "BaseBdev4", 00:24:19.464 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:19.464 "is_configured": true, 00:24:19.464 "data_offset": 2048, 00:24:19.464 "data_size": 63488 00:24:19.464 } 00:24:19.464 ] 00:24:19.464 }' 00:24:19.464 02:47:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.464 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.400 "name": "raid_bdev1", 00:24:20.400 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:20.400 "strip_size_kb": 64, 00:24:20.400 "state": "online", 00:24:20.400 "raid_level": "raid5f", 00:24:20.400 "superblock": true, 00:24:20.400 "num_base_bdevs": 4, 00:24:20.400 "num_base_bdevs_discovered": 3, 00:24:20.400 "num_base_bdevs_operational": 3, 00:24:20.400 "base_bdevs_list": [ 00:24:20.400 { 00:24:20.400 "name": null, 00:24:20.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.400 "is_configured": false, 00:24:20.400 "data_offset": 2048, 00:24:20.400 "data_size": 63488 00:24:20.400 }, 00:24:20.400 { 00:24:20.400 "name": "BaseBdev2", 00:24:20.400 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:20.400 "is_configured": true, 00:24:20.400 "data_offset": 2048, 00:24:20.400 "data_size": 63488 00:24:20.400 }, 00:24:20.400 { 00:24:20.400 "name": "BaseBdev3", 00:24:20.400 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:20.400 "is_configured": true, 00:24:20.400 "data_offset": 2048, 00:24:20.400 "data_size": 63488 00:24:20.400 }, 00:24:20.400 { 00:24:20.400 "name": "BaseBdev4", 00:24:20.400 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:20.400 "is_configured": true, 00:24:20.400 "data_offset": 2048, 00:24:20.400 "data_size": 63488 00:24:20.400 } 00:24:20.400 ] 00:24:20.400 }' 00:24:20.400 02:47:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.401 02:47:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:20.401 02:47:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.401 02:47:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:20.401 02:47:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:20.659 [2024-07-11 02:47:45.743686] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:24:20.659 [2024-07-11 02:47:45.743761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.659 [2024-07-11 02:47:45.747824] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:24:20.659 [2024-07-11 02:47:45.749936] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.917 02:47:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.853 02:47:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.112 02:47:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.112 "name": "raid_bdev1", 00:24:22.112 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:22.112 "strip_size_kb": 64, 00:24:22.112 "state": "online", 00:24:22.112 "raid_level": "raid5f", 00:24:22.112 "superblock": true, 00:24:22.112 "num_base_bdevs": 4, 00:24:22.112 "num_base_bdevs_discovered": 4, 00:24:22.112 "num_base_bdevs_operational": 4, 00:24:22.112 "process": { 00:24:22.112 "type": "rebuild", 00:24:22.112 "target": "spare", 00:24:22.112 "progress": { 00:24:22.112 "blocks": 23040, 00:24:22.112 "percent": 12 00:24:22.112 } 00:24:22.112 }, 00:24:22.112 "base_bdevs_list": [ 00:24:22.112 { 00:24:22.112 "name": "spare", 00:24:22.112 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:22.112 "is_configured": true, 00:24:22.112 "data_offset": 2048, 00:24:22.112 "data_size": 63488 00:24:22.112 }, 00:24:22.112 { 00:24:22.112 "name": "BaseBdev2", 00:24:22.112 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:22.112 "is_configured": true, 00:24:22.112 "data_offset": 2048, 00:24:22.112 "data_size": 63488 00:24:22.112 }, 00:24:22.112 { 00:24:22.112 "name": "BaseBdev3", 00:24:22.112 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:22.112 "is_configured": true, 00:24:22.112 "data_offset": 2048, 00:24:22.112 "data_size": 63488 00:24:22.112 }, 00:24:22.112 { 00:24:22.112 "name": "BaseBdev4", 00:24:22.112 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:22.112 "is_configured": true, 00:24:22.112 "data_offset": 2048, 00:24:22.112 "data_size": 63488 00:24:22.112 } 00:24:22.112 ] 00:24:22.112 }' 00:24:22.112 02:47:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:22.112 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@657 -- # local timeout=681 00:24:22.112 
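[Annotation] The "local timeout=681" just traced feeds the progress-polling loop that follows (bdev_raid.sh@657–662): once per second the test re-reads raid_bdev1 via bdev_raid_get_bdevs and inspects the embedded process object, bounded by bash's SECONDS builtin. Condensed from the xtrace, the pattern is roughly this — a sketch, not the verbatim bdev_raid.sh source; it assumes rpc.py and jq are available as in the trace:

  timeout=681
  while (( SECONDS < timeout )); do
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
             bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # While the rebuild is running, .process carries type/target/progress;
    # it disappears (falling back to "none") once the rebuild finishes.
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare   ]] || break
    sleep 1
  done

In the log this appears as the repeating raid_bdev_info dumps whose progress blocks/percent climb (23040/12, 28800/15, 55680/29, 82560/43, 107520/56, 132480/69, 159360/83, 184320/96) until the "Finished rebuild on raid bdev raid_bdev1" notice, after which .process.type reads none and the loop breaks to verify the final online state.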
02:47:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.112 02:47:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.371 02:47:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.371 "name": "raid_bdev1", 00:24:22.371 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:22.371 "strip_size_kb": 64, 00:24:22.371 "state": "online", 00:24:22.371 "raid_level": "raid5f", 00:24:22.371 "superblock": true, 00:24:22.371 "num_base_bdevs": 4, 00:24:22.371 "num_base_bdevs_discovered": 4, 00:24:22.371 "num_base_bdevs_operational": 4, 00:24:22.371 "process": { 00:24:22.371 "type": "rebuild", 00:24:22.371 "target": "spare", 00:24:22.371 "progress": { 00:24:22.371 "blocks": 28800, 00:24:22.371 "percent": 15 00:24:22.371 } 00:24:22.371 }, 00:24:22.371 "base_bdevs_list": [ 00:24:22.371 { 00:24:22.371 "name": "spare", 00:24:22.371 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:22.371 "is_configured": true, 00:24:22.371 "data_offset": 2048, 00:24:22.371 "data_size": 63488 00:24:22.371 }, 00:24:22.371 { 00:24:22.371 "name": "BaseBdev2", 00:24:22.371 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:22.371 "is_configured": true, 00:24:22.371 "data_offset": 2048, 00:24:22.371 "data_size": 63488 00:24:22.371 }, 00:24:22.371 { 00:24:22.371 "name": "BaseBdev3", 00:24:22.371 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:22.371 "is_configured": true, 00:24:22.371 "data_offset": 2048, 00:24:22.371 "data_size": 63488 00:24:22.371 }, 00:24:22.371 { 00:24:22.371 "name": "BaseBdev4", 00:24:22.371 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:22.371 "is_configured": true, 00:24:22.371 "data_offset": 2048, 00:24:22.371 "data_size": 63488 00:24:22.371 } 00:24:22.371 ] 00:24:22.371 }' 00:24:22.371 02:47:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.371 02:47:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.371 02:47:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.629 02:47:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.629 02:47:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.565 02:47:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.823 02:47:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:23.823 "name": "raid_bdev1", 
00:24:23.823 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:23.823 "strip_size_kb": 64, 00:24:23.823 "state": "online", 00:24:23.823 "raid_level": "raid5f", 00:24:23.824 "superblock": true, 00:24:23.824 "num_base_bdevs": 4, 00:24:23.824 "num_base_bdevs_discovered": 4, 00:24:23.824 "num_base_bdevs_operational": 4, 00:24:23.824 "process": { 00:24:23.824 "type": "rebuild", 00:24:23.824 "target": "spare", 00:24:23.824 "progress": { 00:24:23.824 "blocks": 55680, 00:24:23.824 "percent": 29 00:24:23.824 } 00:24:23.824 }, 00:24:23.824 "base_bdevs_list": [ 00:24:23.824 { 00:24:23.824 "name": "spare", 00:24:23.824 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:23.824 "is_configured": true, 00:24:23.824 "data_offset": 2048, 00:24:23.824 "data_size": 63488 00:24:23.824 }, 00:24:23.824 { 00:24:23.824 "name": "BaseBdev2", 00:24:23.824 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:23.824 "is_configured": true, 00:24:23.824 "data_offset": 2048, 00:24:23.824 "data_size": 63488 00:24:23.824 }, 00:24:23.824 { 00:24:23.824 "name": "BaseBdev3", 00:24:23.824 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:23.824 "is_configured": true, 00:24:23.824 "data_offset": 2048, 00:24:23.824 "data_size": 63488 00:24:23.824 }, 00:24:23.824 { 00:24:23.824 "name": "BaseBdev4", 00:24:23.824 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:23.824 "is_configured": true, 00:24:23.824 "data_offset": 2048, 00:24:23.824 "data_size": 63488 00:24:23.824 } 00:24:23.824 ] 00:24:23.824 }' 00:24:23.824 02:47:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:23.824 02:47:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:23.824 02:47:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:23.824 02:47:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:23.824 02:47:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.200 02:47:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:25.200 "name": "raid_bdev1", 00:24:25.200 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:25.200 "strip_size_kb": 64, 00:24:25.200 "state": "online", 00:24:25.200 "raid_level": "raid5f", 00:24:25.200 "superblock": true, 00:24:25.200 "num_base_bdevs": 4, 00:24:25.200 "num_base_bdevs_discovered": 4, 00:24:25.200 "num_base_bdevs_operational": 4, 00:24:25.200 "process": { 00:24:25.200 "type": "rebuild", 00:24:25.200 "target": "spare", 00:24:25.200 "progress": { 00:24:25.200 "blocks": 82560, 00:24:25.200 "percent": 43 00:24:25.200 } 00:24:25.200 }, 00:24:25.200 "base_bdevs_list": [ 00:24:25.200 { 00:24:25.200 "name": "spare", 00:24:25.200 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:25.200 "is_configured": true, 00:24:25.200 "data_offset": 2048, 00:24:25.200 "data_size": 63488 00:24:25.200 }, 00:24:25.200 { 00:24:25.200 "name": 
"BaseBdev2", 00:24:25.200 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:25.200 "is_configured": true, 00:24:25.200 "data_offset": 2048, 00:24:25.200 "data_size": 63488 00:24:25.200 }, 00:24:25.200 { 00:24:25.200 "name": "BaseBdev3", 00:24:25.200 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:25.200 "is_configured": true, 00:24:25.200 "data_offset": 2048, 00:24:25.200 "data_size": 63488 00:24:25.200 }, 00:24:25.200 { 00:24:25.200 "name": "BaseBdev4", 00:24:25.200 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:25.200 "is_configured": true, 00:24:25.200 "data_offset": 2048, 00:24:25.200 "data_size": 63488 00:24:25.200 } 00:24:25.200 ] 00:24:25.200 }' 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.200 02:47:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:26.135 02:47:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:26.393 02:47:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.393 02:47:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.393 02:47:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:26.393 "name": "raid_bdev1", 00:24:26.393 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:26.393 "strip_size_kb": 64, 00:24:26.393 "state": "online", 00:24:26.393 "raid_level": "raid5f", 00:24:26.393 "superblock": true, 00:24:26.393 "num_base_bdevs": 4, 00:24:26.393 "num_base_bdevs_discovered": 4, 00:24:26.393 "num_base_bdevs_operational": 4, 00:24:26.393 "process": { 00:24:26.393 "type": "rebuild", 00:24:26.393 "target": "spare", 00:24:26.393 "progress": { 00:24:26.393 "blocks": 107520, 00:24:26.393 "percent": 56 00:24:26.393 } 00:24:26.393 }, 00:24:26.393 "base_bdevs_list": [ 00:24:26.393 { 00:24:26.393 "name": "spare", 00:24:26.393 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:26.393 "is_configured": true, 00:24:26.393 "data_offset": 2048, 00:24:26.393 "data_size": 63488 00:24:26.393 }, 00:24:26.393 { 00:24:26.393 "name": "BaseBdev2", 00:24:26.393 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:26.393 "is_configured": true, 00:24:26.393 "data_offset": 2048, 00:24:26.393 "data_size": 63488 00:24:26.393 }, 00:24:26.393 { 00:24:26.393 "name": "BaseBdev3", 00:24:26.393 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:26.393 "is_configured": true, 00:24:26.393 "data_offset": 2048, 00:24:26.393 "data_size": 63488 00:24:26.393 }, 00:24:26.393 { 00:24:26.393 "name": "BaseBdev4", 00:24:26.393 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:26.393 "is_configured": true, 00:24:26.393 "data_offset": 2048, 00:24:26.393 "data_size": 63488 00:24:26.393 } 00:24:26.393 ] 00:24:26.393 }' 00:24:26.393 02:47:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:26.661 02:47:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:24:26.661 02:47:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:26.661 02:47:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.661 02:47:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.607 02:47:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.892 "name": "raid_bdev1", 00:24:27.892 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:27.892 "strip_size_kb": 64, 00:24:27.892 "state": "online", 00:24:27.892 "raid_level": "raid5f", 00:24:27.892 "superblock": true, 00:24:27.892 "num_base_bdevs": 4, 00:24:27.892 "num_base_bdevs_discovered": 4, 00:24:27.892 "num_base_bdevs_operational": 4, 00:24:27.892 "process": { 00:24:27.892 "type": "rebuild", 00:24:27.892 "target": "spare", 00:24:27.892 "progress": { 00:24:27.892 "blocks": 132480, 00:24:27.892 "percent": 69 00:24:27.892 } 00:24:27.892 }, 00:24:27.892 "base_bdevs_list": [ 00:24:27.892 { 00:24:27.892 "name": "spare", 00:24:27.892 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:27.892 "is_configured": true, 00:24:27.892 "data_offset": 2048, 00:24:27.892 "data_size": 63488 00:24:27.892 }, 00:24:27.892 { 00:24:27.892 "name": "BaseBdev2", 00:24:27.892 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:27.892 "is_configured": true, 00:24:27.892 "data_offset": 2048, 00:24:27.892 "data_size": 63488 00:24:27.892 }, 00:24:27.892 { 00:24:27.892 "name": "BaseBdev3", 00:24:27.892 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:27.892 "is_configured": true, 00:24:27.892 "data_offset": 2048, 00:24:27.892 "data_size": 63488 00:24:27.892 }, 00:24:27.892 { 00:24:27.892 "name": "BaseBdev4", 00:24:27.892 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:27.892 "is_configured": true, 00:24:27.892 "data_offset": 2048, 00:24:27.892 "data_size": 63488 00:24:27.892 } 00:24:27.892 ] 00:24:27.892 }' 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.892 02:47:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:28.824 02:47:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.825 02:47:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.081 02:47:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:29.081 "name": "raid_bdev1", 00:24:29.081 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:29.081 "strip_size_kb": 64, 00:24:29.081 "state": "online", 00:24:29.081 "raid_level": "raid5f", 00:24:29.081 "superblock": true, 00:24:29.081 "num_base_bdevs": 4, 00:24:29.081 "num_base_bdevs_discovered": 4, 00:24:29.082 "num_base_bdevs_operational": 4, 00:24:29.082 "process": { 00:24:29.082 "type": "rebuild", 00:24:29.082 "target": "spare", 00:24:29.082 "progress": { 00:24:29.082 "blocks": 159360, 00:24:29.082 "percent": 83 00:24:29.082 } 00:24:29.082 }, 00:24:29.082 "base_bdevs_list": [ 00:24:29.082 { 00:24:29.082 "name": "spare", 00:24:29.082 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:29.082 "is_configured": true, 00:24:29.082 "data_offset": 2048, 00:24:29.082 "data_size": 63488 00:24:29.082 }, 00:24:29.082 { 00:24:29.082 "name": "BaseBdev2", 00:24:29.082 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:29.082 "is_configured": true, 00:24:29.082 "data_offset": 2048, 00:24:29.082 "data_size": 63488 00:24:29.082 }, 00:24:29.082 { 00:24:29.082 "name": "BaseBdev3", 00:24:29.082 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:29.082 "is_configured": true, 00:24:29.082 "data_offset": 2048, 00:24:29.082 "data_size": 63488 00:24:29.082 }, 00:24:29.082 { 00:24:29.082 "name": "BaseBdev4", 00:24:29.082 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:29.082 "is_configured": true, 00:24:29.082 "data_offset": 2048, 00:24:29.082 "data_size": 63488 00:24:29.082 } 00:24:29.082 ] 00:24:29.082 }' 00:24:29.082 02:47:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:29.339 02:47:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.339 02:47:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:29.339 02:47:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.339 02:47:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:30.270 02:47:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:30.270 02:47:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:30.270 02:47:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:30.270 02:47:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:30.271 02:47:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:30.271 02:47:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:30.271 02:47:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.271 02:47:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.527 02:47:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:30.527 "name": "raid_bdev1", 00:24:30.527 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:30.527 "strip_size_kb": 64, 00:24:30.527 "state": "online", 00:24:30.527 "raid_level": "raid5f", 00:24:30.528 "superblock": true, 00:24:30.528 "num_base_bdevs": 4, 00:24:30.528 "num_base_bdevs_discovered": 4, 00:24:30.528 "num_base_bdevs_operational": 4, 00:24:30.528 "process": { 00:24:30.528 "type": "rebuild", 00:24:30.528 "target": "spare", 00:24:30.528 "progress": { 00:24:30.528 "blocks": 184320, 00:24:30.528 "percent": 96 00:24:30.528 } 00:24:30.528 }, 00:24:30.528 "base_bdevs_list": [ 00:24:30.528 { 
00:24:30.528 "name": "spare", 00:24:30.528 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:30.528 "is_configured": true, 00:24:30.528 "data_offset": 2048, 00:24:30.528 "data_size": 63488 00:24:30.528 }, 00:24:30.528 { 00:24:30.528 "name": "BaseBdev2", 00:24:30.528 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:30.528 "is_configured": true, 00:24:30.528 "data_offset": 2048, 00:24:30.528 "data_size": 63488 00:24:30.528 }, 00:24:30.528 { 00:24:30.528 "name": "BaseBdev3", 00:24:30.528 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:30.528 "is_configured": true, 00:24:30.528 "data_offset": 2048, 00:24:30.528 "data_size": 63488 00:24:30.528 }, 00:24:30.528 { 00:24:30.528 "name": "BaseBdev4", 00:24:30.528 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:30.528 "is_configured": true, 00:24:30.528 "data_offset": 2048, 00:24:30.528 "data_size": 63488 00:24:30.528 } 00:24:30.528 ] 00:24:30.528 }' 00:24:30.528 02:47:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:30.528 02:47:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:30.528 02:47:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:30.528 02:47:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.528 02:47:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:30.785 [2024-07-11 02:47:55.822623] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:30.785 [2024-07-11 02:47:55.822694] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:30.785 [2024-07-11 02:47:55.822855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.719 02:47:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:31.977 "name": "raid_bdev1", 00:24:31.977 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:31.977 "strip_size_kb": 64, 00:24:31.977 "state": "online", 00:24:31.977 "raid_level": "raid5f", 00:24:31.977 "superblock": true, 00:24:31.977 "num_base_bdevs": 4, 00:24:31.977 "num_base_bdevs_discovered": 4, 00:24:31.977 "num_base_bdevs_operational": 4, 00:24:31.977 "base_bdevs_list": [ 00:24:31.977 { 00:24:31.977 "name": "spare", 00:24:31.977 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:31.977 "is_configured": true, 00:24:31.977 "data_offset": 2048, 00:24:31.977 "data_size": 63488 00:24:31.977 }, 00:24:31.977 { 00:24:31.977 "name": "BaseBdev2", 00:24:31.977 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:31.977 "is_configured": true, 00:24:31.977 "data_offset": 2048, 00:24:31.977 "data_size": 63488 00:24:31.977 }, 00:24:31.977 { 00:24:31.977 "name": "BaseBdev3", 00:24:31.977 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:31.977 "is_configured": true, 00:24:31.977 "data_offset": 2048, 00:24:31.977 "data_size": 63488 
00:24:31.977 }, 00:24:31.977 { 00:24:31.977 "name": "BaseBdev4", 00:24:31.977 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:31.977 "is_configured": true, 00:24:31.977 "data_offset": 2048, 00:24:31.977 "data_size": 63488 00:24:31.977 } 00:24:31.977 ] 00:24:31.977 }' 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@660 -- # break 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.977 02:47:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.235 02:47:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:32.235 "name": "raid_bdev1", 00:24:32.235 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:32.235 "strip_size_kb": 64, 00:24:32.235 "state": "online", 00:24:32.235 "raid_level": "raid5f", 00:24:32.235 "superblock": true, 00:24:32.235 "num_base_bdevs": 4, 00:24:32.235 "num_base_bdevs_discovered": 4, 00:24:32.236 "num_base_bdevs_operational": 4, 00:24:32.236 "base_bdevs_list": [ 00:24:32.236 { 00:24:32.236 "name": "spare", 00:24:32.236 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:32.236 "is_configured": true, 00:24:32.236 "data_offset": 2048, 00:24:32.236 "data_size": 63488 00:24:32.236 }, 00:24:32.236 { 00:24:32.236 "name": "BaseBdev2", 00:24:32.236 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:32.236 "is_configured": true, 00:24:32.236 "data_offset": 2048, 00:24:32.236 "data_size": 63488 00:24:32.236 }, 00:24:32.236 { 00:24:32.236 "name": "BaseBdev3", 00:24:32.236 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:32.236 "is_configured": true, 00:24:32.236 "data_offset": 2048, 00:24:32.236 "data_size": 63488 00:24:32.236 }, 00:24:32.236 { 00:24:32.236 "name": "BaseBdev4", 00:24:32.236 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:32.236 "is_configured": true, 00:24:32.236 "data_offset": 2048, 00:24:32.236 "data_size": 63488 00:24:32.236 } 00:24:32.236 ] 00:24:32.236 }' 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:32.236 02:47:57 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.236 02:47:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.816 02:47:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.816 "name": "raid_bdev1", 00:24:32.816 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:32.816 "strip_size_kb": 64, 00:24:32.816 "state": "online", 00:24:32.816 "raid_level": "raid5f", 00:24:32.816 "superblock": true, 00:24:32.816 "num_base_bdevs": 4, 00:24:32.816 "num_base_bdevs_discovered": 4, 00:24:32.816 "num_base_bdevs_operational": 4, 00:24:32.816 "base_bdevs_list": [ 00:24:32.816 { 00:24:32.816 "name": "spare", 00:24:32.816 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:32.816 "is_configured": true, 00:24:32.816 "data_offset": 2048, 00:24:32.816 "data_size": 63488 00:24:32.816 }, 00:24:32.816 { 00:24:32.816 "name": "BaseBdev2", 00:24:32.816 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:32.816 "is_configured": true, 00:24:32.816 "data_offset": 2048, 00:24:32.816 "data_size": 63488 00:24:32.816 }, 00:24:32.816 { 00:24:32.816 "name": "BaseBdev3", 00:24:32.816 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:32.816 "is_configured": true, 00:24:32.816 "data_offset": 2048, 00:24:32.816 "data_size": 63488 00:24:32.816 }, 00:24:32.816 { 00:24:32.816 "name": "BaseBdev4", 00:24:32.816 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:32.816 "is_configured": true, 00:24:32.816 "data_offset": 2048, 00:24:32.816 "data_size": 63488 00:24:32.816 } 00:24:32.816 ] 00:24:32.816 }' 00:24:32.816 02:47:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.816 02:47:57 -- common/autotest_common.sh@10 -- # set +x 00:24:33.384 02:47:58 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.384 [2024-07-11 02:47:58.441087] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.384 [2024-07-11 02:47:58.441125] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.384 [2024-07-11 02:47:58.441233] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.384 [2024-07-11 02:47:58.441340] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.384 [2024-07-11 02:47:58.441356] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:24:33.384 02:47:58 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.384 02:47:58 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:33.642 02:47:58 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:33.642 02:47:58 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:33.642 02:47:58 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@10 -- 
# local bdev_list 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@12 -- # local i 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:33.642 02:47:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:33.900 /dev/nbd0 00:24:33.900 02:47:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:33.900 02:47:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:33.900 02:47:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:33.900 02:47:58 -- common/autotest_common.sh@857 -- # local i 00:24:33.900 02:47:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:33.900 02:47:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:33.900 02:47:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:33.900 02:47:58 -- common/autotest_common.sh@861 -- # break 00:24:33.900 02:47:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:33.900 02:47:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:33.900 02:47:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:33.900 1+0 records in 00:24:33.900 1+0 records out 00:24:33.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302716 s, 13.5 MB/s 00:24:33.900 02:47:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:33.900 02:47:58 -- common/autotest_common.sh@874 -- # size=4096 00:24:33.900 02:47:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:33.900 02:47:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:33.900 02:47:58 -- common/autotest_common.sh@877 -- # return 0 00:24:33.900 02:47:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:33.900 02:47:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:33.900 02:47:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:34.158 /dev/nbd1 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:34.417 02:47:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:34.417 02:47:59 -- common/autotest_common.sh@857 -- # local i 00:24:34.417 02:47:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:34.417 02:47:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:34.417 02:47:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:34.417 02:47:59 -- common/autotest_common.sh@861 -- # break 00:24:34.417 02:47:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:34.417 02:47:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:34.417 02:47:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:34.417 1+0 records in 00:24:34.417 1+0 records out 00:24:34.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472134 s, 8.7 MB/s 00:24:34.417 02:47:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.417 02:47:59 -- common/autotest_common.sh@874 -- # size=4096 00:24:34.417 02:47:59 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.417 02:47:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:34.417 02:47:59 -- common/autotest_common.sh@877 -- # return 0 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.417 02:47:59 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:34.417 02:47:59 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@51 -- # local i 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.417 02:47:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@41 -- # break 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.674 02:47:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:34.932 02:47:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:35.190 02:48:00 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:35.190 02:48:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.190 02:48:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:35.190 02:48:00 -- bdev/nbd_common.sh@41 -- # break 00:24:35.190 02:48:00 -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.190 02:48:00 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:35.190 02:48:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:35.190 02:48:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:35.190 02:48:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:35.448 02:48:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:35.706 [2024-07-11 02:48:00.558310] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:35.706 [2024-07-11 02:48:00.558398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.706 [2024-07-11 02:48:00.558440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:35.706 [2024-07-11 02:48:00.558460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.706 [2024-07-11 02:48:00.560576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.706 [2024-07-11 02:48:00.560657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.706 [2024-07-11 02:48:00.560741] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:35.706 [2024-07-11 02:48:00.560803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:35.706 BaseBdev1 00:24:35.706 02:48:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:35.706 02:48:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:35.706 02:48:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:35.706 02:48:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:35.964 [2024-07-11 02:48:00.974450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:35.964 [2024-07-11 02:48:00.974539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.964 [2024-07-11 02:48:00.974593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:35.964 [2024-07-11 02:48:00.974613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.964 [2024-07-11 02:48:00.975061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.964 [2024-07-11 02:48:00.975115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:35.964 [2024-07-11 02:48:00.975190] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:35.964 [2024-07-11 02:48:00.975204] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:35.964 [2024-07-11 02:48:00.975211] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:35.964 [2024-07-11 02:48:00.975244] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:24:35.964 [2024-07-11 02:48:00.975297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.964 BaseBdev2 00:24:35.964 02:48:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:35.964 02:48:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:35.964 02:48:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:36.221 02:48:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:36.478 [2024-07-11 02:48:01.398503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev3_malloc 00:24:36.478 [2024-07-11 02:48:01.398790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.478 [2024-07-11 02:48:01.398919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:36.478 [2024-07-11 02:48:01.399054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.478 [2024-07-11 02:48:01.399592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.478 [2024-07-11 02:48:01.399777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:36.478 [2024-07-11 02:48:01.399947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:36.478 [2024-07-11 02:48:01.399988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:36.478 BaseBdev3 00:24:36.478 02:48:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:36.478 02:48:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:36.478 02:48:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:36.737 02:48:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:36.737 [2024-07-11 02:48:01.794586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:36.737 [2024-07-11 02:48:01.794950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.737 [2024-07-11 02:48:01.795089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:36.737 [2024-07-11 02:48:01.795208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.737 [2024-07-11 02:48:01.795632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.737 [2024-07-11 02:48:01.795790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:36.737 [2024-07-11 02:48:01.795946] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:36.737 [2024-07-11 02:48:01.795975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:36.737 BaseBdev4 00:24:36.737 02:48:01 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:36.996 02:48:01 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:37.254 [2024-07-11 02:48:02.198647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:37.254 [2024-07-11 02:48:02.198837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.254 [2024-07-11 02:48:02.198992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:37.254 [2024-07-11 02:48:02.199133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.255 [2024-07-11 02:48:02.199616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.255 [2024-07-11 02:48:02.199815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:37.255 [2024-07-11 02:48:02.200003] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:37.255 [2024-07-11 02:48:02.200107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:37.255 spare 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.255 02:48:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.255 [2024-07-11 02:48:02.300243] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:24:37.255 [2024-07-11 02:48:02.300263] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:37.255 [2024-07-11 02:48:02.300408] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049c60 00:24:37.255 [2024-07-11 02:48:02.301240] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:24:37.255 [2024-07-11 02:48:02.301260] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:24:37.255 [2024-07-11 02:48:02.301436] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.513 02:48:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.513 "name": "raid_bdev1", 00:24:37.513 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:37.513 "strip_size_kb": 64, 00:24:37.513 "state": "online", 00:24:37.513 "raid_level": "raid5f", 00:24:37.513 "superblock": true, 00:24:37.513 "num_base_bdevs": 4, 00:24:37.513 "num_base_bdevs_discovered": 4, 00:24:37.513 "num_base_bdevs_operational": 4, 00:24:37.513 "base_bdevs_list": [ 00:24:37.513 { 00:24:37.513 "name": "spare", 00:24:37.513 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:37.513 "is_configured": true, 00:24:37.513 "data_offset": 2048, 00:24:37.513 "data_size": 63488 00:24:37.513 }, 00:24:37.513 { 00:24:37.513 "name": "BaseBdev2", 00:24:37.513 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:37.513 "is_configured": true, 00:24:37.513 "data_offset": 2048, 00:24:37.513 "data_size": 63488 00:24:37.513 }, 00:24:37.513 { 00:24:37.513 "name": "BaseBdev3", 00:24:37.513 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:37.513 "is_configured": true, 00:24:37.513 "data_offset": 2048, 00:24:37.513 "data_size": 63488 00:24:37.513 }, 00:24:37.513 { 00:24:37.513 "name": "BaseBdev4", 00:24:37.513 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:37.513 "is_configured": true, 00:24:37.513 "data_offset": 2048, 00:24:37.513 "data_size": 63488 00:24:37.513 } 00:24:37.513 ] 00:24:37.513 }' 00:24:37.513 02:48:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.513 02:48:02 -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.079 02:48:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.338 02:48:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.338 "name": "raid_bdev1", 00:24:38.338 "uuid": "8cf04bc2-8522-4524-a9c7-006717db747f", 00:24:38.338 "strip_size_kb": 64, 00:24:38.338 "state": "online", 00:24:38.338 "raid_level": "raid5f", 00:24:38.338 "superblock": true, 00:24:38.338 "num_base_bdevs": 4, 00:24:38.338 "num_base_bdevs_discovered": 4, 00:24:38.338 "num_base_bdevs_operational": 4, 00:24:38.338 "base_bdevs_list": [ 00:24:38.338 { 00:24:38.338 "name": "spare", 00:24:38.338 "uuid": "9dd0f803-d04e-5aa7-83fb-43a48fd0675d", 00:24:38.338 "is_configured": true, 00:24:38.338 "data_offset": 2048, 00:24:38.338 "data_size": 63488 00:24:38.338 }, 00:24:38.338 { 00:24:38.338 "name": "BaseBdev2", 00:24:38.338 "uuid": "9ed29240-6c63-59bb-8f2c-0862cbba6d91", 00:24:38.338 "is_configured": true, 00:24:38.338 "data_offset": 2048, 00:24:38.338 "data_size": 63488 00:24:38.338 }, 00:24:38.338 { 00:24:38.338 "name": "BaseBdev3", 00:24:38.338 "uuid": "ccd86753-155b-59e5-b88e-042f332ccc20", 00:24:38.338 "is_configured": true, 00:24:38.338 "data_offset": 2048, 00:24:38.338 "data_size": 63488 00:24:38.338 }, 00:24:38.338 { 00:24:38.338 "name": "BaseBdev4", 00:24:38.338 "uuid": "658af9de-f19f-5a2c-bcbf-144b8e1ba258", 00:24:38.338 "is_configured": true, 00:24:38.338 "data_offset": 2048, 00:24:38.338 "data_size": 63488 00:24:38.338 } 00:24:38.338 ] 00:24:38.338 }' 00:24:38.338 02:48:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.597 02:48:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:38.597 02:48:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.597 02:48:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:38.597 02:48:03 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.597 02:48:03 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:38.854 02:48:03 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.854 02:48:03 -- bdev/bdev_raid.sh@709 -- # killprocess 144754 00:24:38.854 02:48:03 -- common/autotest_common.sh@926 -- # '[' -z 144754 ']' 00:24:38.854 02:48:03 -- common/autotest_common.sh@930 -- # kill -0 144754 00:24:38.854 02:48:03 -- common/autotest_common.sh@931 -- # uname 00:24:38.854 02:48:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:38.854 02:48:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144754 00:24:38.854 killing process with pid 144754 00:24:38.854 Received shutdown signal, test time was about 60.000000 seconds 00:24:38.854 00:24:38.854 Latency(us) 00:24:38.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.854 =================================================================================================================== 
00:24:38.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:38.855 02:48:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:38.855 02:48:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:38.855 02:48:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144754' 00:24:38.855 02:48:03 -- common/autotest_common.sh@945 -- # kill 144754 00:24:38.855 02:48:03 -- common/autotest_common.sh@950 -- # wait 144754 00:24:38.855 [2024-07-11 02:48:03.813241] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.855 [2024-07-11 02:48:03.813380] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.855 [2024-07-11 02:48:03.813490] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.855 [2024-07-11 02:48:03.813510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:24:38.855 [2024-07-11 02:48:03.857263] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.122 ************************************ 00:24:39.122 END TEST raid5f_rebuild_test_sb 00:24:39.122 ************************************ 00:24:39.122 02:48:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:39.122 00:24:39.122 real 0m28.484s 00:24:39.122 user 0m44.548s 00:24:39.122 sys 0m3.055s 00:24:39.122 02:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.122 02:48:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.122 02:48:04 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:24:39.122 ************************************ 00:24:39.122 END TEST bdev_raid 00:24:39.122 ************************************ 00:24:39.122 00:24:39.122 real 11m7.452s 00:24:39.122 user 19m10.989s 00:24:39.122 sys 1m23.025s 00:24:39.122 02:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.122 02:48:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.122 02:48:04 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:39.122 02:48:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:39.122 02:48:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:39.122 02:48:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.122 ************************************ 00:24:39.122 START TEST bdevperf_config 00:24:39.122 ************************************ 00:24:39.122 02:48:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:39.394 * Looking for test storage... 
00:24:39.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:24:39.394 02:48:04 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:24:39.394 02:48:04 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:39.394 02:48:04 -- bdevperf/common.sh@9 -- # local rw=read 00:24:39.394 02:48:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:39.394 02:48:04 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:39.394 02:48:04 -- bdevperf/common.sh@13 -- # cat 00:24:39.394 02:48:04 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:39.394 00:24:39.394 02:48:04 -- bdevperf/common.sh@19 -- # echo 00:24:39.394 02:48:04 -- bdevperf/common.sh@20 -- # cat 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@18 -- # create_job job0 00:24:39.394 02:48:04 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:39.394 00:24:39.394 02:48:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.394 02:48:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.394 02:48:04 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:39.394 02:48:04 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:39.394 02:48:04 -- bdevperf/common.sh@19 -- # echo 00:24:39.394 02:48:04 -- bdevperf/common.sh@20 -- # cat 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@19 -- # create_job job1 00:24:39.394 02:48:04 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:39.394 02:48:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.394 02:48:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.394 02:48:04 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:39.394 02:48:04 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:39.394 00:24:39.394 02:48:04 -- bdevperf/common.sh@19 -- # echo 00:24:39.394 02:48:04 -- bdevperf/common.sh@20 -- # cat 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@20 -- # create_job job2 00:24:39.394 02:48:04 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:39.394 02:48:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.394 02:48:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.394 02:48:04 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:39.394 00:24:39.394 02:48:04 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:39.394 02:48:04 -- bdevperf/common.sh@19 -- # echo 00:24:39.394 02:48:04 -- bdevperf/common.sh@20 -- # cat 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@21 -- # create_job job3 00:24:39.394 02:48:04 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:39.394 02:48:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.394 02:48:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.394 02:48:04 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:39.394 00:24:39.394 02:48:04 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:39.394 02:48:04 -- bdevperf/common.sh@19 -- # echo 00:24:39.394 02:48:04 -- bdevperf/common.sh@20 -- # cat 00:24:39.394 02:48:04 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:41.924 02:48:06 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-11 02:48:04.316753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:41.924 [2024-07-11 02:48:04.316991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145570 ] 00:24:41.924 Using job config with 4 jobs 00:24:41.924 [2024-07-11 02:48:04.462238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.924 [2024-07-11 02:48:04.541664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.924 cpumask for '\''job0'\'' is too big 00:24:41.924 cpumask for '\''job1'\'' is too big 00:24:41.924 cpumask for '\''job2'\'' is too big 00:24:41.924 cpumask for '\''job3'\'' is too big 00:24:41.924 Running I/O for 2 seconds... 00:24:41.924 00:24:41.924 Latency(us) 00:24:41.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.924 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.924 Malloc0 : 2.01 31837.52 31.09 0.00 0.00 8032.44 1712.87 13107.20 00:24:41.924 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.924 Malloc0 : 2.02 31851.39 31.10 0.00 0.00 8013.21 1653.29 11439.01 00:24:41.924 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.924 Malloc0 : 2.02 31830.84 31.08 0.00 0.00 8003.72 1511.80 9770.82 00:24:41.924 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.924 Malloc0 : 2.02 31810.18 31.06 0.00 0.00 7995.64 1400.09 9532.51 00:24:41.924 =================================================================================================================== 00:24:41.924 Total : 127329.93 124.35 0.00 0.00 8011.23 1400.09 13107.20' 00:24:41.924 02:48:06 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-11 02:48:04.316753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:41.924 [2024-07-11 02:48:04.316991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145570 ] 00:24:41.924 Using job config with 4 jobs 00:24:41.925 [2024-07-11 02:48:04.462238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.925 [2024-07-11 02:48:04.541664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.925 cpumask for '\''job0'\'' is too big 00:24:41.925 cpumask for '\''job1'\'' is too big 00:24:41.925 cpumask for '\''job2'\'' is too big 00:24:41.925 cpumask for '\''job3'\'' is too big 00:24:41.925 Running I/O for 2 seconds... 
00:24:41.925 00:24:41.925 Latency(us) 00:24:41.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.01 31837.52 31.09 0.00 0.00 8032.44 1712.87 13107.20 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31851.39 31.10 0.00 0.00 8013.21 1653.29 11439.01 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31830.84 31.08 0.00 0.00 8003.72 1511.80 9770.82 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31810.18 31.06 0.00 0.00 7995.64 1400.09 9532.51 00:24:41.925 =================================================================================================================== 00:24:41.925 Total : 127329.93 124.35 0.00 0.00 8011.23 1400.09 13107.20' 00:24:41.925 02:48:06 -- bdevperf/common.sh@32 -- # echo '[2024-07-11 02:48:04.316753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:41.925 [2024-07-11 02:48:04.316991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145570 ] 00:24:41.925 Using job config with 4 jobs 00:24:41.925 [2024-07-11 02:48:04.462238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.925 [2024-07-11 02:48:04.541664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.925 cpumask for '\''job0'\'' is too big 00:24:41.925 cpumask for '\''job1'\'' is too big 00:24:41.925 cpumask for '\''job2'\'' is too big 00:24:41.925 cpumask for '\''job3'\'' is too big 00:24:41.925 Running I/O for 2 seconds... 00:24:41.925 00:24:41.925 Latency(us) 00:24:41.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.01 31837.52 31.09 0.00 0.00 8032.44 1712.87 13107.20 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31851.39 31.10 0.00 0.00 8013.21 1653.29 11439.01 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31830.84 31.08 0.00 0.00 8003.72 1511.80 9770.82 00:24:41.925 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:41.925 Malloc0 : 2.02 31810.18 31.06 0.00 0.00 7995.64 1400.09 9532.51 00:24:41.925 =================================================================================================================== 00:24:41.925 Total : 127329.93 124.35 0.00 0.00 8011.23 1400.09 13107.20' 00:24:41.925 02:48:06 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:41.925 02:48:06 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:41.925 02:48:06 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:24:41.925 02:48:06 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:42.183 [2024-07-11 02:48:07.021820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:24:42.183 [2024-07-11 02:48:07.022056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145613 ] 00:24:42.183 [2024-07-11 02:48:07.167848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.183 [2024-07-11 02:48:07.247286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.441 cpumask for 'job0' is too big 00:24:42.441 cpumask for 'job1' is too big 00:24:42.441 cpumask for 'job2' is too big 00:24:42.441 cpumask for 'job3' is too big 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:44.974 Running I/O for 2 seconds... 00:24:44.974 00:24:44.974 Latency(us) 00:24:44.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.974 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.974 Malloc0 : 2.01 33080.95 32.31 0.00 0.00 7735.39 1556.48 12928.47 00:24:44.974 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.974 Malloc0 : 2.01 33058.69 32.28 0.00 0.00 7726.83 1437.32 11498.59 00:24:44.974 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.974 Malloc0 : 2.01 33036.98 32.26 0.00 0.00 7718.36 1444.77 10068.71 00:24:44.974 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.974 Malloc0 : 2.02 33105.79 32.33 0.00 0.00 7689.18 681.43 9353.77 00:24:44.974 =================================================================================================================== 00:24:44.974 Total : 132282.41 129.18 0.00 0.00 7717.41 681.43 12928.47' 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@27 -- # cleanup 00:24:44.974 02:48:09 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:24:44.974 02:48:09 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:44.974 02:48:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:44.974 02:48:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:44.974 00:24:44.974 02:48:09 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:44.974 02:48:09 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:44.974 02:48:09 -- bdevperf/common.sh@19 -- # echo 00:24:44.974 02:48:09 -- bdevperf/common.sh@20 -- # cat 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:24:44.974 02:48:09 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:44.974 02:48:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:44.974 02:48:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:44.974 02:48:09 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:44.974 00:24:44.974 02:48:09 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:44.974 02:48:09 -- bdevperf/common.sh@19 -- # echo 00:24:44.974 02:48:09 -- bdevperf/common.sh@20 -- # cat 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:24:44.974 02:48:09 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:44.974 02:48:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:44.974 02:48:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:44.974 02:48:09 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:44.974 00:24:44.974 02:48:09 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:24:44.974 02:48:09 -- bdevperf/common.sh@19 -- # echo 00:24:44.974 02:48:09 -- bdevperf/common.sh@20 -- # cat 00:24:44.974 02:48:09 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:47.507 02:48:12 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-11 02:48:09.732296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:47.507 [2024-07-11 02:48:09.732893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145656 ] 00:24:47.507 Using job config with 3 jobs 00:24:47.507 [2024-07-11 02:48:09.869670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.508 [2024-07-11 02:48:09.939516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.508 cpumask for '\''job0'\'' is too big 00:24:47.508 cpumask for '\''job1'\'' is too big 00:24:47.508 cpumask for '\''job2'\'' is too big 00:24:47.508 Running I/O for 2 seconds... 00:24:47.508 00:24:47.508 Latency(us) 00:24:47.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 43010.26 42.00 0.00 0.00 5946.41 1422.43 8638.84 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42979.73 41.97 0.00 0.00 5940.37 1429.88 7626.01 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42948.88 41.94 0.00 0.00 5934.92 1400.09 7745.16 00:24:47.508 =================================================================================================================== 00:24:47.508 Total : 128938.88 125.92 0.00 0.00 5940.57 1400.09 8638.84' 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-11 02:48:09.732296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:47.508 [2024-07-11 02:48:09.732893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145656 ] 00:24:47.508 Using job config with 3 jobs 00:24:47.508 [2024-07-11 02:48:09.869670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.508 [2024-07-11 02:48:09.939516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.508 cpumask for '\''job0'\'' is too big 00:24:47.508 cpumask for '\''job1'\'' is too big 00:24:47.508 cpumask for '\''job2'\'' is too big 00:24:47.508 Running I/O for 2 seconds... 
00:24:47.508 00:24:47.508 Latency(us) 00:24:47.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 43010.26 42.00 0.00 0.00 5946.41 1422.43 8638.84 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42979.73 41.97 0.00 0.00 5940.37 1429.88 7626.01 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42948.88 41.94 0.00 0.00 5934.92 1400.09 7745.16 00:24:47.508 =================================================================================================================== 00:24:47.508 Total : 128938.88 125.92 0.00 0.00 5940.57 1400.09 8638.84' 00:24:47.508 02:48:12 -- bdevperf/common.sh@32 -- # echo '[2024-07-11 02:48:09.732296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:47.508 [2024-07-11 02:48:09.732893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145656 ] 00:24:47.508 Using job config with 3 jobs 00:24:47.508 [2024-07-11 02:48:09.869670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.508 [2024-07-11 02:48:09.939516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.508 cpumask for '\''job0'\'' is too big 00:24:47.508 cpumask for '\''job1'\'' is too big 00:24:47.508 cpumask for '\''job2'\'' is too big 00:24:47.508 Running I/O for 2 seconds... 00:24:47.508 00:24:47.508 Latency(us) 00:24:47.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 43010.26 42.00 0.00 0.00 5946.41 1422.43 8638.84 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42979.73 41.97 0.00 0.00 5940.37 1429.88 7626.01 00:24:47.508 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:47.508 Malloc0 : 2.01 42948.88 41.94 0.00 0.00 5934.92 1400.09 7745.16 00:24:47.508 =================================================================================================================== 00:24:47.508 Total : 128938.88 125.92 0.00 0.00 5940.57 1400.09 8638.84' 00:24:47.508 02:48:12 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:47.508 02:48:12 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@35 -- # cleanup 00:24:47.508 02:48:12 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:24:47.508 02:48:12 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:47.508 02:48:12 -- bdevperf/common.sh@9 -- # local rw=rw 00:24:47.508 02:48:12 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:24:47.508 02:48:12 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:47.508 02:48:12 -- bdevperf/common.sh@13 -- # cat 00:24:47.508 02:48:12 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:47.508 00:24:47.508 02:48:12 -- bdevperf/common.sh@19 -- # echo 00:24:47.508 
02:48:12 -- bdevperf/common.sh@20 -- # cat 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@38 -- # create_job job0 00:24:47.508 02:48:12 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:47.508 02:48:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:47.508 02:48:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:47.508 02:48:12 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:47.508 02:48:12 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:47.508 00:24:47.508 02:48:12 -- bdevperf/common.sh@19 -- # echo 00:24:47.508 02:48:12 -- bdevperf/common.sh@20 -- # cat 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@39 -- # create_job job1 00:24:47.508 02:48:12 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:47.508 02:48:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:47.508 02:48:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:47.508 02:48:12 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:47.508 02:48:12 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:47.508 00:24:47.508 02:48:12 -- bdevperf/common.sh@19 -- # echo 00:24:47.508 02:48:12 -- bdevperf/common.sh@20 -- # cat 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@40 -- # create_job job2 00:24:47.508 02:48:12 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:47.508 02:48:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:47.508 02:48:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:47.508 02:48:12 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:47.508 00:24:47.508 02:48:12 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:47.508 02:48:12 -- bdevperf/common.sh@19 -- # echo 00:24:47.508 02:48:12 -- bdevperf/common.sh@20 -- # cat 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@41 -- # create_job job3 00:24:47.508 02:48:12 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:47.508 02:48:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:47.508 02:48:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:47.508 02:48:12 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:47.508 00:24:47.508 02:48:12 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:47.508 02:48:12 -- bdevperf/common.sh@19 -- # echo 00:24:47.508 02:48:12 -- bdevperf/common.sh@20 -- # cat 00:24:47.508 02:48:12 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:50.039 02:48:15 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-11 02:48:12.435610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:50.039 [2024-07-11 02:48:12.435812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145714 ] 00:24:50.039 Using job config with 4 jobs 00:24:50.039 [2024-07-11 02:48:12.570541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.039 [2024-07-11 02:48:12.639602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.039 cpumask for '\''job0'\'' is too big 00:24:50.039 cpumask for '\''job1'\'' is too big 00:24:50.039 cpumask for '\''job2'\'' is too big 00:24:50.039 cpumask for '\''job3'\'' is too big 00:24:50.039 Running I/O for 2 seconds... 
00:24:50.039 00:24:50.039 Latency(us) 00:24:50.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.039 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc0 : 2.02 15956.63 15.58 0.00 0.00 16035.74 2949.12 24188.74 00:24:50.039 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc1 : 2.02 15945.98 15.57 0.00 0.00 16035.17 3470.43 24188.74 00:24:50.039 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc0 : 2.02 15934.98 15.56 0.00 0.00 16003.47 2874.65 21328.99 00:24:50.039 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc1 : 2.03 15923.75 15.55 0.00 0.00 16005.10 3381.06 21328.99 00:24:50.039 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc0 : 2.03 15976.96 15.60 0.00 0.00 15907.71 2889.54 19303.33 00:24:50.039 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc1 : 2.04 15966.32 15.59 0.00 0.00 15908.46 3381.06 19184.17 00:24:50.039 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc0 : 2.04 15956.00 15.58 0.00 0.00 15875.31 2859.75 19303.33 00:24:50.039 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.039 Malloc1 : 2.04 15945.57 15.57 0.00 0.00 15875.71 3381.06 19184.17 00:24:50.039 =================================================================================================================== 00:24:50.039 Total : 127606.18 124.62 0.00 0.00 15955.58 2859.75 24188.74' 00:24:50.039 02:48:15 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-11 02:48:12.435610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:50.039 [2024-07-11 02:48:12.435812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145714 ] 00:24:50.039 Using job config with 4 jobs 00:24:50.039 [2024-07-11 02:48:12.570541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.039 [2024-07-11 02:48:12.639602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.039 cpumask for '\''job0'\'' is too big 00:24:50.039 cpumask for '\''job1'\'' is too big 00:24:50.039 cpumask for '\''job2'\'' is too big 00:24:50.039 cpumask for '\''job3'\'' is too big 00:24:50.039 Running I/O for 2 seconds... 
00:24:50.040 00:24:50.040 Latency(us) 00:24:50.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.040 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc0 : 2.02 15956.63 15.58 0.00 0.00 16035.74 2949.12 24188.74 00:24:50.040 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc1 : 2.02 15945.98 15.57 0.00 0.00 16035.17 3470.43 24188.74 00:24:50.040 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc0 : 2.02 15934.98 15.56 0.00 0.00 16003.47 2874.65 21328.99 00:24:50.040 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc1 : 2.03 15923.75 15.55 0.00 0.00 16005.10 3381.06 21328.99 00:24:50.040 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc0 : 2.03 15976.96 15.60 0.00 0.00 15907.71 2889.54 19303.33 00:24:50.040 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc1 : 2.04 15966.32 15.59 0.00 0.00 15908.46 3381.06 19184.17 00:24:50.040 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc0 : 2.04 15956.00 15.58 0.00 0.00 15875.31 2859.75 19303.33 00:24:50.040 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:50.040 Malloc1 : 2.04 15945.57 15.57 0.00 0.00 15875.71 3381.06 19184.17 00:24:50.040 =================================================================================================================== 00:24:50.040 Total : 127606.18 124.62 0.00 0.00 15955.58 2859.75 24188.74' 00:24:50.040 02:48:15 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:50.040 02:48:15 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:50.040 02:48:15 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:50.040 02:48:15 -- bdevperf/test_config.sh@44 -- # cleanup 00:24:50.040 02:48:15 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:50.040 02:48:15 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:50.040 ************************************ 00:24:50.040 END TEST bdevperf_config 00:24:50.040 ************************************ 00:24:50.040 00:24:50.040 real 0m10.924s 00:24:50.040 user 0m9.548s 00:24:50.040 sys 0m0.821s 00:24:50.040 02:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.040 02:48:15 -- common/autotest_common.sh@10 -- # set +x 00:24:50.301 02:48:15 -- spdk/autotest.sh@198 -- # uname -s 00:24:50.301 02:48:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:50.301 02:48:15 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:50.301 02:48:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:50.301 02:48:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:50.301 02:48:15 -- common/autotest_common.sh@10 -- # set +x 00:24:50.301 ************************************ 00:24:50.301 START TEST reactor_set_interrupt 00:24:50.301 ************************************ 00:24:50.301 02:48:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:50.301 * Looking for test storage... 
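The create_job calls traced above build an INI-style job file one section at a time; the result is handed to bdevperf via -j alongside the --json bdev config. A minimal sketch of that pattern follows — the config path and job parameters are illustrative stand-ins, not the traced run's values:

#!/usr/bin/env bash
# Minimal sketch of the create_job pattern traced above: emit one INI
# section per job into a config that bdevperf later consumes via -j.
# The path and the job parameters below are illustrative only.
testconf=/tmp/bdevperf_test.conf
: > "$testconf"

create_job() {
    local job_section=$1 rw=$2 filename=$3
    {
        printf '[%s]\n' "$job_section"       # [global] or [jobN] header
        [[ -n $rw ]] && printf 'rw=%s\n' "$rw"
        [[ -n $filename ]] && printf 'filename=%s\n' "$filename"
        printf '\n'
    } >> "$testconf"
}

create_job global randread Malloc0   # [global] carries shared defaults
create_job job0
create_job job1
cat "$testconf"                      # inspect before running: bdevperf -j "$testconf"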
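get_num_jobs, also traced above, recovers the job count from bdevperf's startup notice with two chained greps; a self-contained sketch, using a one-line stand-in for the captured $bdevperf_output:

#!/usr/bin/env bash
# Sketch of the get_num_jobs extraction: the first grep isolates the
# "Using job config with N jobs" notice, the second reduces it to N.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

sample='... Using job config with 4 jobs ...'   # stand-in for the real output
[[ $(get_num_jobs "$sample") == 4 ]] && echo 'job count matches'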
00:24:50.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.301 02:48:15 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:50.301 02:48:15 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:50.301 02:48:15 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:50.301 02:48:15 -- common/autotest_common.sh@34 -- # set -e 00:24:50.301 02:48:15 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:50.301 02:48:15 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:50.301 02:48:15 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:50.301 02:48:15 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:50.301 02:48:15 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:50.301 02:48:15 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:24:50.301 02:48:15 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:24:50.301 02:48:15 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:24:50.301 02:48:15 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:24:50.301 02:48:15 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:24:50.301 02:48:15 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:24:50.301 02:48:15 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:24:50.301 02:48:15 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:24:50.301 02:48:15 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:24:50.301 02:48:15 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:24:50.301 02:48:15 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:50.301 02:48:15 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:24:50.301 02:48:15 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:24:50.301 02:48:15 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:24:50.301 02:48:15 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:24:50.301 02:48:15 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:24:50.301 02:48:15 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:24:50.301 02:48:15 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:24:50.301 02:48:15 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:24:50.301 02:48:15 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:24:50.301 02:48:15 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:50.301 02:48:15 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:24:50.301 02:48:15 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:24:50.301 02:48:15 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:24:50.301 02:48:15 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:24:50.301 02:48:15 -- common/build_config.sh@27 -- # 
CONFIG_FUSE=n 00:24:50.301 02:48:15 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:50.301 02:48:15 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:24:50.301 02:48:15 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:24:50.301 02:48:15 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:24:50.301 02:48:15 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:24:50.301 02:48:15 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:24:50.301 02:48:15 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:24:50.301 02:48:15 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:24:50.301 02:48:15 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:24:50.301 02:48:15 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:50.301 02:48:15 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:24:50.301 02:48:15 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:24:50.301 02:48:15 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:24:50.301 02:48:15 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:24:50.301 02:48:15 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:24:50.301 02:48:15 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:24:50.301 02:48:15 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:24:50.301 02:48:15 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:50.301 02:48:15 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:24:50.301 02:48:15 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:24:50.301 02:48:15 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:24:50.301 02:48:15 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:24:50.301 02:48:15 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:24:50.301 02:48:15 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:50.301 02:48:15 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:24:50.301 02:48:15 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:24:50.301 02:48:15 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:24:50.301 02:48:15 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:50.301 02:48:15 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:24:50.301 02:48:15 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:24:50.301 02:48:15 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:24:50.301 02:48:15 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:50.301 02:48:15 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:50.301 02:48:15 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:24:50.301 02:48:15 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:24:50.301 02:48:15 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:24:50.301 02:48:15 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:24:50.301 02:48:15 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:24:50.301 02:48:15 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:24:50.301 02:48:15 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:24:50.301 02:48:15 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:50.301 02:48:15 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:24:50.301 02:48:15 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:24:50.301 02:48:15 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:24:50.301 02:48:15 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:24:50.301 02:48:15 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:24:50.301 02:48:15 -- 
common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:50.301 02:48:15 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:24:50.301 02:48:15 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:24:50.301 02:48:15 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:24:50.301 02:48:15 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:24:50.301 02:48:15 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:50.301 02:48:15 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:50.301 02:48:15 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:50.301 02:48:15 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:50.301 02:48:15 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:50.301 02:48:15 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:50.301 02:48:15 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:50.301 02:48:15 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:50.301 02:48:15 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:50.301 02:48:15 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:50.301 02:48:15 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:50.301 02:48:15 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:50.301 02:48:15 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:50.301 02:48:15 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:50.301 02:48:15 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:50.301 02:48:15 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:50.301 02:48:15 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:50.301 #define SPDK_CONFIG_H 00:24:50.301 #define SPDK_CONFIG_APPS 1 00:24:50.301 #define SPDK_CONFIG_ARCH native 00:24:50.301 #define SPDK_CONFIG_ASAN 1 00:24:50.301 #undef SPDK_CONFIG_AVAHI 00:24:50.301 #undef SPDK_CONFIG_CET 00:24:50.301 #define SPDK_CONFIG_COVERAGE 1 00:24:50.301 #define SPDK_CONFIG_CROSS_PREFIX 00:24:50.301 #undef SPDK_CONFIG_CRYPTO 00:24:50.301 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:50.301 #undef SPDK_CONFIG_CUSTOMOCF 00:24:50.301 #undef SPDK_CONFIG_DAOS 00:24:50.301 #define SPDK_CONFIG_DAOS_DIR 00:24:50.301 #define SPDK_CONFIG_DEBUG 1 00:24:50.301 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:50.301 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:24:50.301 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:24:50.301 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:24:50.301 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:50.301 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:50.301 #define SPDK_CONFIG_EXAMPLES 1 00:24:50.301 #undef SPDK_CONFIG_FC 00:24:50.301 #define SPDK_CONFIG_FC_PATH 00:24:50.301 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:50.301 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:50.301 #undef SPDK_CONFIG_FUSE 00:24:50.301 #undef SPDK_CONFIG_FUZZER 00:24:50.301 #define SPDK_CONFIG_FUZZER_LIB 00:24:50.301 #undef SPDK_CONFIG_GOLANG 00:24:50.301 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:50.301 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:50.301 
#undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:50.301 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:50.301 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:50.301 #define SPDK_CONFIG_IDXD 1 00:24:50.302 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:50.302 #undef SPDK_CONFIG_IPSEC_MB 00:24:50.302 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:50.302 #define SPDK_CONFIG_ISAL 1 00:24:50.302 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:50.302 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:50.302 #define SPDK_CONFIG_LIBDIR 00:24:50.302 #undef SPDK_CONFIG_LTO 00:24:50.302 #define SPDK_CONFIG_MAX_LCORES 00:24:50.302 #define SPDK_CONFIG_NVME_CUSE 1 00:24:50.302 #undef SPDK_CONFIG_OCF 00:24:50.302 #define SPDK_CONFIG_OCF_PATH 00:24:50.302 #define SPDK_CONFIG_OPENSSL_PATH 00:24:50.302 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:50.302 #undef SPDK_CONFIG_PGO_USE 00:24:50.302 #define SPDK_CONFIG_PREFIX /usr/local 00:24:50.302 #define SPDK_CONFIG_RAID5F 1 00:24:50.302 #undef SPDK_CONFIG_RBD 00:24:50.302 #define SPDK_CONFIG_RDMA 1 00:24:50.302 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:50.302 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:50.302 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:50.302 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:50.302 #undef SPDK_CONFIG_SHARED 00:24:50.302 #undef SPDK_CONFIG_SMA 00:24:50.302 #define SPDK_CONFIG_TESTS 1 00:24:50.302 #undef SPDK_CONFIG_TSAN 00:24:50.302 #undef SPDK_CONFIG_UBLK 00:24:50.302 #define SPDK_CONFIG_UBSAN 1 00:24:50.302 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:50.302 #undef SPDK_CONFIG_URING 00:24:50.302 #define SPDK_CONFIG_URING_PATH 00:24:50.302 #undef SPDK_CONFIG_URING_ZNS 00:24:50.302 #undef SPDK_CONFIG_USDT 00:24:50.302 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:50.302 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:50.302 #undef SPDK_CONFIG_VFIO_USER 00:24:50.302 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:50.302 #define SPDK_CONFIG_VHOST 1 00:24:50.302 #define SPDK_CONFIG_VIRTIO 1 00:24:50.302 #undef SPDK_CONFIG_VTUNE 00:24:50.302 #define SPDK_CONFIG_VTUNE_DIR 00:24:50.302 #define SPDK_CONFIG_WERROR 1 00:24:50.302 #define SPDK_CONFIG_WPDK_DIR 00:24:50.302 #undef SPDK_CONFIG_XNVME 00:24:50.302 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:50.302 02:48:15 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:50.302 02:48:15 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.302 02:48:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.302 02:48:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.302 02:48:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.302 02:48:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:50.302 02:48:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:50.302 02:48:15 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:50.302 02:48:15 -- paths/export.sh@5 -- # export PATH 00:24:50.302 02:48:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:50.302 02:48:15 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:50.302 02:48:15 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:50.302 02:48:15 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:50.302 02:48:15 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:50.302 02:48:15 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:50.302 02:48:15 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:50.302 02:48:15 -- pm/common@16 -- # TEST_TAG=N/A 00:24:50.302 02:48:15 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:50.302 02:48:15 -- common/autotest_common.sh@52 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:50.302 02:48:15 -- common/autotest_common.sh@56 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:50.302 02:48:15 -- common/autotest_common.sh@58 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:50.302 02:48:15 -- common/autotest_common.sh@60 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:50.302 02:48:15 -- common/autotest_common.sh@62 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:50.302 02:48:15 -- common/autotest_common.sh@64 -- # : 00:24:50.302 02:48:15 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:50.302 02:48:15 -- common/autotest_common.sh@66 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:50.302 02:48:15 -- common/autotest_common.sh@68 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:50.302 02:48:15 -- common/autotest_common.sh@70 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:50.302 02:48:15 -- common/autotest_common.sh@72 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:50.302 02:48:15 -- common/autotest_common.sh@74 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:50.302 02:48:15 -- common/autotest_common.sh@76 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:50.302 02:48:15 -- common/autotest_common.sh@78 -- # : 0 00:24:50.302 02:48:15 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:50.302 02:48:15 -- common/autotest_common.sh@80 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:50.302 02:48:15 -- common/autotest_common.sh@82 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:50.302 02:48:15 -- common/autotest_common.sh@84 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:50.302 02:48:15 -- common/autotest_common.sh@86 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:50.302 02:48:15 -- common/autotest_common.sh@88 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:50.302 02:48:15 -- common/autotest_common.sh@90 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:50.302 02:48:15 -- common/autotest_common.sh@92 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:50.302 02:48:15 -- common/autotest_common.sh@94 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:50.302 02:48:15 -- common/autotest_common.sh@96 -- # : rdma 00:24:50.302 02:48:15 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:50.302 02:48:15 -- common/autotest_common.sh@98 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:50.302 02:48:15 -- common/autotest_common.sh@100 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:50.302 02:48:15 -- common/autotest_common.sh@102 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:50.302 02:48:15 -- common/autotest_common.sh@104 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:50.302 02:48:15 -- common/autotest_common.sh@106 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:50.302 02:48:15 -- common/autotest_common.sh@108 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:50.302 02:48:15 -- common/autotest_common.sh@110 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:50.302 02:48:15 -- common/autotest_common.sh@112 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:50.302 02:48:15 -- common/autotest_common.sh@114 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:50.302 02:48:15 -- common/autotest_common.sh@116 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:50.302 02:48:15 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:24:50.302 02:48:15 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:50.302 02:48:15 -- common/autotest_common.sh@120 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:50.302 02:48:15 -- common/autotest_common.sh@122 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:50.302 02:48:15 -- common/autotest_common.sh@124 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:50.302 02:48:15 -- 
common/autotest_common.sh@126 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:50.302 02:48:15 -- common/autotest_common.sh@128 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:50.302 02:48:15 -- common/autotest_common.sh@130 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:50.302 02:48:15 -- common/autotest_common.sh@132 -- # : v22.11.4 00:24:50.302 02:48:15 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:50.302 02:48:15 -- common/autotest_common.sh@134 -- # : true 00:24:50.302 02:48:15 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:50.302 02:48:15 -- common/autotest_common.sh@136 -- # : 1 00:24:50.302 02:48:15 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:50.302 02:48:15 -- common/autotest_common.sh@138 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:50.302 02:48:15 -- common/autotest_common.sh@140 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:50.302 02:48:15 -- common/autotest_common.sh@142 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:50.302 02:48:15 -- common/autotest_common.sh@144 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:50.302 02:48:15 -- common/autotest_common.sh@146 -- # : 0 00:24:50.302 02:48:15 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:50.302 02:48:15 -- common/autotest_common.sh@148 -- # : 00:24:50.303 02:48:15 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:50.303 02:48:15 -- common/autotest_common.sh@150 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:50.303 02:48:15 -- common/autotest_common.sh@152 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:50.303 02:48:15 -- common/autotest_common.sh@154 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:50.303 02:48:15 -- common/autotest_common.sh@156 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:50.303 02:48:15 -- common/autotest_common.sh@158 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:50.303 02:48:15 -- common/autotest_common.sh@160 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:50.303 02:48:15 -- common/autotest_common.sh@163 -- # : 00:24:50.303 02:48:15 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:50.303 02:48:15 -- common/autotest_common.sh@165 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:50.303 02:48:15 -- common/autotest_common.sh@167 -- # : 0 00:24:50.303 02:48:15 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:50.303 02:48:15 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@172 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:50.303 02:48:15 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:50.303 02:48:15 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:50.303 02:48:15 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:50.303 02:48:15 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:50.303 02:48:15 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:50.303 02:48:15 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:50.303 02:48:15 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:50.303 02:48:15 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:50.303 02:48:15 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:50.303 02:48:15 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:50.303 02:48:15 -- common/autotest_common.sh@196 -- # cat 00:24:50.303 02:48:15 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:50.303 02:48:15 -- 
common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:50.303 02:48:15 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:50.303 02:48:15 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:50.303 02:48:15 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:50.303 02:48:15 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:50.303 02:48:15 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:50.303 02:48:15 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:50.303 02:48:15 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:50.303 02:48:15 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:50.303 02:48:15 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:50.303 02:48:15 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:50.303 02:48:15 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:50.303 02:48:15 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:50.303 02:48:15 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:50.303 02:48:15 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:50.303 02:48:15 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:50.303 02:48:15 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:24:50.303 02:48:15 -- common/autotest_common.sh@249 -- # export valgrind= 00:24:50.303 02:48:15 -- common/autotest_common.sh@249 -- # valgrind= 00:24:50.303 02:48:15 -- common/autotest_common.sh@255 -- # uname -s 00:24:50.303 02:48:15 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:24:50.303 02:48:15 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:24:50.303 02:48:15 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:24:50.303 02:48:15 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:50.303 02:48:15 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:50.303 02:48:15 -- common/autotest_common.sh@265 -- # MAKE=make 00:24:50.303 02:48:15 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:24:50.303 02:48:15 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:24:50.303 02:48:15 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:24:50.303 02:48:15 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:50.303 02:48:15 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:24:50.303 02:48:15 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:24:50.303 02:48:15 -- common/autotest_common.sh@309 -- # [[ -z 145788 ]] 00:24:50.303 02:48:15 -- common/autotest_common.sh@309 -- # kill -0 145788 00:24:50.303 02:48:15 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:24:50.303 02:48:15 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:24:50.303 02:48:15 -- 
common/autotest_common.sh@321 -- # local requested_size=2147483648 00:24:50.303 02:48:15 -- common/autotest_common.sh@322 -- # local mount target_dir 00:24:50.303 02:48:15 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:24:50.303 02:48:15 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:24:50.303 02:48:15 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:24:50.303 02:48:15 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:24:50.303 02:48:15 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ByP7rr 00:24:50.303 02:48:15 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:50.303 02:48:15 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:24:50.303 02:48:15 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:24:50.303 02:48:15 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ByP7rr/tests/interrupt /tmp/spdk.ByP7rr 00:24:50.303 02:48:15 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:24:50.303 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.303 02:48:15 -- common/autotest_common.sh@318 -- # df -T 00:24:50.303 02:48:15 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728 00:24:50.303 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:50.303 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:24:50.303 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:24:50.303 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=9148592128 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:24:50.303 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=11451424768 00:24:50.303 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6271299584 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:24:50.303 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:24:50.303 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.303 02:48:15 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:50.303 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 
-- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=96893214720 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=2809565184 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:24:50.304 02:48:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:50.304 02:48:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:50.304 02:48:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:50.304 02:48:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:50.304 02:48:15 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:24:50.304 * Looking for test storage... 
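The long ": N / export SPDK_TEST_..." run traced earlier in this section is autotest_common.sh defaulting its test knobs. The xtrace form (": 1" followed by "export RUN_NIGHTLY") is consistent with the standard parameter-expansion idiom sketched here; the exact expansion form is an inference from the trace, and the three knobs shown use the defaults visible above:

#!/usr/bin/env bash
# Sketch of the flag-defaulting idiom behind the ": N / export" run:
# keep any caller-provided value, otherwise install the default.
: "${RUN_NIGHTLY:=1}"
export RUN_NIGHTLY
: "${SPDK_TEST_UNITTEST:=1}"
export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NATIVE_DPDK:=v22.11.4}"
export SPDK_TEST_NATIVE_DPDK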
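The df/read -r bookkeeping above is set_test_storage: every mount reported by df -T is tabulated into per-mount arrays, then candidate directories are checked against the requested size (2214592512 bytes in this run). A condensed sketch of the same walk — the candidate list is illustrative, while the column handling and the mount-resolving awk mirror the trace:

#!/usr/bin/env bash
# Sketch of the set_test_storage pattern: tabulate `df -T` into per-mount
# arrays, then accept a candidate directory whose filesystem has room.
requested_size=2214592512   # bytes, as in the traced run
declare -A fss sizes avails

# df -T columns: source type 1K-blocks used avail use% mountpoint
while read -r source fs size used avail _ mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

for target_dir in /tmp "$HOME"; do   # candidate list is illustrative
    # Same awk the trace uses to map a directory to its mount point.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        echo "would place test storage under $target_dir (${fss[$mount]}, $target_space bytes free)"
        break
    fi
done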
00:24:50.304 02:48:15 -- common/autotest_common.sh@359 -- # local target_space new_size 00:24:50.304 02:48:15 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:24:50.304 02:48:15 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.304 02:48:15 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:50.304 02:48:15 -- common/autotest_common.sh@363 -- # mount=/ 00:24:50.304 02:48:15 -- common/autotest_common.sh@365 -- # target_space=9148592128 00:24:50.304 02:48:15 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:24:50.304 02:48:15 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:24:50.304 02:48:15 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:24:50.304 02:48:15 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:24:50.304 02:48:15 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:24:50.304 02:48:15 -- common/autotest_common.sh@372 -- # new_size=13666017280 00:24:50.304 02:48:15 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:50.304 02:48:15 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.304 02:48:15 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.304 02:48:15 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:50.304 02:48:15 -- common/autotest_common.sh@380 -- # return 0 00:24:50.304 02:48:15 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:24:50.304 02:48:15 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:24:50.304 02:48:15 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:50.304 02:48:15 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:50.304 02:48:15 -- common/autotest_common.sh@1672 -- # true 00:24:50.304 02:48:15 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:24:50.304 02:48:15 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:50.304 02:48:15 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:50.304 02:48:15 -- common/autotest_common.sh@27 -- # exec 00:24:50.304 02:48:15 -- common/autotest_common.sh@29 -- # exec 00:24:50.304 02:48:15 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:50.304 02:48:15 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:50.304 02:48:15 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:50.304 02:48:15 -- common/autotest_common.sh@18 -- # set -x 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:50.304 02:48:15 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:50.304 02:48:15 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:50.304 02:48:15 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=145828 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 145828 /var/tmp/spdk.sock 00:24:50.304 02:48:15 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:50.304 02:48:15 -- common/autotest_common.sh@819 -- # '[' -z 145828 ']' 00:24:50.304 02:48:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.304 02:48:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:50.304 02:48:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.304 02:48:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:50.304 02:48:15 -- common/autotest_common.sh@10 -- # set +x 00:24:50.304 [2024-07-11 02:48:15.358915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
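waitforlisten above parks the test until the freshly forked interrupt_tgt both survives startup and answers on its RPC socket. A minimal sketch of that polling loop, assuming the default /var/tmp/spdk.sock address; the retry budget mirrors the traced local max_retries=100, but the rpc_get_methods probe is an assumption about how liveness is checked:

#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the target process is
# alive and its RPC socket accepts a request, or give up after 100 tries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        # rpc_get_methods is a harmless probe; rpc.py path is the repo's.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
               rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}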
00:24:50.304 [2024-07-11 02:48:15.359188] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145828 ] 00:24:50.562 [2024-07-11 02:48:15.513719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:50.562 [2024-07-11 02:48:15.588438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.562 [2024-07-11 02:48:15.588575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.562 [2024-07-11 02:48:15.588578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.820 [2024-07-11 02:48:15.667717] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:51.388 02:48:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:51.388 02:48:16 -- common/autotest_common.sh@852 -- # return 0 00:24:51.388 02:48:16 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:51.388 02:48:16 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:51.646 Malloc0 00:24:51.646 Malloc1 00:24:51.646 Malloc2 00:24:51.646 02:48:16 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:51.646 02:48:16 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:51.646 02:48:16 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:51.646 02:48:16 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:51.646 5000+0 records in 00:24:51.646 5000+0 records out 00:24:51.646 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0251941 s, 406 MB/s 00:24:51.646 02:48:16 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:51.905 AIO0 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 145828 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 145828 without_thd 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=145828 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:51.905 02:48:16 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:51.905 02:48:16 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:52.163 02:48:17 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:52.163 02:48:17 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:52.163 02:48:17 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:52.422 spdk_thread ids are 1 on reactor0. 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145828 0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145828 0 idle 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145828 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.30 reactor_0' 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@48 -- # echo 145828 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.30 reactor_0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:52.422 02:48:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145828 1 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145828 1 idle 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:52.422 
02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:52.422 02:48:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145831 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.00 reactor_1' 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@48 -- # echo 145831 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.00 reactor_1 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:52.680 02:48:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:52.680 02:48:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145828 2 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145828 2 idle 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:52.680 02:48:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145832 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.00 reactor_2' 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@48 -- # echo 145832 root 20 0 20.1t 56592 26100 S 0.0 0.5 0:00.00 reactor_2 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:52.938 02:48:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:52.938 02:48:17 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:52.938 02:48:17 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:52.938 
02:48:17 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:53.198 [2024-07-11 02:48:18.052082] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:53.198 02:48:18 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:53.457 [2024-07-11 02:48:18.307968] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:53.457 [2024-07-11 02:48:18.308495] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:53.457 02:48:18 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:53.457 [2024-07-11 02:48:18.540022] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:53.457 [2024-07-11 02:48:18.540428] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:53.715 02:48:18 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:53.715 02:48:18 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145828 0 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145828 0 busy 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145828 root 20 0 20.1t 56744 26100 R 99.9 0.5 0:00.71 reactor_0' 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@48 -- # echo 145828 root 20 0 20.1t 56744 26100 R 99.9 0.5 0:00.71 reactor_0 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:53.715 02:48:18 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:53.715 02:48:18 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145828 2 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145828 2 busy 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:53.715 02:48:18 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:53.715 02:48:18 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145832 root 20 0 20.1t 56744 26100 R 99.9 0.5 0:00.34 reactor_2' 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@48 -- # echo 145832 root 20 0 20.1t 56744 26100 R 99.9 0.5 0:00.34 reactor_2 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:53.974 02:48:18 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:53.974 02:48:18 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:54.242 [2024-07-11 02:48:19.116015] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:54.242 [2024-07-11 02:48:19.116396] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:54.242 02:48:19 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:54.242 02:48:19 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 145828 2 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145828 2 idle 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145832 root 20 0 20.1t 56792 26100 S 0.0 0.5 0:00.57 reactor_2' 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@48 -- # echo 145832 root 20 0 20.1t 56792 26100 S 0.0 0.5 0:00.57 reactor_2 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@49 -- 
# cpu_rate=0 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:54.242 02:48:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:54.242 02:48:19 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:54.513 [2024-07-11 02:48:19.535903] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:54.513 [2024-07-11 02:48:19.536292] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:54.513 02:48:19 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:54.513 02:48:19 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:54.513 02:48:19 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:54.771 [2024-07-11 02:48:19.736389] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:54.771 02:48:19 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 145828 0 00:24:54.771 02:48:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145828 0 idle 00:24:54.771 02:48:19 -- interrupt/interrupt_common.sh@33 -- # local pid=145828 00:24:54.771 02:48:19 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:54.771 02:48:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145828 -w 256 00:24:54.772 02:48:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145828 root 20 0 20.1t 56892 26100 S 0.0 0.5 0:01.53 reactor_0' 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@48 -- # echo 145828 root 20 0 20.1t 56892 26100 S 0.0 0.5 0:01.53 reactor_0 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:55.030 02:48:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:55.030 02:48:19 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:55.030 02:48:19 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:55.030 02:48:19 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:55.030 02:48:19 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 145828 00:24:55.030 02:48:19 -- 
common/autotest_common.sh@926 -- # '[' -z 145828 ']' 00:24:55.030 02:48:19 -- common/autotest_common.sh@930 -- # kill -0 145828 00:24:55.030 02:48:19 -- common/autotest_common.sh@931 -- # uname 00:24:55.030 02:48:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:55.030 02:48:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145828 00:24:55.030 02:48:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:55.030 killing process with pid 145828 00:24:55.030 02:48:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:55.030 02:48:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145828' 00:24:55.030 02:48:19 -- common/autotest_common.sh@945 -- # kill 145828 00:24:55.030 02:48:19 -- common/autotest_common.sh@950 -- # wait 145828 00:24:55.288 02:48:20 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:55.288 02:48:20 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=145969 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.288 02:48:20 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 145969 /var/tmp/spdk.sock 00:24:55.288 02:48:20 -- common/autotest_common.sh@819 -- # '[' -z 145969 ']' 00:24:55.288 02:48:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.288 02:48:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.288 02:48:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.288 02:48:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.288 02:48:20 -- common/autotest_common.sh@10 -- # set +x 00:24:55.288 [2024-07-11 02:48:20.232123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:55.288 [2024-07-11 02:48:20.232341] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145969 ] 00:24:55.546 [2024-07-11 02:48:20.380976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.546 [2024-07-11 02:48:20.437772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.546 [2024-07-11 02:48:20.437927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.546 [2024-07-11 02:48:20.437929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.546 [2024-07-11 02:48:20.514537] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
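For reference: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness polling the freshly started interrupt_tgt until its RPC server answers. A minimal sketch of that kind of wait loop in bash, assuming only the public rpc.py client; the function name wait_for_rpc_socket is illustrative, not the actual autotest_common.sh helper.

# Illustrative sketch: poll until an SPDK target serves RPCs on a UNIX socket.
wait_for_rpc_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # rpc_get_methods is a cheap, always-available RPC; the client fails
        # until the target is actually listening on $sock.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
wait_for_rpc_socket /var/tmp/spdk.sock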
00:24:56.479 02:48:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.480 02:48:21 -- common/autotest_common.sh@852 -- # return 0 00:24:56.480 02:48:21 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:56.480 02:48:21 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.480 Malloc0 00:24:56.480 Malloc1 00:24:56.480 Malloc2 00:24:56.480 02:48:21 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:56.480 02:48:21 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:56.480 02:48:21 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:56.480 02:48:21 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:56.480 5000+0 records in 00:24:56.480 5000+0 records out 00:24:56.480 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0235872 s, 434 MB/s 00:24:56.480 02:48:21 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:56.738 AIO0 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 145969 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 145969 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=145969 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:56.738 02:48:21 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:56.738 02:48:21 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:56.995 02:48:22 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:56.995 02:48:22 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:56.995 02:48:22 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:57.254 spdk_thread ids are 1 on reactor0. 
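For reference: the reactor_get_thread_ids calls traced above resolve a reactor's cpumask to SPDK thread ids by filtering thread_get_stats output; the jq filter below is the one from the trace, with the "0x" prefix stripped from the mask before comparison. A condensed, pipe-based restatement (the trace runs rpc.py and jq as separate steps, which is equivalent):

# Thread-id lookup by reactor cpumask, as exercised above.
cpumask=1   # reactor 0's mask 0x1 with the "0x" prefix dropped
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
    | jq --arg reactor_cpumask "$cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
# Prints 1 here (app_thread). With cpumask=4 it prints nothing, matching the
# empty echo in the trace: no thread is pinned to reactor 2 at this point.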
00:24:57.254 02:48:22 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:57.254 02:48:22 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:57.254 02:48:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:57.254 02:48:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145969 0 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145969 0 idle 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:57.254 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145969 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.27 reactor_0' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # echo 145969 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.27 reactor_0 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:57.513 02:48:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:57.513 02:48:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145969 1 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145969 1 idle 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145972 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.00 reactor_1' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # echo 145972 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.00 reactor_1 00:24:57.513 02:48:22 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:57.513 02:48:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:57.513 02:48:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145969 2 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145969 2 idle 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:57.513 02:48:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145973 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.00 reactor_2' 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@48 -- # echo 145973 root 20 0 20.1t 57840 26048 S 0.0 0.5 0:00.00 reactor_2 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:57.772 02:48:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:57.772 02:48:22 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:57.772 02:48:22 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:58.031 [2024-07-11 02:48:22.998853] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:58.032 [2024-07-11 02:48:22.999115] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
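For reference: reactor_set_interrupt_mode is the RPC under test here; with -d it switches a reactor from interrupt to poll mode, and without -d it switches it back, the interrupt_plugin supplying the method. The exact calls driven by the script, condensed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Disable interrupt mode (reactor runs in poll mode) -- the "-d" calls above.
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
# Re-enable interrupt mode -- the calls without "-d" that follow.
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0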
00:24:58.032 [2024-07-11 02:48:22.999353] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:58.032 02:48:23 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:58.290 [2024-07-11 02:48:23.242718] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:58.290 [2024-07-11 02:48:23.243160] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:58.290 02:48:23 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:58.290 02:48:23 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145969 0 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145969 0 busy 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:58.290 02:48:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145969 root 20 0 20.1t 57940 26048 R 99.9 0.5 0:00.69 reactor_0' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # echo 145969 root 20 0 20.1t 57940 26048 R 99.9 0.5 0:00.69 reactor_0 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:58.548 02:48:23 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:58.548 02:48:23 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145969 2 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145969 2 busy 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
145973 root 20 0 20.1t 57940 26048 R 99.9 0.5 0:00.34 reactor_2' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # echo 145973 root 20 0 20.1t 57940 26048 R 99.9 0.5 0:00.34 reactor_2 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:58.548 02:48:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:58.548 02:48:23 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:58.807 [2024-07-11 02:48:23.782847] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:58.807 [2024-07-11 02:48:23.783123] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:58.807 02:48:23 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:58.807 02:48:23 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 145969 2 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145969 2 idle 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:58.807 02:48:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145973 root 20 0 20.1t 57988 26048 S 0.0 0.5 0:00.54 reactor_2' 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@48 -- # echo 145973 root 20 0 20.1t 57988 26048 S 0.0 0.5 0:00.54 reactor_2 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:59.066 02:48:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:59.066 02:48:23 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:59.066 [2024-07-11 02:48:24.142839] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:24:59.066 [2024-07-11 02:48:24.143223] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:24:59.066 [2024-07-11 02:48:24.143268] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:59.066 02:48:24 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:59.066 02:48:24 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 145969 0 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145969 0 idle 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@33 -- # local pid=145969 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145969 -w 256 00:24:59.066 02:48:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145969 root 20 0 20.1t 58040 26048 S 6.7 0.5 0:01.42 reactor_0' 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@48 -- # echo 145969 root 20 0 20.1t 58040 26048 S 6.7 0.5 0:01.42 reactor_0 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:24:59.325 02:48:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:59.325 02:48:24 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:59.325 02:48:24 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:59.325 02:48:24 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:59.325 02:48:24 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 145969 00:24:59.325 02:48:24 -- common/autotest_common.sh@926 -- # '[' -z 145969 ']' 00:24:59.325 02:48:24 -- common/autotest_common.sh@930 -- # kill -0 145969 00:24:59.325 02:48:24 -- common/autotest_common.sh@931 -- # uname 00:24:59.325 02:48:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.325 02:48:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145969 00:24:59.325 killing process with pid 145969 00:24:59.325 02:48:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:59.325 02:48:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:59.325 02:48:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145969' 00:24:59.325 02:48:24 -- common/autotest_common.sh@945 -- # kill 145969 00:24:59.325 02:48:24 -- common/autotest_common.sh@950 -- # wait 145969 00:24:59.584 02:48:24 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:59.584 02:48:24 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:59.584 00:24:59.584 real 0m9.482s 00:24:59.584 user 0m9.253s 00:24:59.584 sys 0m1.348s 00:24:59.584 02:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.584 02:48:24 -- common/autotest_common.sh@10 -- # set +x 00:24:59.584 ************************************ 00:24:59.584 END TEST reactor_set_interrupt 00:24:59.584 ************************************ 00:24:59.584 02:48:24 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:59.584 02:48:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:59.584 02:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:59.584 02:48:24 -- common/autotest_common.sh@10 -- # set +x 00:24:59.844 ************************************ 00:24:59.844 START TEST reap_unregistered_poller 00:24:59.844 ************************************ 00:24:59.844 02:48:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:59.844 * Looking for test storage... 00:24:59.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.844 02:48:24 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
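For reference: every reactor_is_busy / reactor_is_idle check in this log (and the ones reap_unregistered_poller is about to repeat) takes a single threaded top snapshot and reads the %CPU column; the thresholds visible in the trace are busy at >= 70% and idle at <= 30%. A condensed sketch of that parsing, using the grep/sed/awk pipeline from the trace; the integer truncation step is an assumption consistent with the cpu_rate values shown.

pid=145969 idx=0
row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')   # $9 is %CPU
cpu=${cpu%.*}    # assumed truncation: 99.9 -> 99, 0.0 -> 0, 6.7 -> 6, as in the trace
if   (( cpu >= 70 )); then echo busy   # trace asserts ! [[ $cpu -lt 70 ]]
elif (( cpu <= 30 )); then echo idle   # trace asserts ! [[ $cpu -gt 30 ]]
fi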
00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:59.844 02:48:24 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:59.844 02:48:24 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:59.844 02:48:24 -- common/autotest_common.sh@34 -- # set -e 00:24:59.844 02:48:24 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:59.844 02:48:24 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:59.844 02:48:24 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:59.844 02:48:24 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:59.844 02:48:24 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:59.844 02:48:24 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:24:59.844 02:48:24 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:24:59.844 02:48:24 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:24:59.844 02:48:24 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:24:59.844 02:48:24 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:24:59.844 02:48:24 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:24:59.844 02:48:24 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:24:59.844 02:48:24 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:24:59.844 02:48:24 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:24:59.844 02:48:24 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:24:59.844 02:48:24 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:59.844 02:48:24 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:24:59.844 02:48:24 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:24:59.844 02:48:24 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:24:59.845 02:48:24 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:24:59.845 02:48:24 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:24:59.845 02:48:24 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:24:59.845 02:48:24 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:24:59.845 02:48:24 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:24:59.845 02:48:24 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:24:59.845 02:48:24 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:59.845 02:48:24 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:24:59.845 02:48:24 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:24:59.845 02:48:24 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:24:59.845 02:48:24 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:24:59.845 02:48:24 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:24:59.845 02:48:24 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:59.845 02:48:24 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:24:59.845 02:48:24 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:24:59.845 02:48:24 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:24:59.845 02:48:24 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:24:59.845 02:48:24 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:24:59.845 02:48:24 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:24:59.845 02:48:24 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:24:59.845 02:48:24 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:24:59.845 02:48:24 -- 
common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:59.845 02:48:24 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:24:59.845 02:48:24 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:24:59.845 02:48:24 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:24:59.845 02:48:24 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:24:59.845 02:48:24 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:24:59.845 02:48:24 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:24:59.845 02:48:24 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:24:59.845 02:48:24 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:59.845 02:48:24 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:24:59.845 02:48:24 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:24:59.845 02:48:24 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:24:59.845 02:48:24 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:24:59.845 02:48:24 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:24:59.845 02:48:24 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:59.845 02:48:24 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:24:59.845 02:48:24 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:24:59.845 02:48:24 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:24:59.845 02:48:24 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:59.845 02:48:24 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:24:59.845 02:48:24 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:24:59.845 02:48:24 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:24:59.845 02:48:24 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:59.845 02:48:24 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:59.845 02:48:24 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:24:59.845 02:48:24 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:24:59.845 02:48:24 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:24:59.845 02:48:24 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:24:59.845 02:48:24 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:24:59.845 02:48:24 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:24:59.845 02:48:24 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:24:59.845 02:48:24 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:59.845 02:48:24 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:24:59.845 02:48:24 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:24:59.845 02:48:24 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:24:59.845 02:48:24 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:24:59.845 02:48:24 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:24:59.845 02:48:24 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:59.845 02:48:24 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:24:59.845 02:48:24 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:24:59.845 02:48:24 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:24:59.845 02:48:24 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:24:59.845 02:48:24 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:59.845 02:48:24 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:59.845 02:48:24 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:59.845 02:48:24 -- 
common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:59.845 02:48:24 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:59.845 02:48:24 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:59.845 02:48:24 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:59.845 02:48:24 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:59.845 02:48:24 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:59.845 02:48:24 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:59.845 02:48:24 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:59.845 02:48:24 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:59.845 02:48:24 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:59.845 02:48:24 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:59.845 02:48:24 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:59.845 02:48:24 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:59.845 02:48:24 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:59.845 #define SPDK_CONFIG_H 00:24:59.845 #define SPDK_CONFIG_APPS 1 00:24:59.845 #define SPDK_CONFIG_ARCH native 00:24:59.845 #define SPDK_CONFIG_ASAN 1 00:24:59.845 #undef SPDK_CONFIG_AVAHI 00:24:59.845 #undef SPDK_CONFIG_CET 00:24:59.845 #define SPDK_CONFIG_COVERAGE 1 00:24:59.845 #define SPDK_CONFIG_CROSS_PREFIX 00:24:59.845 #undef SPDK_CONFIG_CRYPTO 00:24:59.845 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:59.845 #undef SPDK_CONFIG_CUSTOMOCF 00:24:59.845 #undef SPDK_CONFIG_DAOS 00:24:59.845 #define SPDK_CONFIG_DAOS_DIR 00:24:59.845 #define SPDK_CONFIG_DEBUG 1 00:24:59.845 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:59.845 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:24:59.845 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:24:59.845 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:24:59.845 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:59.845 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:59.845 #define SPDK_CONFIG_EXAMPLES 1 00:24:59.845 #undef SPDK_CONFIG_FC 00:24:59.845 #define SPDK_CONFIG_FC_PATH 00:24:59.845 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:59.845 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:59.845 #undef SPDK_CONFIG_FUSE 00:24:59.845 #undef SPDK_CONFIG_FUZZER 00:24:59.845 #define SPDK_CONFIG_FUZZER_LIB 00:24:59.845 #undef SPDK_CONFIG_GOLANG 00:24:59.845 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:59.845 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:59.845 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:59.845 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:59.845 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:59.845 #define SPDK_CONFIG_IDXD 1 00:24:59.845 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:59.845 #undef SPDK_CONFIG_IPSEC_MB 00:24:59.845 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:59.845 #define SPDK_CONFIG_ISAL 1 00:24:59.845 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:59.845 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:59.845 #define SPDK_CONFIG_LIBDIR 00:24:59.845 #undef SPDK_CONFIG_LTO 00:24:59.845 #define SPDK_CONFIG_MAX_LCORES 00:24:59.845 #define SPDK_CONFIG_NVME_CUSE 1 00:24:59.845 #undef SPDK_CONFIG_OCF 00:24:59.845 #define SPDK_CONFIG_OCF_PATH 00:24:59.845 
#define SPDK_CONFIG_OPENSSL_PATH 00:24:59.845 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:59.845 #undef SPDK_CONFIG_PGO_USE 00:24:59.845 #define SPDK_CONFIG_PREFIX /usr/local 00:24:59.845 #define SPDK_CONFIG_RAID5F 1 00:24:59.845 #undef SPDK_CONFIG_RBD 00:24:59.845 #define SPDK_CONFIG_RDMA 1 00:24:59.845 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:59.845 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:59.845 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:59.845 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:59.845 #undef SPDK_CONFIG_SHARED 00:24:59.845 #undef SPDK_CONFIG_SMA 00:24:59.845 #define SPDK_CONFIG_TESTS 1 00:24:59.845 #undef SPDK_CONFIG_TSAN 00:24:59.845 #undef SPDK_CONFIG_UBLK 00:24:59.845 #define SPDK_CONFIG_UBSAN 1 00:24:59.845 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:59.845 #undef SPDK_CONFIG_URING 00:24:59.845 #define SPDK_CONFIG_URING_PATH 00:24:59.845 #undef SPDK_CONFIG_URING_ZNS 00:24:59.845 #undef SPDK_CONFIG_USDT 00:24:59.845 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:59.845 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:59.845 #undef SPDK_CONFIG_VFIO_USER 00:24:59.845 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:59.845 #define SPDK_CONFIG_VHOST 1 00:24:59.845 #define SPDK_CONFIG_VIRTIO 1 00:24:59.845 #undef SPDK_CONFIG_VTUNE 00:24:59.845 #define SPDK_CONFIG_VTUNE_DIR 00:24:59.845 #define SPDK_CONFIG_WERROR 1 00:24:59.845 #define SPDK_CONFIG_WPDK_DIR 00:24:59.845 #undef SPDK_CONFIG_XNVME 00:24:59.845 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:59.845 02:48:24 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:59.845 02:48:24 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.845 02:48:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.845 02:48:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.845 02:48:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.845 02:48:24 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.845 02:48:24 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.845 02:48:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.845 02:48:24 -- paths/export.sh@5 -- # export PATH 00:24:59.846 02:48:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.846 02:48:24 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:59.846 02:48:24 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:59.846 02:48:24 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:59.846 02:48:24 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:59.846 02:48:24 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:59.846 02:48:24 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:59.846 02:48:24 -- pm/common@16 -- # TEST_TAG=N/A 00:24:59.846 02:48:24 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:59.846 02:48:24 -- common/autotest_common.sh@52 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:59.846 02:48:24 -- common/autotest_common.sh@56 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:59.846 02:48:24 -- common/autotest_common.sh@58 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:59.846 02:48:24 -- common/autotest_common.sh@60 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:59.846 02:48:24 -- common/autotest_common.sh@62 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:59.846 02:48:24 -- common/autotest_common.sh@64 -- # : 00:24:59.846 02:48:24 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:59.846 02:48:24 -- common/autotest_common.sh@66 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:59.846 02:48:24 -- common/autotest_common.sh@68 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:59.846 02:48:24 -- common/autotest_common.sh@70 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:59.846 02:48:24 -- common/autotest_common.sh@72 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:59.846 02:48:24 -- common/autotest_common.sh@74 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:59.846 02:48:24 -- common/autotest_common.sh@76 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:59.846 02:48:24 -- common/autotest_common.sh@78 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:59.846 02:48:24 -- common/autotest_common.sh@80 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:59.846 02:48:24 -- common/autotest_common.sh@82 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:59.846 02:48:24 -- common/autotest_common.sh@84 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:59.846 02:48:24 -- 
common/autotest_common.sh@86 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:59.846 02:48:24 -- common/autotest_common.sh@88 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:59.846 02:48:24 -- common/autotest_common.sh@90 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:59.846 02:48:24 -- common/autotest_common.sh@92 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:59.846 02:48:24 -- common/autotest_common.sh@94 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:59.846 02:48:24 -- common/autotest_common.sh@96 -- # : rdma 00:24:59.846 02:48:24 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:59.846 02:48:24 -- common/autotest_common.sh@98 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:59.846 02:48:24 -- common/autotest_common.sh@100 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:59.846 02:48:24 -- common/autotest_common.sh@102 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:59.846 02:48:24 -- common/autotest_common.sh@104 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:59.846 02:48:24 -- common/autotest_common.sh@106 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:59.846 02:48:24 -- common/autotest_common.sh@108 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:59.846 02:48:24 -- common/autotest_common.sh@110 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:59.846 02:48:24 -- common/autotest_common.sh@112 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:59.846 02:48:24 -- common/autotest_common.sh@114 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:59.846 02:48:24 -- common/autotest_common.sh@116 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:59.846 02:48:24 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:24:59.846 02:48:24 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:59.846 02:48:24 -- common/autotest_common.sh@120 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:59.846 02:48:24 -- common/autotest_common.sh@122 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:59.846 02:48:24 -- common/autotest_common.sh@124 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:59.846 02:48:24 -- common/autotest_common.sh@126 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:59.846 02:48:24 -- common/autotest_common.sh@128 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:59.846 02:48:24 -- common/autotest_common.sh@130 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:59.846 02:48:24 -- common/autotest_common.sh@132 -- # : v22.11.4 00:24:59.846 02:48:24 -- common/autotest_common.sh@133 -- # 
export SPDK_TEST_NATIVE_DPDK 00:24:59.846 02:48:24 -- common/autotest_common.sh@134 -- # : true 00:24:59.846 02:48:24 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:59.846 02:48:24 -- common/autotest_common.sh@136 -- # : 1 00:24:59.846 02:48:24 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:59.846 02:48:24 -- common/autotest_common.sh@138 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:59.846 02:48:24 -- common/autotest_common.sh@140 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:59.846 02:48:24 -- common/autotest_common.sh@142 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:59.846 02:48:24 -- common/autotest_common.sh@144 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:59.846 02:48:24 -- common/autotest_common.sh@146 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:59.846 02:48:24 -- common/autotest_common.sh@148 -- # : 00:24:59.846 02:48:24 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:59.846 02:48:24 -- common/autotest_common.sh@150 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:59.846 02:48:24 -- common/autotest_common.sh@152 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:59.846 02:48:24 -- common/autotest_common.sh@154 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:59.846 02:48:24 -- common/autotest_common.sh@156 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:59.846 02:48:24 -- common/autotest_common.sh@158 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:59.846 02:48:24 -- common/autotest_common.sh@160 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:59.846 02:48:24 -- common/autotest_common.sh@163 -- # : 00:24:59.846 02:48:24 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:59.846 02:48:24 -- common/autotest_common.sh@165 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:59.846 02:48:24 -- common/autotest_common.sh@167 -- # : 0 00:24:59.846 02:48:24 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:59.846 02:48:24 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:59.846 02:48:24 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:59.846 02:48:24 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:59.846 02:48:24 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:59.846 02:48:24 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:59.846 02:48:24 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:59.846 02:48:24 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:59.846 02:48:24 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:59.846 02:48:24 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:59.846 02:48:24 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:59.846 02:48:24 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:59.846 02:48:24 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:59.846 02:48:24 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:59.847 02:48:24 -- common/autotest_common.sh@196 -- # cat 00:24:59.847 02:48:24 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:59.847 02:48:24 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:59.847 02:48:24 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:59.847 02:48:24 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:59.847 02:48:24 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:59.847 02:48:24 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:59.847 02:48:24 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:59.847 02:48:24 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:59.847 02:48:24 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:59.847 02:48:24 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:59.847 02:48:24 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:59.847 02:48:24 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:59.847 02:48:24 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:59.847 02:48:24 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:59.847 02:48:24 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:59.847 02:48:24 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:59.847 02:48:24 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:59.847 02:48:24 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:59.847 02:48:24 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:59.847 02:48:24 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:24:59.847 02:48:24 -- common/autotest_common.sh@249 -- # export valgrind= 00:24:59.847 02:48:24 -- common/autotest_common.sh@249 -- # valgrind= 00:24:59.847 02:48:24 -- common/autotest_common.sh@255 -- # uname -s 00:24:59.847 02:48:24 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:24:59.847 02:48:24 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:24:59.847 02:48:24 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:24:59.847 02:48:24 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:24:59.847 02:48:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@265 -- # MAKE=make 00:24:59.847 02:48:24 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:24:59.847 02:48:24 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:24:59.847 02:48:24 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:24:59.847 02:48:24 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:59.847 02:48:24 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:24:59.847 02:48:24 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:24:59.847 02:48:24 -- common/autotest_common.sh@309 -- # [[ -z 146155 ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@309 -- # kill -0 146155 00:24:59.847 02:48:24 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:24:59.847 02:48:24 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:24:59.847 02:48:24 -- common/autotest_common.sh@322 -- # local mount target_dir 00:24:59.847 02:48:24 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:24:59.847 02:48:24 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:24:59.847 02:48:24 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:24:59.847 02:48:24 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:24:59.847 02:48:24 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.vntH2Z 00:24:59.847 02:48:24 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:59.847 02:48:24 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:24:59.847 02:48:24 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.vntH2Z/tests/interrupt /tmp/spdk.vntH2Z 00:24:59.847 02:48:24 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@318 -- # df -T 00:24:59.847 02:48:24 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=9148555264 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=11451461632 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6271299584 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 
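(The mount enumeration continues below; a sketch of the whole pattern follows here.) The trace shows autotest_common.sh's set_test_storage probing every mounted filesystem before picking a scratch directory: `df -T` output is read into associative arrays keyed by mount point, the mount backing the candidate directory is looked up, and the run only proceeds if enough space is free there. A minimal sketch of that pattern, assuming byte-sized df output (`-B1` below is an assumption; the trace shows only `df -T`) and a hypothetical helper name:

    probe_test_storage() {   # hypothetical name; the real logic is set_test_storage
        local requested_size=$1 target_dir=$2
        local source fs size use avail _ mount target_space new_size
        local -A mounts fss sizes avails uses
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$size
            avails["$mount"]=$avail
            uses["$mount"]=$use
        done < <(df -T -B1 | grep -v Filesystem)
        # Find which mount backs the candidate directory, then check free space.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        ((target_space == 0 || target_space < requested_size)) && return 1
        # Refuse to fill the filesystem past ~95%, as in the trace's final check.
        new_size=$((uses[$mount] + requested_size))
        ((new_size * 100 / sizes[$mount] > 95)) && return 1
        return 0
    }
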
00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=96448147456 
00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=3254632448 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:24:59.847 02:48:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:59.847 02:48:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:59.847 02:48:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:59.847 02:48:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:59.847 02:48:24 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:24:59.847 * Looking for test storage... 00:24:59.847 02:48:24 -- common/autotest_common.sh@359 -- # local target_space new_size 00:24:59.847 02:48:24 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:24:59.847 02:48:24 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.847 02:48:24 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:59.848 02:48:24 -- common/autotest_common.sh@363 -- # mount=/ 00:24:59.848 02:48:24 -- common/autotest_common.sh@365 -- # target_space=9148555264 00:24:59.848 02:48:24 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:24:59.848 02:48:24 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:24:59.848 02:48:24 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:24:59.848 02:48:24 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:24:59.848 02:48:24 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:24:59.848 02:48:24 -- common/autotest_common.sh@372 -- # new_size=13666054144 00:24:59.848 02:48:24 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:59.848 02:48:24 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.848 02:48:24 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.848 02:48:24 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:59.848 02:48:24 -- common/autotest_common.sh@380 -- # return 0 00:24:59.848 02:48:24 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:24:59.848 02:48:24 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:24:59.848 02:48:24 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:59.848 02:48:24 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} 
-- \$ ' 00:24:59.848 02:48:24 -- common/autotest_common.sh@1672 -- # true 00:24:59.848 02:48:24 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:24:59.848 02:48:24 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:59.848 02:48:24 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:59.848 02:48:24 -- common/autotest_common.sh@27 -- # exec 00:24:59.848 02:48:24 -- common/autotest_common.sh@29 -- # exec 00:24:59.848 02:48:24 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:59.848 02:48:24 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:24:59.848 02:48:24 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:59.848 02:48:24 -- common/autotest_common.sh@18 -- # set -x 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:59.848 02:48:24 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:59.848 02:48:24 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:59.848 02:48:24 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=146195 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 146195 /var/tmp/spdk.sock 00:24:59.848 02:48:24 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:59.848 02:48:24 -- common/autotest_common.sh@819 -- # '[' -z 146195 ']' 00:24:59.848 02:48:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.848 02:48:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:59.848 02:48:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.848 02:48:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:59.848 02:48:24 -- common/autotest_common.sh@10 -- # set +x 00:24:59.848 [2024-07-11 02:48:24.877422] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
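The records above show the interrupt-target startup handshake: interrupt_tgt is launched in the background with a three-core mask and an RPC socket path, a cleanup trap is armed, and waitforlisten blocks until the UNIX-domain socket answers. A minimal sketch of that start-and-wait flow, assuming the standard rpc.py client; the retry loop is illustrative (the real waitforlisten is more thorough, e.g. it also detects the process dying early):

    start_intr_tgt_sketch() {   # hypothetical condensation of start_intr_tgt
        local rpc_addr=/var/tmp/spdk.sock cpu_mask=0x07
        "$rootdir"/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
        intr_tgt_pid=$!
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
        # Poll the RPC socket until the target answers (illustrative retry policy).
        local i
        for ((i = 0; i < 100; i++)); do
            if "$rootdir"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # never came up; the EXIT trap will reap it
    }
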
00:24:59.848 [2024-07-11 02:48:24.877742] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146195 ] 00:25:00.106 [2024-07-11 02:48:25.033246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.106 [2024-07-11 02:48:25.095621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.106 [2024-07-11 02:48:25.095732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.106 [2024-07-11 02:48:25.096141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.106 [2024-07-11 02:48:25.172185] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:01.052 02:48:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:01.052 02:48:25 -- common/autotest_common.sh@852 -- # return 0 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:25:01.052 02:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:01.052 02:48:25 -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 02:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:25:01.052 "name": "app_thread", 00:25:01.052 "id": 1, 00:25:01.052 "active_pollers": [], 00:25:01.052 "timed_pollers": [ 00:25:01.052 { 00:25:01.052 "name": "rpc_subsystem_poll", 00:25:01.052 "id": 1, 00:25:01.052 "state": "waiting", 00:25:01.052 "run_count": 0, 00:25:01.052 "busy_count": 0, 00:25:01.052 "period_ticks": 8800000 00:25:01.052 } 00:25:01.052 ], 00:25:01.052 "paused_pollers": [] 00:25:01.052 }' 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:25:01.052 02:48:25 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:25:01.052 02:48:25 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:01.052 02:48:25 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:01.052 02:48:25 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:01.052 5000+0 records in 00:25:01.052 5000+0 records out 00:25:01.052 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0258927 s, 395 MB/s 00:25:01.052 02:48:25 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:01.311 AIO0 00:25:01.311 02:48:26 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:01.568 02:48:26 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:25:01.568 02:48:26 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:25:01.568 02:48:26 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:25:01.568 02:48:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:01.568 02:48:26 -- common/autotest_common.sh@10 -- # set +x 00:25:01.568 02:48:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:25:01.826 "name": "app_thread", 00:25:01.826 "id": 1, 00:25:01.826 "active_pollers": [], 00:25:01.826 "timed_pollers": [ 00:25:01.826 { 00:25:01.826 "name": "rpc_subsystem_poll", 00:25:01.826 "id": 1, 00:25:01.826 "state": "waiting", 00:25:01.826 "run_count": 0, 00:25:01.826 "busy_count": 0, 00:25:01.826 "period_ticks": 8800000 00:25:01.826 } 00:25:01.826 ], 00:25:01.826 "paused_pollers": [] 00:25:01.826 }' 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:01.826 02:48:26 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 146195 00:25:01.826 02:48:26 -- common/autotest_common.sh@926 -- # '[' -z 146195 ']' 00:25:01.826 02:48:26 -- common/autotest_common.sh@930 -- # kill -0 146195 00:25:01.826 02:48:26 -- common/autotest_common.sh@931 -- # uname 00:25:01.826 02:48:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:01.826 02:48:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146195 00:25:01.826 02:48:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:01.826 killing process with pid 146195 00:25:01.826 02:48:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:01.826 02:48:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146195' 00:25:01.826 02:48:26 -- common/autotest_common.sh@945 -- # kill 146195 00:25:01.826 02:48:26 -- common/autotest_common.sh@950 -- # wait 146195 00:25:02.084 02:48:27 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:25:02.084 02:48:27 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:02.084 00:25:02.084 real 0m2.396s 00:25:02.084 user 0m1.616s 00:25:02.084 sys 0m0.404s 00:25:02.084 02:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.084 ************************************ 00:25:02.084 02:48:27 -- common/autotest_common.sh@10 -- # set +x 00:25:02.084 END TEST reap_unregistered_poller 00:25:02.084 ************************************ 00:25:02.084 02:48:27 -- spdk/autotest.sh@204 -- # uname -s 00:25:02.084 02:48:27 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:25:02.084 02:48:27 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:25:02.084 02:48:27 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:25:02.084 02:48:27 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:02.084 02:48:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.084 02:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.084 02:48:27 -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.084 ************************************ 00:25:02.084 START TEST spdk_dd 00:25:02.084 ************************************ 00:25:02.084 02:48:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:02.343 * Looking for test storage... 00:25:02.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:02.343 02:48:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.343 02:48:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.343 02:48:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.343 02:48:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.343 02:48:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:02.343 02:48:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:02.343 02:48:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:02.343 02:48:27 -- paths/export.sh@5 -- # export PATH 00:25:02.343 02:48:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:02.343 02:48:27 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:02.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:02.601 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:03.575 02:48:28 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:25:03.575 02:48:28 -- dd/dd.sh@11 -- # nvme_in_userspace 00:25:03.575 02:48:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:03.575 02:48:28 -- scripts/common.sh@312 -- # local nvmes 00:25:03.575 02:48:28 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:03.575 02:48:28 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:03.575 02:48:28 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:03.575 02:48:28 -- scripts/common.sh@297 -- # local bdf= 00:25:03.575 02:48:28 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:25:03.575 02:48:28 -- scripts/common.sh@232 -- # local class 00:25:03.575 
02:48:28 -- scripts/common.sh@233 -- # local subclass 00:25:03.575 02:48:28 -- scripts/common.sh@234 -- # local progif 00:25:03.575 02:48:28 -- scripts/common.sh@235 -- # printf %02x 1 00:25:03.575 02:48:28 -- scripts/common.sh@235 -- # class=01 00:25:03.575 02:48:28 -- scripts/common.sh@236 -- # printf %02x 8 00:25:03.575 02:48:28 -- scripts/common.sh@236 -- # subclass=08 00:25:03.575 02:48:28 -- scripts/common.sh@237 -- # printf %02x 2 00:25:03.575 02:48:28 -- scripts/common.sh@237 -- # progif=02 00:25:03.575 02:48:28 -- scripts/common.sh@239 -- # hash lspci 00:25:03.575 02:48:28 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:03.575 02:48:28 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:03.575 02:48:28 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:03.575 02:48:28 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:03.575 02:48:28 -- scripts/common.sh@244 -- # tr -d '"' 00:25:03.575 02:48:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:03.575 02:48:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:03.575 02:48:28 -- scripts/common.sh@15 -- # local i 00:25:03.575 02:48:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:03.575 02:48:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:03.575 02:48:28 -- scripts/common.sh@24 -- # return 0 00:25:03.575 02:48:28 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:03.575 02:48:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:03.575 02:48:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:03.575 02:48:28 -- scripts/common.sh@322 -- # uname -s 00:25:03.575 02:48:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:03.575 02:48:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:03.575 02:48:28 -- scripts/common.sh@327 -- # (( 1 )) 00:25:03.575 02:48:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:25:03.575 02:48:28 -- dd/dd.sh@13 -- # check_liburing 00:25:03.575 02:48:28 -- dd/common.sh@139 -- # local lib so 00:25:03.575 02:48:28 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:25:03.575 02:48:28 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:25:03.575 02:48:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:03.575 02:48:28 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:25:03.575 02:48:28 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:03.575 02:48:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:03.575 02:48:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.575 02:48:28 -- common/autotest_common.sh@10 -- # set +x 00:25:03.575 ************************************ 00:25:03.575 START TEST spdk_dd_basic_rw 00:25:03.575 ************************************ 00:25:03.575 02:48:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:03.575 * Looking for test storage... 
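The long run of `[[ ... == liburing.so.* ]]` checks above is check_liburing from dd/common.sh: it asks the dynamic loader which shared objects spdk_dd pulls in and records whether liburing is among them, which then gates the uring subtests via `(( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))`. The read pattern and the glob are taken straight from the trace; only the function wrapper below is a sketch:

    check_liburing_sketch() {
        local lib so _ liburing_in_use=0
        # Loader-trace lines look like "libaio.so.1 => /lib/... (0x...)";
        # read -r splits them into name, "=>", path, and load address.
        while read -r lib _ so _; do
            [[ $lib == liburing.so.* ]] && liburing_in_use=1
        done < <(LD_TRACE_LOADED_OBJECTS=1 "$SPDK_BIN_DIR"/spdk_dd)
        echo "$liburing_in_use"
    }
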
00:25:03.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:03.575 02:48:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.575 02:48:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.575 02:48:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.575 02:48:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.575 02:48:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.575 02:48:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.575 02:48:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.575 02:48:28 -- paths/export.sh@5 -- # export PATH 00:25:03.576 02:48:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.576 02:48:28 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:25:03.576 02:48:28 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:25:03.576 02:48:28 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:25:03.576 02:48:28 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:25:03.576 02:48:28 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:25:03.576 02:48:28 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:25:03.576 02:48:28 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:03.576 02:48:28 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:03.576 02:48:28 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:03.576 02:48:28 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:25:03.576 02:48:28 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:25:03.576 02:48:28 -- dd/common.sh@126 -- # mapfile -t id 00:25:03.576 02:48:28 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:25:03.836 02:48:28 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 94 Data Units Written: 7 Host Read Commands: 2111 Host Write Commands: 115 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:25:03.836 02:48:28 -- dd/common.sh@130 -- # lbaf=04 00:25:03.836 02:48:28 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
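The two [[ =~ ]] matches traced here are how dd/common.sh derives the drive's native block size from the identify dump above: the first pulls out the current LBA format index, the second pulls that format's data size. A minimal bash sketch of the same extraction (the identify_output variable is a stand-in for the captured controller dump, not the harness's actual variable name):

    re_lbaf='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $identify_output =~ $re_lbaf ]] && lbaf=${BASH_REMATCH[1]}        # "04" in this run
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $identify_output =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}   # 4096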
00:25:03.836 02:48:28 -- dd/common.sh@132 -- # lbaf=4096
00:25:03.836 02:48:28 -- dd/common.sh@134 -- # echo 4096
00:25:03.836 02:48:28 -- dd/basic_rw.sh@93 -- # native_bs=4096
00:25:03.836 02:48:28 -- dd/basic_rw.sh@96 -- # :
00:25:03.836 02:48:28 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:03.836 02:48:28 -- dd/basic_rw.sh@96 -- # gen_conf
00:25:03.836 02:48:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:25:03.836 02:48:28 -- dd/common.sh@31 -- # xtrace_disable
00:25:03.836 02:48:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:03.836 02:48:28 -- common/autotest_common.sh@10 -- # set +x
00:25:03.836 02:48:28 -- common/autotest_common.sh@10 -- # set +x
00:25:03.836 ************************************
00:25:03.836 START TEST dd_bs_lt_native_bs
00:25:03.836 ************************************ 00:25:03.836 02:48:28 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:03.836 02:48:28 -- common/autotest_common.sh@640 -- # local es=0 00:25:03.836 02:48:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:03.836 02:48:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.836 02:48:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.836 02:48:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.836 02:48:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.836 02:48:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.836 02:48:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.836 02:48:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.836 02:48:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:03.836 02:48:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:04.094 { 00:25:04.094 "subsystems": [ 00:25:04.094 { 00:25:04.094 "subsystem": "bdev", 00:25:04.094 "config": [ 00:25:04.094 { 00:25:04.094 "params": { 00:25:04.094 "trtype": "pcie", 00:25:04.094 "traddr": "0000:00:06.0", 00:25:04.094 "name": "Nvme0" 00:25:04.094 }, 00:25:04.094 "method": "bdev_nvme_attach_controller" 00:25:04.094 }, 00:25:04.094 { 00:25:04.094 "method": "bdev_wait_for_examine" 00:25:04.094 } 00:25:04.094 ] 00:25:04.094 } 00:25:04.094 ] 00:25:04.094 } 00:25:04.094 [2024-07-11 02:48:28.967444] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
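The JSON block above is what gen_conf feeds to spdk_dd over a file descriptor (--json /dev/fd/61): it attaches the QEMU NVMe controller at PCI address 0000:00:06.0 as bdev Nvme0 and waits for bdev examine to finish. A hedged standalone equivalent, writing the same config to a regular file instead of a process-substitution fd (the /tmp path is illustrative):

    cat > /tmp/nvme0_conf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /tmp/nvme0_conf.json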
00:25:04.094 [2024-07-11 02:48:28.967829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146492 ] 00:25:04.094 [2024-07-11 02:48:29.110083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.352 [2024-07-11 02:48:29.201841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.352 [2024-07-11 02:48:29.364644] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:25:04.352 [2024-07-11 02:48:29.364790] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:04.609 [2024-07-11 02:48:29.497059] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:04.609 02:48:29 -- common/autotest_common.sh@643 -- # es=234 00:25:04.609 02:48:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.609 02:48:29 -- common/autotest_common.sh@652 -- # es=106 00:25:04.609 02:48:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:04.609 02:48:29 -- common/autotest_common.sh@660 -- # es=1 00:25:04.609 02:48:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.609 00:25:04.609 real 0m0.721s 00:25:04.609 user 0m0.487s 00:25:04.609 sys 0m0.204s 00:25:04.609 02:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.609 ************************************ 00:25:04.609 END TEST dd_bs_lt_native_bs 00:25:04.609 ************************************ 00:25:04.609 02:48:29 -- common/autotest_common.sh@10 -- # set +x 00:25:04.609 02:48:29 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:25:04.609 02:48:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:04.609 02:48:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.609 02:48:29 -- common/autotest_common.sh@10 -- # set +x 00:25:04.609 ************************************ 00:25:04.609 START TEST dd_rw 00:25:04.609 ************************************ 00:25:04.609 02:48:29 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:25:04.609 02:48:29 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:25:04.609 02:48:29 -- dd/basic_rw.sh@12 -- # local count size 00:25:04.609 02:48:29 -- dd/basic_rw.sh@13 -- # local qds bss 00:25:04.609 02:48:29 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:25:04.609 02:48:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:04.609 02:48:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:04.609 02:48:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:04.609 02:48:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:04.609 02:48:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:04.609 02:48:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:04.609 02:48:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:04.609 02:48:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:04.609 02:48:29 -- dd/basic_rw.sh@23 -- # count=15 00:25:04.609 02:48:29 -- dd/basic_rw.sh@24 -- # count=15 00:25:04.609 02:48:29 -- dd/basic_rw.sh@25 -- # size=61440 00:25:04.609 02:48:29 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:04.609 02:48:29 -- dd/common.sh@98 -- # xtrace_disable 00:25:04.609 02:48:29 -- common/autotest_common.sh@10 -- # set +x 00:25:05.175 02:48:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
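dd_bs_lt_native_bs passes precisely because spdk_dd exits non-zero: with a 4096-byte native block, --bs=2048 is rejected with the *ERROR* line above, and the NOT wrapper inverts that failure into a test success (the es= lines are the harness normalizing the exit status to 1). A simplified sketch of that inversion (SPDK's actual NOT helper in autotest_common.sh does more bookkeeping):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61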
00:25:05.175 02:48:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:05.175 02:48:30 -- dd/common.sh@31 -- # xtrace_disable 00:25:05.175 02:48:30 -- common/autotest_common.sh@10 -- # set +x 00:25:05.434 [2024-07-11 02:48:30.276757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:05.434 [2024-07-11 02:48:30.277030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146531 ] 00:25:05.434 { 00:25:05.434 "subsystems": [ 00:25:05.434 { 00:25:05.434 "subsystem": "bdev", 00:25:05.434 "config": [ 00:25:05.434 { 00:25:05.434 "params": { 00:25:05.434 "trtype": "pcie", 00:25:05.434 "traddr": "0000:00:06.0", 00:25:05.434 "name": "Nvme0" 00:25:05.434 }, 00:25:05.434 "method": "bdev_nvme_attach_controller" 00:25:05.434 }, 00:25:05.434 { 00:25:05.434 "method": "bdev_wait_for_examine" 00:25:05.434 } 00:25:05.434 ] 00:25:05.434 } 00:25:05.434 ] 00:25:05.434 } 00:25:05.434 [2024-07-11 02:48:30.424828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.434 [2024-07-11 02:48:30.495047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.950  Copying: 60/60 [kB] (average 19 MBps) 00:25:05.950 00:25:05.950 02:48:30 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:05.950 02:48:30 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:05.950 02:48:30 -- dd/common.sh@31 -- # xtrace_disable 00:25:05.950 02:48:30 -- common/autotest_common.sh@10 -- # set +x 00:25:05.950 [2024-07-11 02:48:31.013582] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
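The dd_rw loops traced at the start of this test build a three-by-two matrix of runs: block sizes are the native size left-shifted by 0, 1 and 2, each driven at queue depths 1 and 64. The transfer sizes reported in the Copying lines follow directly from count times bs; a worked check of the numbers that appear in this test:

    native_bs=4096
    bss=()
    for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384
    qds=(1 64)
    echo $((15 * 4096))   # 61440 -> the size=61440 rounds, "Copying: 60/60 [kB]"
    echo $((7 * 8192))    # 57344 -> the size=57344 rounds, "Copying: 56/56 [kB]"
    echo $((3 * 16384))   # 49152 -> the size=49152 rounds, "Copying: 48/48 [kB]"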
00:25:05.950 [2024-07-11 02:48:31.013871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146553 ] 00:25:05.950 { 00:25:05.950 "subsystems": [ 00:25:05.950 { 00:25:05.950 "subsystem": "bdev", 00:25:05.950 "config": [ 00:25:05.950 { 00:25:05.950 "params": { 00:25:05.950 "trtype": "pcie", 00:25:05.950 "traddr": "0000:00:06.0", 00:25:05.950 "name": "Nvme0" 00:25:05.950 }, 00:25:05.950 "method": "bdev_nvme_attach_controller" 00:25:05.950 }, 00:25:05.950 { 00:25:05.950 "method": "bdev_wait_for_examine" 00:25:05.950 } 00:25:05.950 ] 00:25:05.950 } 00:25:05.950 ] 00:25:05.950 } 00:25:06.209 [2024-07-11 02:48:31.168306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.209 [2024-07-11 02:48:31.248851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.727  Copying: 60/60 [kB] (average 19 MBps) 00:25:06.727 00:25:06.727 02:48:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:06.727 02:48:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:06.727 02:48:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:06.727 02:48:31 -- dd/common.sh@11 -- # local nvme_ref= 00:25:06.727 02:48:31 -- dd/common.sh@12 -- # local size=61440 00:25:06.727 02:48:31 -- dd/common.sh@14 -- # local bs=1048576 00:25:06.727 02:48:31 -- dd/common.sh@15 -- # local count=1 00:25:06.727 02:48:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:06.727 02:48:31 -- dd/common.sh@18 -- # gen_conf 00:25:06.727 02:48:31 -- dd/common.sh@31 -- # xtrace_disable 00:25:06.727 02:48:31 -- common/autotest_common.sh@10 -- # set +x 00:25:06.727 [2024-07-11 02:48:31.759495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:06.727 [2024-07-11 02:48:31.759713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146562 ] 00:25:06.727 { 00:25:06.727 "subsystems": [ 00:25:06.727 { 00:25:06.727 "subsystem": "bdev", 00:25:06.727 "config": [ 00:25:06.727 { 00:25:06.727 "params": { 00:25:06.727 "trtype": "pcie", 00:25:06.727 "traddr": "0000:00:06.0", 00:25:06.727 "name": "Nvme0" 00:25:06.727 }, 00:25:06.727 "method": "bdev_nvme_attach_controller" 00:25:06.727 }, 00:25:06.727 { 00:25:06.727 "method": "bdev_wait_for_examine" 00:25:06.727 } 00:25:06.727 ] 00:25:06.727 } 00:25:06.727 ] 00:25:06.727 } 00:25:06.986 [2024-07-11 02:48:31.898622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.986 [2024-07-11 02:48:31.991335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.503  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:07.503 00:25:07.503 02:48:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:07.503 02:48:32 -- dd/basic_rw.sh@23 -- # count=15 00:25:07.503 02:48:32 -- dd/basic_rw.sh@24 -- # count=15 00:25:07.503 02:48:32 -- dd/basic_rw.sh@25 -- # size=61440 00:25:07.503 02:48:32 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:07.503 02:48:32 -- dd/common.sh@98 -- # xtrace_disable 00:25:07.503 02:48:32 -- common/autotest_common.sh@10 -- # set +x 00:25:08.070 02:48:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:08.070 02:48:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:08.070 02:48:33 -- dd/common.sh@31 -- # xtrace_disable 00:25:08.070 02:48:33 -- common/autotest_common.sh@10 -- # set +x 00:25:08.070 [2024-07-11 02:48:33.065871] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:08.070 [2024-07-11 02:48:33.066107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146614 ] 00:25:08.070 { 00:25:08.070 "subsystems": [ 00:25:08.070 { 00:25:08.070 "subsystem": "bdev", 00:25:08.070 "config": [ 00:25:08.070 { 00:25:08.070 "params": { 00:25:08.070 "trtype": "pcie", 00:25:08.070 "traddr": "0000:00:06.0", 00:25:08.070 "name": "Nvme0" 00:25:08.070 }, 00:25:08.070 "method": "bdev_nvme_attach_controller" 00:25:08.070 }, 00:25:08.071 { 00:25:08.071 "method": "bdev_wait_for_examine" 00:25:08.071 } 00:25:08.071 ] 00:25:08.071 } 00:25:08.071 ] 00:25:08.071 } 00:25:08.329 [2024-07-11 02:48:33.213908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.329 [2024-07-11 02:48:33.267754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.896  Copying: 60/60 [kB] (average 58 MBps) 00:25:08.896 00:25:08.896 02:48:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:08.896 02:48:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:08.896 02:48:33 -- dd/common.sh@31 -- # xtrace_disable 00:25:08.896 02:48:33 -- common/autotest_common.sh@10 -- # set +x 00:25:08.896 [2024-07-11 02:48:33.750827] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
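Each cell of that matrix follows the same verify cycle visible in the traces above: write dd.dump0 into the bdev, read it back into dd.dump1, byte-compare the two files with diff -q, then clear_nvme scrubs the region by copying a single 1 MiB block of zeros over it before the next run. A condensed sketch of the cycle, with paths shortened and spdk_dd standing for the full build/bin path used in this log:

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"                  # write
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$conf" # read back
    diff -q dd.dump0 dd.dump1                                                                # verify
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"                # clear_nvme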
00:25:08.896 [2024-07-11 02:48:33.751092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146623 ] 00:25:08.896 { 00:25:08.896 "subsystems": [ 00:25:08.896 { 00:25:08.896 "subsystem": "bdev", 00:25:08.896 "config": [ 00:25:08.896 { 00:25:08.896 "params": { 00:25:08.896 "trtype": "pcie", 00:25:08.896 "traddr": "0000:00:06.0", 00:25:08.896 "name": "Nvme0" 00:25:08.896 }, 00:25:08.896 "method": "bdev_nvme_attach_controller" 00:25:08.896 }, 00:25:08.896 { 00:25:08.896 "method": "bdev_wait_for_examine" 00:25:08.896 } 00:25:08.896 ] 00:25:08.896 } 00:25:08.896 ] 00:25:08.896 } 00:25:08.896 [2024-07-11 02:48:33.899507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.896 [2024-07-11 02:48:33.958216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.413  Copying: 60/60 [kB] (average 58 MBps) 00:25:09.413 00:25:09.413 02:48:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:09.413 02:48:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:09.413 02:48:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:09.413 02:48:34 -- dd/common.sh@11 -- # local nvme_ref= 00:25:09.413 02:48:34 -- dd/common.sh@12 -- # local size=61440 00:25:09.413 02:48:34 -- dd/common.sh@14 -- # local bs=1048576 00:25:09.413 02:48:34 -- dd/common.sh@15 -- # local count=1 00:25:09.413 02:48:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:09.413 02:48:34 -- dd/common.sh@18 -- # gen_conf 00:25:09.413 02:48:34 -- dd/common.sh@31 -- # xtrace_disable 00:25:09.413 02:48:34 -- common/autotest_common.sh@10 -- # set +x 00:25:09.413 [2024-07-11 02:48:34.482354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:09.413 [2024-07-11 02:48:34.482769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146644 ] 00:25:09.413 { 00:25:09.413 "subsystems": [ 00:25:09.413 { 00:25:09.413 "subsystem": "bdev", 00:25:09.413 "config": [ 00:25:09.413 { 00:25:09.413 "params": { 00:25:09.413 "trtype": "pcie", 00:25:09.413 "traddr": "0000:00:06.0", 00:25:09.413 "name": "Nvme0" 00:25:09.413 }, 00:25:09.413 "method": "bdev_nvme_attach_controller" 00:25:09.413 }, 00:25:09.413 { 00:25:09.413 "method": "bdev_wait_for_examine" 00:25:09.413 } 00:25:09.413 ] 00:25:09.413 } 00:25:09.413 ] 00:25:09.413 } 00:25:09.672 [2024-07-11 02:48:34.629772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.672 [2024-07-11 02:48:34.711523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.189  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:10.189 00:25:10.189 02:48:35 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:10.189 02:48:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:10.189 02:48:35 -- dd/basic_rw.sh@23 -- # count=7 00:25:10.189 02:48:35 -- dd/basic_rw.sh@24 -- # count=7 00:25:10.189 02:48:35 -- dd/basic_rw.sh@25 -- # size=57344 00:25:10.189 02:48:35 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:10.189 02:48:35 -- dd/common.sh@98 -- # xtrace_disable 00:25:10.189 02:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:10.756 02:48:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:10.756 02:48:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:10.756 02:48:35 -- dd/common.sh@31 -- # xtrace_disable 00:25:10.756 02:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:10.756 [2024-07-11 02:48:35.762985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:10.756 [2024-07-11 02:48:35.763735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146664 ] 00:25:10.756 { 00:25:10.756 "subsystems": [ 00:25:10.756 { 00:25:10.756 "subsystem": "bdev", 00:25:10.756 "config": [ 00:25:10.756 { 00:25:10.756 "params": { 00:25:10.756 "trtype": "pcie", 00:25:10.756 "traddr": "0000:00:06.0", 00:25:10.756 "name": "Nvme0" 00:25:10.756 }, 00:25:10.756 "method": "bdev_nvme_attach_controller" 00:25:10.756 }, 00:25:10.756 { 00:25:10.756 "method": "bdev_wait_for_examine" 00:25:10.756 } 00:25:10.756 ] 00:25:10.756 } 00:25:10.756 ] 00:25:10.756 } 00:25:11.015 [2024-07-11 02:48:35.909473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.015 [2024-07-11 02:48:35.979338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.532  Copying: 56/56 [kB] (average 27 MBps) 00:25:11.532 00:25:11.532 02:48:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:11.532 02:48:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:11.532 02:48:36 -- dd/common.sh@31 -- # xtrace_disable 00:25:11.532 02:48:36 -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 [2024-07-11 02:48:36.456112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:11.532 [2024-07-11 02:48:36.456347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146679 ] 00:25:11.532 { 00:25:11.532 "subsystems": [ 00:25:11.532 { 00:25:11.532 "subsystem": "bdev", 00:25:11.532 "config": [ 00:25:11.532 { 00:25:11.532 "params": { 00:25:11.532 "trtype": "pcie", 00:25:11.532 "traddr": "0000:00:06.0", 00:25:11.532 "name": "Nvme0" 00:25:11.532 }, 00:25:11.532 "method": "bdev_nvme_attach_controller" 00:25:11.532 }, 00:25:11.532 { 00:25:11.532 "method": "bdev_wait_for_examine" 00:25:11.532 } 00:25:11.532 ] 00:25:11.532 } 00:25:11.532 ] 00:25:11.532 } 00:25:11.532 [2024-07-11 02:48:36.602120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.791 [2024-07-11 02:48:36.656653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.050  Copying: 56/56 [kB] (average 27 MBps) 00:25:12.050 00:25:12.050 02:48:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:12.050 02:48:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:12.050 02:48:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:12.050 02:48:37 -- dd/common.sh@11 -- # local nvme_ref= 00:25:12.050 02:48:37 -- dd/common.sh@12 -- # local size=57344 00:25:12.050 02:48:37 -- dd/common.sh@14 -- # local bs=1048576 00:25:12.050 02:48:37 -- dd/common.sh@15 -- # local count=1 00:25:12.050 02:48:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:12.050 02:48:37 -- dd/common.sh@18 -- # gen_conf 00:25:12.050 02:48:37 -- dd/common.sh@31 -- # xtrace_disable 00:25:12.050 02:48:37 -- common/autotest_common.sh@10 -- # set +x 00:25:12.307 [2024-07-11 02:48:37.172783] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:25:12.307 [2024-07-11 02:48:37.173051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146700 ] 00:25:12.307 { 00:25:12.307 "subsystems": [ 00:25:12.307 { 00:25:12.308 "subsystem": "bdev", 00:25:12.308 "config": [ 00:25:12.308 { 00:25:12.308 "params": { 00:25:12.308 "trtype": "pcie", 00:25:12.308 "traddr": "0000:00:06.0", 00:25:12.308 "name": "Nvme0" 00:25:12.308 }, 00:25:12.308 "method": "bdev_nvme_attach_controller" 00:25:12.308 }, 00:25:12.308 { 00:25:12.308 "method": "bdev_wait_for_examine" 00:25:12.308 } 00:25:12.308 ] 00:25:12.308 } 00:25:12.308 ] 00:25:12.308 } 00:25:12.308 [2024-07-11 02:48:37.322026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.308 [2024-07-11 02:48:37.386534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.823  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:12.823 00:25:12.823 02:48:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:12.823 02:48:37 -- dd/basic_rw.sh@23 -- # count=7 00:25:12.823 02:48:37 -- dd/basic_rw.sh@24 -- # count=7 00:25:12.823 02:48:37 -- dd/basic_rw.sh@25 -- # size=57344 00:25:12.823 02:48:37 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:12.823 02:48:37 -- dd/common.sh@98 -- # xtrace_disable 00:25:12.823 02:48:37 -- common/autotest_common.sh@10 -- # set +x 00:25:13.389 02:48:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:13.389 02:48:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:13.389 02:48:38 -- dd/common.sh@31 -- # xtrace_disable 00:25:13.389 02:48:38 -- common/autotest_common.sh@10 -- # set +x 00:25:13.389 [2024-07-11 02:48:38.441097] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:13.389 [2024-07-11 02:48:38.441356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146720 ] 00:25:13.389 { 00:25:13.389 "subsystems": [ 00:25:13.389 { 00:25:13.390 "subsystem": "bdev", 00:25:13.390 "config": [ 00:25:13.390 { 00:25:13.390 "params": { 00:25:13.390 "trtype": "pcie", 00:25:13.390 "traddr": "0000:00:06.0", 00:25:13.390 "name": "Nvme0" 00:25:13.390 }, 00:25:13.390 "method": "bdev_nvme_attach_controller" 00:25:13.390 }, 00:25:13.390 { 00:25:13.390 "method": "bdev_wait_for_examine" 00:25:13.390 } 00:25:13.390 ] 00:25:13.390 } 00:25:13.390 ] 00:25:13.390 } 00:25:13.647 [2024-07-11 02:48:38.589194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.647 [2024-07-11 02:48:38.679805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.162  Copying: 56/56 [kB] (average 54 MBps) 00:25:14.162 00:25:14.163 02:48:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:14.163 02:48:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:14.163 02:48:39 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.163 02:48:39 -- common/autotest_common.sh@10 -- # set +x 00:25:14.163 [2024-07-11 02:48:39.189963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:14.163 [2024-07-11 02:48:39.190220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146739 ] 00:25:14.163 { 00:25:14.163 "subsystems": [ 00:25:14.163 { 00:25:14.163 "subsystem": "bdev", 00:25:14.163 "config": [ 00:25:14.163 { 00:25:14.163 "params": { 00:25:14.163 "trtype": "pcie", 00:25:14.163 "traddr": "0000:00:06.0", 00:25:14.163 "name": "Nvme0" 00:25:14.163 }, 00:25:14.163 "method": "bdev_nvme_attach_controller" 00:25:14.163 }, 00:25:14.163 { 00:25:14.163 "method": "bdev_wait_for_examine" 00:25:14.163 } 00:25:14.163 ] 00:25:14.163 } 00:25:14.163 ] 00:25:14.163 } 00:25:14.421 [2024-07-11 02:48:39.330548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.421 [2024-07-11 02:48:39.425916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.941  Copying: 56/56 [kB] (average 54 MBps) 00:25:14.941 00:25:14.941 02:48:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:14.941 02:48:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:14.941 02:48:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:14.941 02:48:39 -- dd/common.sh@11 -- # local nvme_ref= 00:25:14.941 02:48:39 -- dd/common.sh@12 -- # local size=57344 00:25:14.941 02:48:39 -- dd/common.sh@14 -- # local bs=1048576 00:25:14.941 02:48:39 -- dd/common.sh@15 -- # local count=1 00:25:14.941 02:48:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:14.941 02:48:39 -- dd/common.sh@18 -- # gen_conf 00:25:14.941 02:48:39 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.941 02:48:39 -- common/autotest_common.sh@10 -- # set +x 00:25:14.941 [2024-07-11 02:48:39.929718] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:25:14.941 [2024-07-11 02:48:39.930014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146749 ] 00:25:14.941 { 00:25:14.941 "subsystems": [ 00:25:14.941 { 00:25:14.941 "subsystem": "bdev", 00:25:14.941 "config": [ 00:25:14.941 { 00:25:14.941 "params": { 00:25:14.941 "trtype": "pcie", 00:25:14.941 "traddr": "0000:00:06.0", 00:25:14.941 "name": "Nvme0" 00:25:14.941 }, 00:25:14.941 "method": "bdev_nvme_attach_controller" 00:25:14.941 }, 00:25:14.941 { 00:25:14.941 "method": "bdev_wait_for_examine" 00:25:14.941 } 00:25:14.941 ] 00:25:14.941 } 00:25:14.941 ] 00:25:14.941 } 00:25:15.199 [2024-07-11 02:48:40.071790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.199 [2024-07-11 02:48:40.130571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.765  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:15.765 00:25:15.765 02:48:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:15.765 02:48:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:15.765 02:48:40 -- dd/basic_rw.sh@23 -- # count=3 00:25:15.765 02:48:40 -- dd/basic_rw.sh@24 -- # count=3 00:25:15.765 02:48:40 -- dd/basic_rw.sh@25 -- # size=49152 00:25:15.765 02:48:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:15.765 02:48:40 -- dd/common.sh@98 -- # xtrace_disable 00:25:15.765 02:48:40 -- common/autotest_common.sh@10 -- # set +x 00:25:16.023 02:48:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:16.023 02:48:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:16.023 02:48:41 -- dd/common.sh@31 -- # xtrace_disable 00:25:16.023 02:48:41 -- common/autotest_common.sh@10 -- # set +x 00:25:16.023 [2024-07-11 02:48:41.069474] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
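Before every write pass the harness regenerates the input file (the gen_bytes 61440, 57344 and 49152 calls in these traces). The real generator lives in the dd helpers; a simple stand-in with the same observable behavior, producing n bytes of lowercase alphanumeric data like the dd_rw_offset payload shown further down (the function name is illustrative, not the harness's):

    gen_bytes_standin() {
        local n=$1
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"
    }
    gen_bytes_standin 49152 > dd.dump0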
00:25:16.023 [2024-07-11 02:48:41.069800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146776 ] 00:25:16.023 { 00:25:16.023 "subsystems": [ 00:25:16.023 { 00:25:16.023 "subsystem": "bdev", 00:25:16.023 "config": [ 00:25:16.023 { 00:25:16.023 "params": { 00:25:16.023 "trtype": "pcie", 00:25:16.023 "traddr": "0000:00:06.0", 00:25:16.023 "name": "Nvme0" 00:25:16.023 }, 00:25:16.023 "method": "bdev_nvme_attach_controller" 00:25:16.023 }, 00:25:16.023 { 00:25:16.023 "method": "bdev_wait_for_examine" 00:25:16.023 } 00:25:16.023 ] 00:25:16.023 } 00:25:16.023 ] 00:25:16.023 } 00:25:16.281 [2024-07-11 02:48:41.216609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.281 [2024-07-11 02:48:41.271037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.796  Copying: 48/48 [kB] (average 46 MBps) 00:25:16.796 00:25:16.796 02:48:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:16.796 02:48:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:16.796 02:48:41 -- dd/common.sh@31 -- # xtrace_disable 00:25:16.796 02:48:41 -- common/autotest_common.sh@10 -- # set +x 00:25:16.796 [2024-07-11 02:48:41.753098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:16.796 [2024-07-11 02:48:41.753318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146788 ] 00:25:16.796 { 00:25:16.796 "subsystems": [ 00:25:16.796 { 00:25:16.796 "subsystem": "bdev", 00:25:16.796 "config": [ 00:25:16.796 { 00:25:16.796 "params": { 00:25:16.796 "trtype": "pcie", 00:25:16.796 "traddr": "0000:00:06.0", 00:25:16.796 "name": "Nvme0" 00:25:16.796 }, 00:25:16.796 "method": "bdev_nvme_attach_controller" 00:25:16.796 }, 00:25:16.796 { 00:25:16.796 "method": "bdev_wait_for_examine" 00:25:16.796 } 00:25:16.796 ] 00:25:16.796 } 00:25:16.796 ] 00:25:16.796 } 00:25:17.054 [2024-07-11 02:48:41.889945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.054 [2024-07-11 02:48:41.966879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.313  Copying: 48/48 [kB] (average 46 MBps) 00:25:17.313 00:25:17.313 02:48:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:17.313 02:48:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:17.313 02:48:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:17.313 02:48:42 -- dd/common.sh@11 -- # local nvme_ref= 00:25:17.313 02:48:42 -- dd/common.sh@12 -- # local size=49152 00:25:17.313 02:48:42 -- dd/common.sh@14 -- # local bs=1048576 00:25:17.313 02:48:42 -- dd/common.sh@15 -- # local count=1 00:25:17.313 02:48:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:17.313 02:48:42 -- dd/common.sh@18 -- # gen_conf 00:25:17.313 02:48:42 -- dd/common.sh@31 -- # xtrace_disable 00:25:17.313 02:48:42 -- common/autotest_common.sh@10 -- # set +x 00:25:17.571 [2024-07-11 02:48:42.445456] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:25:17.571 [2024-07-11 02:48:42.445783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146822 ] 00:25:17.571 { 00:25:17.571 "subsystems": [ 00:25:17.571 { 00:25:17.571 "subsystem": "bdev", 00:25:17.571 "config": [ 00:25:17.571 { 00:25:17.571 "params": { 00:25:17.571 "trtype": "pcie", 00:25:17.571 "traddr": "0000:00:06.0", 00:25:17.571 "name": "Nvme0" 00:25:17.571 }, 00:25:17.571 "method": "bdev_nvme_attach_controller" 00:25:17.571 }, 00:25:17.571 { 00:25:17.571 "method": "bdev_wait_for_examine" 00:25:17.571 } 00:25:17.571 ] 00:25:17.571 } 00:25:17.571 ] 00:25:17.571 } 00:25:17.571 [2024-07-11 02:48:42.593645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.830 [2024-07-11 02:48:42.664879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.089  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:18.089 00:25:18.089 02:48:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:18.089 02:48:43 -- dd/basic_rw.sh@23 -- # count=3 00:25:18.089 02:48:43 -- dd/basic_rw.sh@24 -- # count=3 00:25:18.089 02:48:43 -- dd/basic_rw.sh@25 -- # size=49152 00:25:18.089 02:48:43 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:18.089 02:48:43 -- dd/common.sh@98 -- # xtrace_disable 00:25:18.089 02:48:43 -- common/autotest_common.sh@10 -- # set +x 00:25:18.656 02:48:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:18.656 02:48:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:18.656 02:48:43 -- dd/common.sh@31 -- # xtrace_disable 00:25:18.656 02:48:43 -- common/autotest_common.sh@10 -- # set +x 00:25:18.656 { 00:25:18.656 "subsystems": [ 00:25:18.656 { 00:25:18.656 "subsystem": "bdev", 00:25:18.656 "config": [ 00:25:18.656 { 00:25:18.656 "params": { 00:25:18.656 "trtype": "pcie", 00:25:18.656 "traddr": "0000:00:06.0", 00:25:18.656 "name": "Nvme0" 00:25:18.656 }, 00:25:18.656 "method": "bdev_nvme_attach_controller" 00:25:18.656 }, 00:25:18.656 { 00:25:18.656 "method": "bdev_wait_for_examine" 00:25:18.656 } 00:25:18.656 ] 00:25:18.656 } 00:25:18.656 ] 00:25:18.656 } 00:25:18.656 [2024-07-11 02:48:43.667942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:18.656 [2024-07-11 02:48:43.668641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146842 ] 00:25:18.913 [2024-07-11 02:48:43.830650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.913 [2024-07-11 02:48:43.910604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.427  Copying: 48/48 [kB] (average 46 MBps) 00:25:19.427 00:25:19.428 02:48:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:19.428 02:48:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:19.428 02:48:44 -- dd/common.sh@31 -- # xtrace_disable 00:25:19.428 02:48:44 -- common/autotest_common.sh@10 -- # set +x 00:25:19.428 [2024-07-11 02:48:44.429891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:19.428 [2024-07-11 02:48:44.430173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146862 ] 00:25:19.428 { 00:25:19.428 "subsystems": [ 00:25:19.428 { 00:25:19.428 "subsystem": "bdev", 00:25:19.428 "config": [ 00:25:19.428 { 00:25:19.428 "params": { 00:25:19.428 "trtype": "pcie", 00:25:19.428 "traddr": "0000:00:06.0", 00:25:19.428 "name": "Nvme0" 00:25:19.428 }, 00:25:19.428 "method": "bdev_nvme_attach_controller" 00:25:19.428 }, 00:25:19.428 { 00:25:19.428 "method": "bdev_wait_for_examine" 00:25:19.428 } 00:25:19.428 ] 00:25:19.428 } 00:25:19.428 ] 00:25:19.428 } 00:25:19.686 [2024-07-11 02:48:44.578182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.686 [2024-07-11 02:48:44.640007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.203  Copying: 48/48 [kB] (average 46 MBps) 00:25:20.203 00:25:20.203 02:48:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:20.203 02:48:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:20.203 02:48:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:20.203 02:48:45 -- dd/common.sh@11 -- # local nvme_ref= 00:25:20.203 02:48:45 -- dd/common.sh@12 -- # local size=49152 00:25:20.203 02:48:45 -- dd/common.sh@14 -- # local bs=1048576 00:25:20.203 02:48:45 -- dd/common.sh@15 -- # local count=1 00:25:20.203 02:48:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:20.203 02:48:45 -- dd/common.sh@18 -- # gen_conf 00:25:20.203 02:48:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:20.203 02:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:20.203 [2024-07-11 02:48:45.146655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:20.203 [2024-07-11 02:48:45.146891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146877 ] 00:25:20.203 { 00:25:20.203 "subsystems": [ 00:25:20.203 { 00:25:20.203 "subsystem": "bdev", 00:25:20.203 "config": [ 00:25:20.203 { 00:25:20.203 "params": { 00:25:20.203 "trtype": "pcie", 00:25:20.203 "traddr": "0000:00:06.0", 00:25:20.203 "name": "Nvme0" 00:25:20.203 }, 00:25:20.203 "method": "bdev_nvme_attach_controller" 00:25:20.203 }, 00:25:20.203 { 00:25:20.203 "method": "bdev_wait_for_examine" 00:25:20.203 } 00:25:20.203 ] 00:25:20.203 } 00:25:20.203 ] 00:25:20.203 } 00:25:20.203 [2024-07-11 02:48:45.293583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.461 [2024-07-11 02:48:45.375843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.029  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:21.029 00:25:21.029 ************************************ 00:25:21.029 END TEST dd_rw 00:25:21.029 ************************************ 00:25:21.029 00:25:21.029 real 0m16.142s 00:25:21.029 user 0m10.935s 00:25:21.029 sys 0m3.837s 00:25:21.029 02:48:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.029 02:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 02:48:45 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:21.029 02:48:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:21.029 02:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.029 02:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 ************************************ 00:25:21.029 START TEST dd_rw_offset 00:25:21.029 ************************************ 00:25:21.029 02:48:45 -- common/autotest_common.sh@1104 -- # basic_offset 00:25:21.029 02:48:45 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:25:21.029 02:48:45 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:21.029 02:48:45 -- dd/common.sh@98 -- # xtrace_disable 00:25:21.029 02:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 02:48:45 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:21.029 02:48:45 -- dd/basic_rw.sh@56 -- # 
data=z67h00b52is0tzuj26spzgxzcccryj47za9s9nnq4mdl6gwrijfmxgrxkfofns9c6jvuzpqam4nm1h1ywkky099yn7ne7io76i36h92im86v8lioj0d9vi6hpr59ov8lppfds9ucg65x6vcnvyckik0s46v8rb5bu046xs003z3xtv01jr514mbu22p112v6xc6cjf60scc1puo90utgbm6deoesvdzg8a66lrxzb68v3idnv9z6sfb1lstokrhosjw1103hkp0lagsaxbsi3h4qkhyue47yt5o1nbl71pfcb2ma83qeq1btzv9pz2xxhfbphfry2tce8jfl50e87jhueqclw58c7jet725n4s28ni4cxifej811tjuwdr6wpucgduxx16mcsizgud330o5k6rxxxjfksby5minn07jye0drealxmcli3k8p0kfd7x95y3alvsbrtmrs6y5l0ai31c9l7cqvunt8v69kdon79fejjshoz75v5suw1f9fou4zoq36ievg1h537zzhmvu4hjsvmqfkiiqjkziemsxolie13wu9r31yj4wc3en7kpicf63sy02vl3gtlcsqgk8ucjczun4av4nys9fzbl4cyvebqfvenvj5zd842lwvnnuza829a3sln4nmzmmgo4h78z6qjos0rk4php4v6qezsjrpary55z4e8m7fnjxsnh4rpjnbjids44xhjgszx3vo2dliy25m3rsvpoj4mdrm6ejpg2ac3th8sybe8l688jrgkeex68m8wltilel7yrqsfhpj9iy6ok3bdt7p4xhk32qmjp05gwtavd06ayr7323unquw6f7admkt5q3wkwkvxl4bu9cdlyisj732v2vxu8jdjxdbbgvlew0cmw39qi6b4k3jvt24b4yt7mwysfyl2qw3yz0wtdfg9jtw0l957ogp27ev9nq3n8o5nijo1lzjjpaf5i4492doy5tmyv2pk8yixlv6bkih022shhfyt4um77owz93nnjrqivr4w4vnaivmobtip6kwphbkjmwszqr91714d7juiysssy8o8jb5sg7rraroo670cgvf341lcvzdiad3d4q3hvztskj9ihtqg6zth8tait8w4l5rc3f1emuja7paq1212shh41tc8gcvvyfnlw0g1v2r4bgsnns82sx059m007ie6kuasjy5wd8thvd8un3w5ymxvcbo8daylallfbnkofmjijbwynieys1daggxqmf6zsio36cz5mcngqjg9hycwugy2vphpprtecexn4gq7vro01pb8l9knn9vawzecr22zmmlaw1dunbnqq4h512ww6vgkw84i2mvmkzh0ms1vzi519w6bvn6vhn3outa2rzufllsoc37devywqjw5tfhypwgb8reqlso2nsr1k9942o7rb6m17m3fplcw9ipdxuk8c6m3f2pescdtj5plyvw43vnzplfywp03zt6cf577ve68xrzgepdwqbtjf9g23a7fgasg8jovow8vm0w6phx6s5e8qvipwku397pvc6avkq2zxtweukg0j2sap4ael8ogdowd4zgjavupwvn4e1fddvqjzvvlo4hkvqh3bbhw585ixssbheqvytagi1yztl46cx6r55hgah52c7ucieicoqywo76jkt6s07097us00022z4u0rllttd9fc2mbuyhhzl8segs8zmyo845xz4lyt48foecbj26s04xszn3kpip6vco0su1qsnk9lmbtmhrzewyi5vbusja0baagk8aggbwqyl3mjv353pdtp5wtes2ighki1udvwy9sij5f386twbevucvkwg9ndg60f8xzxi4o39ydl392s4oqzwlvtna9yewtjgqr6kf7znhmcq9ez6ms4f27udact58wotw3pzqiz7cke1t8thmq3ruh1bejdxhhgq2dxqg16pos7i4jq7ggvpxklopfqyxjds3meg1m1wpnebzis25kmd26ihzkvzny5kmb9ca4nze6qmyz1ow699nicilfg4phxfejf4ttr5u9y7zy9zuhr9eoi5f07n1vmn1pwwtutk9xz5z7p9wme55umqcc76sk1hfi7x4d4qbka5w2equlc07zqp7xjblak7fwp1101g9p932ysry44tnbnfhct2pkjhk8x4nd8gsp2y7u3q09vdktltuaitff1uvo1lhkrjh8185a1kcaktsbnnqsitb10bmntcm840j3fps29xbbx9n1wsmug7qnkk01n54qp281p59hzky85cg4vg2qb09b7baos566pm9xk0f6liu5zkph1o07eu7mof401zbraxnyf7u3s76tsf2cd2w1z6exl12sw6pq5g2u9s8mr7fb5801r517tfibvvo1df61usbesy8xu83i5u2p8r6aqy0da7axhl8oid9l8x55wfebkszu62qfksk1qfh2q9xk6957i7jcuutycy3ugh8traclosvemyc9dmt4tdnxnfvjtzsse268o4dp99rsyn8h1pdqam6rzdcp6tovwntvc32rlfyxgssgl8slotsrmul5f2fpfvtfb22fbaog66ev9a2artlavawnrfizn2cqt6bhynrzx2hsz9l190m3u2rhir9fnr6c13d131yjeyass02qc2i0avjc6orqfxufomeomlmzkoil6rncitpmn9ho6llh0tgjdekmwi8jfdq6zady8ioox0xw6qh3smzcjgyt64xt6s1rlunjjc0dez6ff90mw7yf34ar1083dw2polen94ti6zki4ytlhdho0oce6v8l98tx9rnoaf5hydid1eyhb77vp6xh0jmrqhoumw60kwlx05w88vtx3igj7ivfz2mf66j237nz37dgnfblzsw1nt6rzojffgtgdzfgxglol6z7xni4yho7o48tz2fszmf3erqmv9h58ee9ouete1ydxwl9003h13rntqtgpyuygvmxm6ot7qh9vy0uzp9peypn6fnn3sw2oqdz60m0l46k559d0448lj7njb9hb21bsjiam3ezig4xgdh4wgowujnadxwplrg6pka11iex764j74pdj4qupyxy9wbyobca2cujplatfbwjoyqnbfg2utj9a58pjc8tej7mgw8yzuwaask8z4bz0cyls4l4gmeclexvfw7t9ka95c1avfnllry2cvithnvqmb0ohbbu5bdxmipjrdnuwp9hsx2ez55e9f3zvg7ouytfzt9e09v3gcuqaer3jnwun7opzms2q52ctjxojt4hhldgmc95gqfp3dlnu6gp71dfuai7sotejgac4pd4ab25rf8dwjoiazmnsurg7nky6ugv0p87h2umc0jsxdicun9iqlg6r0ligeuh1vl6su9o33zvrkayhmtxi8787k4ijdaya6iqr1qfxf4y1jrzmd36ulgu2qag5yw3ub9buc6qc1uhmfkaity4optqo3818f84pgo291o5d3c7qsojza29jw5xohy794fzirm1yr5rtibd92lkghn4pmbzhnqy31sjn
9u8cmqff20qnott5h968l9xokbxjbyb1j6gh1iyj60cn7a16ofvnthsmxnwhaxfvi8ewxmpzng4cgiiycxb3pe1fawgwpiodrr9umhz7r8hbvrp7vqfv074r4ondcna8ztxb99geabuaih4rubd4fyvta8ndrjs7x5zusldehrxogbmsmmjbypdx44x58m9rvq7qx1dvdlam4tdbzpt58qk7cmj0rood3os94qw4ge8wuc7ki93byby4kjqt865puohofbhm5vel2fmjopmvp19re5zlkhibwn2fqoaic34bet6zlxfzr96wf1bmx7hd6j0r4zv8kjsm49angiuvzgssx6dk71ufprqot61npa7dac0rt1jjvo6z01d3qokcy07t5xovha8ubbpb6dxe9i8f5fktsauq7v2yary3zw25ydbzgugxn0z0a4cqiit5f4pdcqa0auko141b25azse1yv8y55vexia2ehnh6jnr710hg770pm8px1ycctazujpdw37f8sdwy7nnt8g1vsolvj31rmea8ww 00:25:21.029 02:48:45 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:21.029 02:48:45 -- dd/basic_rw.sh@59 -- # gen_conf 00:25:21.029 02:48:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.029 02:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 [2024-07-11 02:48:45.986618] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:21.029 [2024-07-11 02:48:45.987408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146911 ] 00:25:21.029 { 00:25:21.029 "subsystems": [ 00:25:21.029 { 00:25:21.029 "subsystem": "bdev", 00:25:21.029 "config": [ 00:25:21.029 { 00:25:21.029 "params": { 00:25:21.029 "trtype": "pcie", 00:25:21.029 "traddr": "0000:00:06.0", 00:25:21.029 "name": "Nvme0" 00:25:21.029 }, 00:25:21.029 "method": "bdev_nvme_attach_controller" 00:25:21.029 }, 00:25:21.029 { 00:25:21.029 "method": "bdev_wait_for_examine" 00:25:21.029 } 00:25:21.029 ] 00:25:21.029 } 00:25:21.029 ] 00:25:21.029 } 00:25:21.288 [2024-07-11 02:48:46.130273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.288 [2024-07-11 02:48:46.204419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.856  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:21.856 00:25:21.856 02:48:46 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:21.856 02:48:46 -- dd/basic_rw.sh@65 -- # gen_conf 00:25:21.856 02:48:46 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.856 02:48:46 -- common/autotest_common.sh@10 -- # set +x 00:25:21.856 [2024-07-11 02:48:46.711567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:21.856 [2024-07-11 02:48:46.711881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146934 ] 00:25:21.856 { 00:25:21.856 "subsystems": [ 00:25:21.856 { 00:25:21.856 "subsystem": "bdev", 00:25:21.856 "config": [ 00:25:21.856 { 00:25:21.856 "params": { 00:25:21.856 "trtype": "pcie", 00:25:21.856 "traddr": "0000:00:06.0", 00:25:21.856 "name": "Nvme0" 00:25:21.856 }, 00:25:21.856 "method": "bdev_nvme_attach_controller" 00:25:21.856 }, 00:25:21.856 { 00:25:21.856 "method": "bdev_wait_for_examine" 00:25:21.856 } 00:25:21.856 ] 00:25:21.856 } 00:25:21.856 ] 00:25:21.856 } 00:25:21.856 [2024-07-11 02:48:46.857565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.856 [2024-07-11 02:48:46.934410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.374  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:22.374 00:25:22.374 02:48:47 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:22.374 ************************************ 00:25:22.374 END TEST dd_rw_offset 00:25:22.374 ************************************ 00:25:22.375 02:48:47 -- dd/basic_rw.sh@72 -- # [[ z67h00b52is0tzuj26spzgxzcccryj47za9s9nnq4mdl6gwrijfmxgrxkfofns9c6jvuzpqam4nm1h1ywkky099yn7ne7io76i36h92im86v8lioj0d9vi6hpr59ov8lppfds9ucg65x6vcnvyckik0s46v8rb5bu046xs003z3xtv01jr514mbu22p112v6xc6cjf60scc1puo90utgbm6deoesvdzg8a66lrxzb68v3idnv9z6sfb1lstokrhosjw1103hkp0lagsaxbsi3h4qkhyue47yt5o1nbl71pfcb2ma83qeq1btzv9pz2xxhfbphfry2tce8jfl50e87jhueqclw58c7jet725n4s28ni4cxifej811tjuwdr6wpucgduxx16mcsizgud330o5k6rxxxjfksby5minn07jye0drealxmcli3k8p0kfd7x95y3alvsbrtmrs6y5l0ai31c9l7cqvunt8v69kdon79fejjshoz75v5suw1f9fou4zoq36ievg1h537zzhmvu4hjsvmqfkiiqjkziemsxolie13wu9r31yj4wc3en7kpicf63sy02vl3gtlcsqgk8ucjczun4av4nys9fzbl4cyvebqfvenvj5zd842lwvnnuza829a3sln4nmzmmgo4h78z6qjos0rk4php4v6qezsjrpary55z4e8m7fnjxsnh4rpjnbjids44xhjgszx3vo2dliy25m3rsvpoj4mdrm6ejpg2ac3th8sybe8l688jrgkeex68m8wltilel7yrqsfhpj9iy6ok3bdt7p4xhk32qmjp05gwtavd06ayr7323unquw6f7admkt5q3wkwkvxl4bu9cdlyisj732v2vxu8jdjxdbbgvlew0cmw39qi6b4k3jvt24b4yt7mwysfyl2qw3yz0wtdfg9jtw0l957ogp27ev9nq3n8o5nijo1lzjjpaf5i4492doy5tmyv2pk8yixlv6bkih022shhfyt4um77owz93nnjrqivr4w4vnaivmobtip6kwphbkjmwszqr91714d7juiysssy8o8jb5sg7rraroo670cgvf341lcvzdiad3d4q3hvztskj9ihtqg6zth8tait8w4l5rc3f1emuja7paq1212shh41tc8gcvvyfnlw0g1v2r4bgsnns82sx059m007ie6kuasjy5wd8thvd8un3w5ymxvcbo8daylallfbnkofmjijbwynieys1daggxqmf6zsio36cz5mcngqjg9hycwugy2vphpprtecexn4gq7vro01pb8l9knn9vawzecr22zmmlaw1dunbnqq4h512ww6vgkw84i2mvmkzh0ms1vzi519w6bvn6vhn3outa2rzufllsoc37devywqjw5tfhypwgb8reqlso2nsr1k9942o7rb6m17m3fplcw9ipdxuk8c6m3f2pescdtj5plyvw43vnzplfywp03zt6cf577ve68xrzgepdwqbtjf9g23a7fgasg8jovow8vm0w6phx6s5e8qvipwku397pvc6avkq2zxtweukg0j2sap4ael8ogdowd4zgjavupwvn4e1fddvqjzvvlo4hkvqh3bbhw585ixssbheqvytagi1yztl46cx6r55hgah52c7ucieicoqywo76jkt6s07097us00022z4u0rllttd9fc2mbuyhhzl8segs8zmyo845xz4lyt48foecbj26s04xszn3kpip6vco0su1qsnk9lmbtmhrzewyi5vbusja0baagk8aggbwqyl3mjv353pdtp5wtes2ighki1udvwy9sij5f386twbevucvkwg9ndg60f8xzxi4o39ydl392s4oqzwlvtna9yewtjgqr6kf7znhmcq9ez6ms4f27udact58wotw3pzqiz7cke1t8thmq3ruh1bejdxhhgq2dxqg16pos7i4jq7ggvpxklopfqyxjds3meg1m1wpnebzis25kmd26ihzkvzny5kmb9ca4nze6qmyz1ow699nicilfg4phxfejf4ttr5u9y7zy9zuhr9eoi5f07n1vmn1pwwtutk9xz5z7p9wme55umqcc76sk1hfi7x4d4qbka5w2equlc07zqp7xjblak7fwp1101g9p932ysry44tnbnfhct2pkjhk8x4nd8gsp2y7u3q09vdktltuaitff1uvo1lhkrjh8185a1kcaktsbnnqsitb10bmntcm840j3fps29
xbbx9n1wsmug7qnkk01n54qp281p59hzky85cg4vg2qb09b7baos566pm9xk0f6liu5zkph1o07eu7mof401zbraxnyf7u3s76tsf2cd2w1z6exl12sw6pq5g2u9s8mr7fb5801r517tfibvvo1df61usbesy8xu83i5u2p8r6aqy0da7axhl8oid9l8x55wfebkszu62qfksk1qfh2q9xk6957i7jcuutycy3ugh8traclosvemyc9dmt4tdnxnfvjtzsse268o4dp99rsyn8h1pdqam6rzdcp6tovwntvc32rlfyxgssgl8slotsrmul5f2fpfvtfb22fbaog66ev9a2artlavawnrfizn2cqt6bhynrzx2hsz9l190m3u2rhir9fnr6c13d131yjeyass02qc2i0avjc6orqfxufomeomlmzkoil6rncitpmn9ho6llh0tgjdekmwi8jfdq6zady8ioox0xw6qh3smzcjgyt64xt6s1rlunjjc0dez6ff90mw7yf34ar1083dw2polen94ti6zki4ytlhdho0oce6v8l98tx9rnoaf5hydid1eyhb77vp6xh0jmrqhoumw60kwlx05w88vtx3igj7ivfz2mf66j237nz37dgnfblzsw1nt6rzojffgtgdzfgxglol6z7xni4yho7o48tz2fszmf3erqmv9h58ee9ouete1ydxwl9003h13rntqtgpyuygvmxm6ot7qh9vy0uzp9peypn6fnn3sw2oqdz60m0l46k559d0448lj7njb9hb21bsjiam3ezig4xgdh4wgowujnadxwplrg6pka11iex764j74pdj4qupyxy9wbyobca2cujplatfbwjoyqnbfg2utj9a58pjc8tej7mgw8yzuwaask8z4bz0cyls4l4gmeclexvfw7t9ka95c1avfnllry2cvithnvqmb0ohbbu5bdxmipjrdnuwp9hsx2ez55e9f3zvg7ouytfzt9e09v3gcuqaer3jnwun7opzms2q52ctjxojt4hhldgmc95gqfp3dlnu6gp71dfuai7sotejgac4pd4ab25rf8dwjoiazmnsurg7nky6ugv0p87h2umc0jsxdicun9iqlg6r0ligeuh1vl6su9o33zvrkayhmtxi8787k4ijdaya6iqr1qfxf4y1jrzmd36ulgu2qag5yw3ub9buc6qc1uhmfkaity4optqo3818f84pgo291o5d3c7qsojza29jw5xohy794fzirm1yr5rtibd92lkghn4pmbzhnqy31sjn9u8cmqff20qnott5h968l9xokbxjbyb1j6gh1iyj60cn7a16ofvnthsmxnwhaxfvi8ewxmpzng4cgiiycxb3pe1fawgwpiodrr9umhz7r8hbvrp7vqfv074r4ondcna8ztxb99geabuaih4rubd4fyvta8ndrjs7x5zusldehrxogbmsmmjbypdx44x58m9rvq7qx1dvdlam4tdbzpt58qk7cmj0rood3os94qw4ge8wuc7ki93byby4kjqt865puohofbhm5vel2fmjopmvp19re5zlkhibwn2fqoaic34bet6zlxfzr96wf1bmx7hd6j0r4zv8kjsm49angiuvzgssx6dk71ufprqot61npa7dac0rt1jjvo6z01d3qokcy07t5xovha8ubbpb6dxe9i8f5fktsauq7v2yary3zw25ydbzgugxn0z0a4cqiit5f4pdcqa0auko141b25azse1yv8y55vexia2ehnh6jnr710hg770pm8px1ycctazujpdw37f8sdwy7nnt8g1vsolvj31rmea8ww == 
\z\6\7\h\0\0\b\5\2\i\s\0\t\z\u\j\2\6\s\p\z\g\x\z\c\c\c\r\y\j\4\7\z\a\9\s\9\n\n\q\4\m\d\l\6\g\w\r\i\j\f\m\x\g\r\x\k\f\o\f\n\s\9\c\6\j\v\u\z\p\q\a\m\4\n\m\1\h\1\y\w\k\k\y\0\9\9\y\n\7\n\e\7\i\o\7\6\i\3\6\h\9\2\i\m\8\6\v\8\l\i\o\j\0\d\9\v\i\6\h\p\r\5\9\o\v\8\l\p\p\f\d\s\9\u\c\g\6\5\x\6\v\c\n\v\y\c\k\i\k\0\s\4\6\v\8\r\b\5\b\u\0\4\6\x\s\0\0\3\z\3\x\t\v\0\1\j\r\5\1\4\m\b\u\2\2\p\1\1\2\v\6\x\c\6\c\j\f\6\0\s\c\c\1\p\u\o\9\0\u\t\g\b\m\6\d\e\o\e\s\v\d\z\g\8\a\6\6\l\r\x\z\b\6\8\v\3\i\d\n\v\9\z\6\s\f\b\1\l\s\t\o\k\r\h\o\s\j\w\1\1\0\3\h\k\p\0\l\a\g\s\a\x\b\s\i\3\h\4\q\k\h\y\u\e\4\7\y\t\5\o\1\n\b\l\7\1\p\f\c\b\2\m\a\8\3\q\e\q\1\b\t\z\v\9\p\z\2\x\x\h\f\b\p\h\f\r\y\2\t\c\e\8\j\f\l\5\0\e\8\7\j\h\u\e\q\c\l\w\5\8\c\7\j\e\t\7\2\5\n\4\s\2\8\n\i\4\c\x\i\f\e\j\8\1\1\t\j\u\w\d\r\6\w\p\u\c\g\d\u\x\x\1\6\m\c\s\i\z\g\u\d\3\3\0\o\5\k\6\r\x\x\x\j\f\k\s\b\y\5\m\i\n\n\0\7\j\y\e\0\d\r\e\a\l\x\m\c\l\i\3\k\8\p\0\k\f\d\7\x\9\5\y\3\a\l\v\s\b\r\t\m\r\s\6\y\5\l\0\a\i\3\1\c\9\l\7\c\q\v\u\n\t\8\v\6\9\k\d\o\n\7\9\f\e\j\j\s\h\o\z\7\5\v\5\s\u\w\1\f\9\f\o\u\4\z\o\q\3\6\i\e\v\g\1\h\5\3\7\z\z\h\m\v\u\4\h\j\s\v\m\q\f\k\i\i\q\j\k\z\i\e\m\s\x\o\l\i\e\1\3\w\u\9\r\3\1\y\j\4\w\c\3\e\n\7\k\p\i\c\f\6\3\s\y\0\2\v\l\3\g\t\l\c\s\q\g\k\8\u\c\j\c\z\u\n\4\a\v\4\n\y\s\9\f\z\b\l\4\c\y\v\e\b\q\f\v\e\n\v\j\5\z\d\8\4\2\l\w\v\n\n\u\z\a\8\2\9\a\3\s\l\n\4\n\m\z\m\m\g\o\4\h\7\8\z\6\q\j\o\s\0\r\k\4\p\h\p\4\v\6\q\e\z\s\j\r\p\a\r\y\5\5\z\4\e\8\m\7\f\n\j\x\s\n\h\4\r\p\j\n\b\j\i\d\s\4\4\x\h\j\g\s\z\x\3\v\o\2\d\l\i\y\2\5\m\3\r\s\v\p\o\j\4\m\d\r\m\6\e\j\p\g\2\a\c\3\t\h\8\s\y\b\e\8\l\6\8\8\j\r\g\k\e\e\x\6\8\m\8\w\l\t\i\l\e\l\7\y\r\q\s\f\h\p\j\9\i\y\6\o\k\3\b\d\t\7\p\4\x\h\k\3\2\q\m\j\p\0\5\g\w\t\a\v\d\0\6\a\y\r\7\3\2\3\u\n\q\u\w\6\f\7\a\d\m\k\t\5\q\3\w\k\w\k\v\x\l\4\b\u\9\c\d\l\y\i\s\j\7\3\2\v\2\v\x\u\8\j\d\j\x\d\b\b\g\v\l\e\w\0\c\m\w\3\9\q\i\6\b\4\k\3\j\v\t\2\4\b\4\y\t\7\m\w\y\s\f\y\l\2\q\w\3\y\z\0\w\t\d\f\g\9\j\t\w\0\l\9\5\7\o\g\p\2\7\e\v\9\n\q\3\n\8\o\5\n\i\j\o\1\l\z\j\j\p\a\f\5\i\4\4\9\2\d\o\y\5\t\m\y\v\2\p\k\8\y\i\x\l\v\6\b\k\i\h\0\2\2\s\h\h\f\y\t\4\u\m\7\7\o\w\z\9\3\n\n\j\r\q\i\v\r\4\w\4\v\n\a\i\v\m\o\b\t\i\p\6\k\w\p\h\b\k\j\m\w\s\z\q\r\9\1\7\1\4\d\7\j\u\i\y\s\s\s\y\8\o\8\j\b\5\s\g\7\r\r\a\r\o\o\6\7\0\c\g\v\f\3\4\1\l\c\v\z\d\i\a\d\3\d\4\q\3\h\v\z\t\s\k\j\9\i\h\t\q\g\6\z\t\h\8\t\a\i\t\8\w\4\l\5\r\c\3\f\1\e\m\u\j\a\7\p\a\q\1\2\1\2\s\h\h\4\1\t\c\8\g\c\v\v\y\f\n\l\w\0\g\1\v\2\r\4\b\g\s\n\n\s\8\2\s\x\0\5\9\m\0\0\7\i\e\6\k\u\a\s\j\y\5\w\d\8\t\h\v\d\8\u\n\3\w\5\y\m\x\v\c\b\o\8\d\a\y\l\a\l\l\f\b\n\k\o\f\m\j\i\j\b\w\y\n\i\e\y\s\1\d\a\g\g\x\q\m\f\6\z\s\i\o\3\6\c\z\5\m\c\n\g\q\j\g\9\h\y\c\w\u\g\y\2\v\p\h\p\p\r\t\e\c\e\x\n\4\g\q\7\v\r\o\0\1\p\b\8\l\9\k\n\n\9\v\a\w\z\e\c\r\2\2\z\m\m\l\a\w\1\d\u\n\b\n\q\q\4\h\5\1\2\w\w\6\v\g\k\w\8\4\i\2\m\v\m\k\z\h\0\m\s\1\v\z\i\5\1\9\w\6\b\v\n\6\v\h\n\3\o\u\t\a\2\r\z\u\f\l\l\s\o\c\3\7\d\e\v\y\w\q\j\w\5\t\f\h\y\p\w\g\b\8\r\e\q\l\s\o\2\n\s\r\1\k\9\9\4\2\o\7\r\b\6\m\1\7\m\3\f\p\l\c\w\9\i\p\d\x\u\k\8\c\6\m\3\f\2\p\e\s\c\d\t\j\5\p\l\y\v\w\4\3\v\n\z\p\l\f\y\w\p\0\3\z\t\6\c\f\5\7\7\v\e\6\8\x\r\z\g\e\p\d\w\q\b\t\j\f\9\g\2\3\a\7\f\g\a\s\g\8\j\o\v\o\w\8\v\m\0\w\6\p\h\x\6\s\5\e\8\q\v\i\p\w\k\u\3\9\7\p\v\c\6\a\v\k\q\2\z\x\t\w\e\u\k\g\0\j\2\s\a\p\4\a\e\l\8\o\g\d\o\w\d\4\z\g\j\a\v\u\p\w\v\n\4\e\1\f\d\d\v\q\j\z\v\v\l\o\4\h\k\v\q\h\3\b\b\h\w\5\8\5\i\x\s\s\b\h\e\q\v\y\t\a\g\i\1\y\z\t\l\4\6\c\x\6\r\5\5\h\g\a\h\5\2\c\7\u\c\i\e\i\c\o\q\y\w\o\7\6\j\k\t\6\s\0\7\0\9\7\u\s\0\0\0\2\2\z\4\u\0\r\l\l\t\t\d\9\f\c\2\m\b\u\y\h\h\z\l\8\s\e\g\s\8\z\m\y\o\8\4\5\x\z\4\l\y\t\4\8\f\o\e\c\b\j\2\6\s\0\4\x\s\z\n\3\k\p\i\p\6\v\c\o\0\s\u\1\q\s\n\k\9\l\m\b\t\m\h\r\z\e\w\y\i\5\v\b\u\
s\j\a\0\b\a\a\g\k\8\a\g\g\b\w\q\y\l\3\m\j\v\3\5\3\p\d\t\p\5\w\t\e\s\2\i\g\h\k\i\1\u\d\v\w\y\9\s\i\j\5\f\3\8\6\t\w\b\e\v\u\c\v\k\w\g\9\n\d\g\6\0\f\8\x\z\x\i\4\o\3\9\y\d\l\3\9\2\s\4\o\q\z\w\l\v\t\n\a\9\y\e\w\t\j\g\q\r\6\k\f\7\z\n\h\m\c\q\9\e\z\6\m\s\4\f\2\7\u\d\a\c\t\5\8\w\o\t\w\3\p\z\q\i\z\7\c\k\e\1\t\8\t\h\m\q\3\r\u\h\1\b\e\j\d\x\h\h\g\q\2\d\x\q\g\1\6\p\o\s\7\i\4\j\q\7\g\g\v\p\x\k\l\o\p\f\q\y\x\j\d\s\3\m\e\g\1\m\1\w\p\n\e\b\z\i\s\2\5\k\m\d\2\6\i\h\z\k\v\z\n\y\5\k\m\b\9\c\a\4\n\z\e\6\q\m\y\z\1\o\w\6\9\9\n\i\c\i\l\f\g\4\p\h\x\f\e\j\f\4\t\t\r\5\u\9\y\7\z\y\9\z\u\h\r\9\e\o\i\5\f\0\7\n\1\v\m\n\1\p\w\w\t\u\t\k\9\x\z\5\z\7\p\9\w\m\e\5\5\u\m\q\c\c\7\6\s\k\1\h\f\i\7\x\4\d\4\q\b\k\a\5\w\2\e\q\u\l\c\0\7\z\q\p\7\x\j\b\l\a\k\7\f\w\p\1\1\0\1\g\9\p\9\3\2\y\s\r\y\4\4\t\n\b\n\f\h\c\t\2\p\k\j\h\k\8\x\4\n\d\8\g\s\p\2\y\7\u\3\q\0\9\v\d\k\t\l\t\u\a\i\t\f\f\1\u\v\o\1\l\h\k\r\j\h\8\1\8\5\a\1\k\c\a\k\t\s\b\n\n\q\s\i\t\b\1\0\b\m\n\t\c\m\8\4\0\j\3\f\p\s\2\9\x\b\b\x\9\n\1\w\s\m\u\g\7\q\n\k\k\0\1\n\5\4\q\p\2\8\1\p\5\9\h\z\k\y\8\5\c\g\4\v\g\2\q\b\0\9\b\7\b\a\o\s\5\6\6\p\m\9\x\k\0\f\6\l\i\u\5\z\k\p\h\1\o\0\7\e\u\7\m\o\f\4\0\1\z\b\r\a\x\n\y\f\7\u\3\s\7\6\t\s\f\2\c\d\2\w\1\z\6\e\x\l\1\2\s\w\6\p\q\5\g\2\u\9\s\8\m\r\7\f\b\5\8\0\1\r\5\1\7\t\f\i\b\v\v\o\1\d\f\6\1\u\s\b\e\s\y\8\x\u\8\3\i\5\u\2\p\8\r\6\a\q\y\0\d\a\7\a\x\h\l\8\o\i\d\9\l\8\x\5\5\w\f\e\b\k\s\z\u\6\2\q\f\k\s\k\1\q\f\h\2\q\9\x\k\6\9\5\7\i\7\j\c\u\u\t\y\c\y\3\u\g\h\8\t\r\a\c\l\o\s\v\e\m\y\c\9\d\m\t\4\t\d\n\x\n\f\v\j\t\z\s\s\e\2\6\8\o\4\d\p\9\9\r\s\y\n\8\h\1\p\d\q\a\m\6\r\z\d\c\p\6\t\o\v\w\n\t\v\c\3\2\r\l\f\y\x\g\s\s\g\l\8\s\l\o\t\s\r\m\u\l\5\f\2\f\p\f\v\t\f\b\2\2\f\b\a\o\g\6\6\e\v\9\a\2\a\r\t\l\a\v\a\w\n\r\f\i\z\n\2\c\q\t\6\b\h\y\n\r\z\x\2\h\s\z\9\l\1\9\0\m\3\u\2\r\h\i\r\9\f\n\r\6\c\1\3\d\1\3\1\y\j\e\y\a\s\s\0\2\q\c\2\i\0\a\v\j\c\6\o\r\q\f\x\u\f\o\m\e\o\m\l\m\z\k\o\i\l\6\r\n\c\i\t\p\m\n\9\h\o\6\l\l\h\0\t\g\j\d\e\k\m\w\i\8\j\f\d\q\6\z\a\d\y\8\i\o\o\x\0\x\w\6\q\h\3\s\m\z\c\j\g\y\t\6\4\x\t\6\s\1\r\l\u\n\j\j\c\0\d\e\z\6\f\f\9\0\m\w\7\y\f\3\4\a\r\1\0\8\3\d\w\2\p\o\l\e\n\9\4\t\i\6\z\k\i\4\y\t\l\h\d\h\o\0\o\c\e\6\v\8\l\9\8\t\x\9\r\n\o\a\f\5\h\y\d\i\d\1\e\y\h\b\7\7\v\p\6\x\h\0\j\m\r\q\h\o\u\m\w\6\0\k\w\l\x\0\5\w\8\8\v\t\x\3\i\g\j\7\i\v\f\z\2\m\f\6\6\j\2\3\7\n\z\3\7\d\g\n\f\b\l\z\s\w\1\n\t\6\r\z\o\j\f\f\g\t\g\d\z\f\g\x\g\l\o\l\6\z\7\x\n\i\4\y\h\o\7\o\4\8\t\z\2\f\s\z\m\f\3\e\r\q\m\v\9\h\5\8\e\e\9\o\u\e\t\e\1\y\d\x\w\l\9\0\0\3\h\1\3\r\n\t\q\t\g\p\y\u\y\g\v\m\x\m\6\o\t\7\q\h\9\v\y\0\u\z\p\9\p\e\y\p\n\6\f\n\n\3\s\w\2\o\q\d\z\6\0\m\0\l\4\6\k\5\5\9\d\0\4\4\8\l\j\7\n\j\b\9\h\b\2\1\b\s\j\i\a\m\3\e\z\i\g\4\x\g\d\h\4\w\g\o\w\u\j\n\a\d\x\w\p\l\r\g\6\p\k\a\1\1\i\e\x\7\6\4\j\7\4\p\d\j\4\q\u\p\y\x\y\9\w\b\y\o\b\c\a\2\c\u\j\p\l\a\t\f\b\w\j\o\y\q\n\b\f\g\2\u\t\j\9\a\5\8\p\j\c\8\t\e\j\7\m\g\w\8\y\z\u\w\a\a\s\k\8\z\4\b\z\0\c\y\l\s\4\l\4\g\m\e\c\l\e\x\v\f\w\7\t\9\k\a\9\5\c\1\a\v\f\n\l\l\r\y\2\c\v\i\t\h\n\v\q\m\b\0\o\h\b\b\u\5\b\d\x\m\i\p\j\r\d\n\u\w\p\9\h\s\x\2\e\z\5\5\e\9\f\3\z\v\g\7\o\u\y\t\f\z\t\9\e\0\9\v\3\g\c\u\q\a\e\r\3\j\n\w\u\n\7\o\p\z\m\s\2\q\5\2\c\t\j\x\o\j\t\4\h\h\l\d\g\m\c\9\5\g\q\f\p\3\d\l\n\u\6\g\p\7\1\d\f\u\a\i\7\s\o\t\e\j\g\a\c\4\p\d\4\a\b\2\5\r\f\8\d\w\j\o\i\a\z\m\n\s\u\r\g\7\n\k\y\6\u\g\v\0\p\8\7\h\2\u\m\c\0\j\s\x\d\i\c\u\n\9\i\q\l\g\6\r\0\l\i\g\e\u\h\1\v\l\6\s\u\9\o\3\3\z\v\r\k\a\y\h\m\t\x\i\8\7\8\7\k\4\i\j\d\a\y\a\6\i\q\r\1\q\f\x\f\4\y\1\j\r\z\m\d\3\6\u\l\g\u\2\q\a\g\5\y\w\3\u\b\9\b\u\c\6\q\c\1\u\h\m\f\k\a\i\t\y\4\o\p\t\q\o\3\8\1\8\f\8\4\p\g\o\2\9\1\o\5\d\3\c\7\q\s\o\j\z\a\2\9\j\w\5\x\o\h\y\7\9\4\f\z\i\r\m\1\y\r\5\r\t\i\b\d\9\2\l\k\g\h\n\4\p\m\b\z\h\n\q\y\3\1\s\j\n\9\u\8\c\m
\q\f\f\2\0\q\n\o\t\t\5\h\9\6\8\l\9\x\o\k\b\x\j\b\y\b\1\j\6\g\h\1\i\y\j\6\0\c\n\7\a\1\6\o\f\v\n\t\h\s\m\x\n\w\h\a\x\f\v\i\8\e\w\x\m\p\z\n\g\4\c\g\i\i\y\c\x\b\3\p\e\1\f\a\w\g\w\p\i\o\d\r\r\9\u\m\h\z\7\r\8\h\b\v\r\p\7\v\q\f\v\0\7\4\r\4\o\n\d\c\n\a\8\z\t\x\b\9\9\g\e\a\b\u\a\i\h\4\r\u\b\d\4\f\y\v\t\a\8\n\d\r\j\s\7\x\5\z\u\s\l\d\e\h\r\x\o\g\b\m\s\m\m\j\b\y\p\d\x\4\4\x\5\8\m\9\r\v\q\7\q\x\1\d\v\d\l\a\m\4\t\d\b\z\p\t\5\8\q\k\7\c\m\j\0\r\o\o\d\3\o\s\9\4\q\w\4\g\e\8\w\u\c\7\k\i\9\3\b\y\b\y\4\k\j\q\t\8\6\5\p\u\o\h\o\f\b\h\m\5\v\e\l\2\f\m\j\o\p\m\v\p\1\9\r\e\5\z\l\k\h\i\b\w\n\2\f\q\o\a\i\c\3\4\b\e\t\6\z\l\x\f\z\r\9\6\w\f\1\b\m\x\7\h\d\6\j\0\r\4\z\v\8\k\j\s\m\4\9\a\n\g\i\u\v\z\g\s\s\x\6\d\k\7\1\u\f\p\r\q\o\t\6\1\n\p\a\7\d\a\c\0\r\t\1\j\j\v\o\6\z\0\1\d\3\q\o\k\c\y\0\7\t\5\x\o\v\h\a\8\u\b\b\p\b\6\d\x\e\9\i\8\f\5\f\k\t\s\a\u\q\7\v\2\y\a\r\y\3\z\w\2\5\y\d\b\z\g\u\g\x\n\0\z\0\a\4\c\q\i\i\t\5\f\4\p\d\c\q\a\0\a\u\k\o\1\4\1\b\2\5\a\z\s\e\1\y\v\8\y\5\5\v\e\x\i\a\2\e\h\n\h\6\j\n\r\7\1\0\h\g\7\7\0\p\m\8\p\x\1\y\c\c\t\a\z\u\j\p\d\w\3\7\f\8\s\d\w\y\7\n\n\t\8\g\1\v\s\o\l\v\j\3\1\r\m\e\a\8\w\w ]] 00:25:22.375 00:25:22.375 real 0m1.517s 00:25:22.375 user 0m0.954s 00:25:22.375 sys 0m0.410s 00:25:22.375 02:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:22.375 02:48:47 -- common/autotest_common.sh@10 -- # set +x 00:25:22.375 02:48:47 -- dd/basic_rw.sh@1 -- # cleanup 00:25:22.375 02:48:47 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:22.375 02:48:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:22.375 02:48:47 -- dd/common.sh@11 -- # local nvme_ref= 00:25:22.375 02:48:47 -- dd/common.sh@12 -- # local size=0xffff 00:25:22.375 02:48:47 -- dd/common.sh@14 -- # local bs=1048576 00:25:22.375 02:48:47 -- dd/common.sh@15 -- # local count=1 00:25:22.375 02:48:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:22.375 02:48:47 -- dd/common.sh@18 -- # gen_conf 00:25:22.375 02:48:47 -- dd/common.sh@31 -- # xtrace_disable 00:25:22.375 02:48:47 -- common/autotest_common.sh@10 -- # set +x 00:25:22.633 [2024-07-11 02:48:47.490906] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
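The clear_nvme teardown above works as follows: gen_conf prints the bdev JSON (echoed again just below) on a spare file descriptor, and spdk_dd zeroes the first mebibyte of the Nvme0n1 bdev through it. A minimal self-contained sketch of the same call, where the here-doc is an assumed stand-in for the real gen_conf helper; the flags and the PCIe address are taken verbatim from the log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# Zero one 1 MiB block of the bdev, attaching the controller via inline JSON.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)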
00:25:22.633 [2024-07-11 02:48:47.491232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146965 ] 00:25:22.633 { 00:25:22.633 "subsystems": [ 00:25:22.633 { 00:25:22.633 "subsystem": "bdev", 00:25:22.633 "config": [ 00:25:22.633 { 00:25:22.633 "params": { 00:25:22.633 "trtype": "pcie", 00:25:22.633 "traddr": "0000:00:06.0", 00:25:22.633 "name": "Nvme0" 00:25:22.633 }, 00:25:22.633 "method": "bdev_nvme_attach_controller" 00:25:22.633 }, 00:25:22.633 { 00:25:22.633 "method": "bdev_wait_for_examine" 00:25:22.633 } 00:25:22.633 ] 00:25:22.633 } 00:25:22.633 ] 00:25:22.633 } 00:25:22.633 [2024-07-11 02:48:47.641436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.633 [2024-07-11 02:48:47.716774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.149  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:23.149 00:25:23.149 02:48:48 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:23.149 00:25:23.149 real 0m19.600s 00:25:23.149 user 0m13.058s 00:25:23.149 sys 0m4.826s 00:25:23.149 02:48:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.149 02:48:48 -- common/autotest_common.sh@10 -- # set +x 00:25:23.149 ************************************ 00:25:23.149 END TEST spdk_dd_basic_rw 00:25:23.149 ************************************ 00:25:23.149 02:48:48 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:23.149 02:48:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:23.149 02:48:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:23.149 02:48:48 -- common/autotest_common.sh@10 -- # set +x 00:25:23.149 ************************************ 00:25:23.149 START TEST spdk_dd_posix 00:25:23.149 ************************************ 00:25:23.149 02:48:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:23.407 * Looking for test storage... 
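Every test phase in this log is driven by the run_test wrapper from common/autotest_common.sh, which produces the asterisk banners and the real/user/sys timing lines. A rough sketch of its shape, inferred from the output alone; the body below is an assumption, not the actual source (the real helper also manages xtrace state):

run_test() {
  local test_name=$1
  shift
  echo '************************************'
  echo "START TEST $test_name"
  echo '************************************'
  time "$@"    # emits the real/user/sys lines seen after each phase
  local rc=$?
  echo '************************************'
  echo "END TEST $test_name"
  echo '************************************'
  return $rc
}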
00:25:23.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:23.407 02:48:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.407 02:48:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.407 02:48:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.407 02:48:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.407 02:48:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:23.407 02:48:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:23.407 02:48:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:23.407 02:48:48 -- paths/export.sh@5 -- # export PATH 00:25:23.407 02:48:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:23.407 02:48:48 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:23.407 02:48:48 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:23.407 02:48:48 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:23.407 02:48:48 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:23.407 02:48:48 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:23.407 02:48:48 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:23.407 02:48:48 -- dd/posix.sh@130 -- # tests 00:25:23.407 02:48:48 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:25:23.407 * First test run, using AIO 00:25:23.407 02:48:48 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:23.407 02:48:48 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:23.407 02:48:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:23.407 02:48:48 -- common/autotest_common.sh@10 -- # set +x 00:25:23.407 ************************************ 00:25:23.407 START TEST dd_flag_append 00:25:23.407 ************************************ 00:25:23.407 02:48:48 -- common/autotest_common.sh@1104 -- # append 00:25:23.407 02:48:48 -- dd/posix.sh@16 -- # local dump0 00:25:23.407 02:48:48 -- dd/posix.sh@17 -- # local dump1 00:25:23.407 02:48:48 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:23.407 02:48:48 -- dd/common.sh@98 -- # xtrace_disable 00:25:23.407 02:48:48 -- common/autotest_common.sh@10 -- # set +x 00:25:23.407 02:48:48 -- dd/posix.sh@19 -- # dump0=fj4wwxns4qlfzwf4cmg2zmrp92c0usrw 00:25:23.407 02:48:48 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:23.407 02:48:48 -- dd/common.sh@98 -- # xtrace_disable 00:25:23.407 02:48:48 -- common/autotest_common.sh@10 -- # set +x 00:25:23.407 02:48:48 -- dd/posix.sh@20 -- # dump1=6ygqtgbaqmy5b1tk3v98o6z6elt8lwcv 00:25:23.407 02:48:48 -- dd/posix.sh@22 -- # printf %s fj4wwxns4qlfzwf4cmg2zmrp92c0usrw 00:25:23.407 02:48:48 -- dd/posix.sh@23 -- # printf %s 6ygqtgbaqmy5b1tk3v98o6z6elt8lwcv 00:25:23.407 02:48:48 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:23.407 [2024-07-11 02:48:48.391371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:23.407 [2024-07-11 02:48:48.392303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147034 ] 00:25:23.665 [2024-07-11 02:48:48.543691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.665 [2024-07-11 02:48:48.622866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.922  Copying: 32/32 [B] (average 31 kBps) 00:25:23.922 00:25:24.180 02:48:49 -- dd/posix.sh@27 -- # [[ 6ygqtgbaqmy5b1tk3v98o6z6elt8lwcvfj4wwxns4qlfzwf4cmg2zmrp92c0usrw == \6\y\g\q\t\g\b\a\q\m\y\5\b\1\t\k\3\v\9\8\o\6\z\6\e\l\t\8\l\w\c\v\f\j\4\w\w\x\n\s\4\q\l\f\z\w\f\4\c\m\g\2\z\m\r\p\9\2\c\0\u\s\r\w ]] 00:25:24.180 00:25:24.180 real 0m0.690s 00:25:24.180 user 0m0.341s 00:25:24.180 sys 0m0.206s 00:25:24.180 02:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.180 02:48:49 -- common/autotest_common.sh@10 -- # set +x 00:25:24.180 ************************************ 00:25:24.180 END TEST dd_flag_append 00:25:24.180 ************************************ 00:25:24.180 02:48:49 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:24.180 02:48:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.180 02:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.180 02:48:49 -- common/autotest_common.sh@10 -- # set +x 00:25:24.180 ************************************ 00:25:24.180 START TEST dd_flag_directory 00:25:24.180 ************************************ 00:25:24.180 02:48:49 -- common/autotest_common.sh@1104 -- # directory 00:25:24.180 02:48:49 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:24.180 02:48:49 -- common/autotest_common.sh@640 -- # local es=0 
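The [[ ... ]] comparison that follows is the substance of dd_flag_append: two 32-byte random strings are written to the dump files, spdk_dd copies dump0 onto dump1 with --oflag=append, and dump1 must then read back as its own original bytes followed by dump0's. Reduced to plain shell (spdk_dd stands for the full binary path above; gen_bytes is the autotest random-string helper seen in the xtrace):

dump0=$(gen_bytes 32)
dump1=$(gen_bytes 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
# O_APPEND-style write: dump1's old content stays, dump0's bytes land after it.
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]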
00:25:24.180 02:48:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:24.180 02:48:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.180 02:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.180 02:48:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.180 02:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.180 02:48:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.180 02:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.180 02:48:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.180 02:48:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:24.180 02:48:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:24.180 [2024-07-11 02:48:49.124863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:24.180 [2024-07-11 02:48:49.125074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147063 ] 00:25:24.180 [2024-07-11 02:48:49.266620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.438 [2024-07-11 02:48:49.355094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.439 [2024-07-11 02:48:49.438126] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:24.439 [2024-07-11 02:48:49.438232] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:24.439 [2024-07-11 02:48:49.438273] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:24.698 [2024-07-11 02:48:49.559080] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:24.698 02:48:49 -- common/autotest_common.sh@643 -- # es=236 00:25:24.698 02:48:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.698 02:48:49 -- common/autotest_common.sh@652 -- # es=108 00:25:24.698 02:48:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:24.698 02:48:49 -- common/autotest_common.sh@660 -- # es=1 00:25:24.698 02:48:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.698 02:48:49 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:24.698 02:48:49 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.698 02:48:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:24.698 02:48:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.698 02:48:49 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.698 02:48:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.698 02:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.698 02:48:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.698 02:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.698 02:48:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.698 02:48:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:24.698 02:48:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:24.698 [2024-07-11 02:48:49.739900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:24.698 [2024-07-11 02:48:49.740405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147081 ] 00:25:24.957 [2024-07-11 02:48:49.892657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.957 [2024-07-11 02:48:49.953102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.957 [2024-07-11 02:48:50.034550] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:24.957 [2024-07-11 02:48:50.034657] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:24.957 [2024-07-11 02:48:50.034700] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:25.215 [2024-07-11 02:48:50.150910] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:25.215 02:48:50 -- common/autotest_common.sh@643 -- # es=236 00:25:25.215 02:48:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.215 02:48:50 -- common/autotest_common.sh@652 -- # es=108 00:25:25.215 02:48:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:25.215 02:48:50 -- common/autotest_common.sh@660 -- # es=1 00:25:25.215 02:48:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.215 00:25:25.215 real 0m1.202s 00:25:25.215 user 0m0.647s 00:25:25.215 sys 0m0.355s 00:25:25.215 02:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.215 ************************************ 00:25:25.215 02:48:50 -- common/autotest_common.sh@10 -- # set +x 00:25:25.215 END TEST dd_flag_directory 00:25:25.215 ************************************ 00:25:25.474 02:48:50 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:25.474 02:48:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:25.474 02:48:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.474 02:48:50 -- common/autotest_common.sh@10 -- # set +x 00:25:25.474 ************************************ 00:25:25.474 START TEST dd_flag_nofollow 00:25:25.474 ************************************ 00:25:25.474 02:48:50 -- common/autotest_common.sh@1104 -- # nofollow 00:25:25.474 02:48:50 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:25.474 02:48:50 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:25.474 02:48:50 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:25.474 02:48:50 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:25.474 02:48:50 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:25.474 02:48:50 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.474 02:48:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:25.474 02:48:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.474 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.474 02:48:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.474 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.474 02:48:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.474 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.474 02:48:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.474 02:48:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:25.474 02:48:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:25.474 [2024-07-11 02:48:50.391929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
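This phase, like the dd_flag_directory phase just before it, asserts failures rather than copies: --iflag/--oflag=directory on a regular file must be rejected with "Not a directory", and --iflag/--oflag=nofollow on the symlinks created above must be rejected with "Too many levels of symbolic links" (ELOOP), while a final run without the flag dereferences the link normally. Condensed, with NOT being the autotest helper that asserts a non-zero exit status:

NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0       # fails: Not a directory
NOT spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory       # same on the write side
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # fails: ELOOP
NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # fails: ELOOP
spdk_dd --if=dd.dump0.link --of=dd.dump1                        # no flag: link is followed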
00:25:25.474 [2024-07-11 02:48:50.392186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147114 ] 00:25:25.474 [2024-07-11 02:48:50.537383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.733 [2024-07-11 02:48:50.607770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.733 [2024-07-11 02:48:50.688574] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:25.733 [2024-07-11 02:48:50.688682] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:25.733 [2024-07-11 02:48:50.688727] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:25.733 [2024-07-11 02:48:50.805993] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:25.992 02:48:50 -- common/autotest_common.sh@643 -- # es=216 00:25:25.992 02:48:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.992 02:48:50 -- common/autotest_common.sh@652 -- # es=88 00:25:25.992 02:48:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:25.992 02:48:50 -- common/autotest_common.sh@660 -- # es=1 00:25:25.992 02:48:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.992 02:48:50 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:25.992 02:48:50 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.992 02:48:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:25.992 02:48:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.992 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.992 02:48:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.992 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.992 02:48:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.992 02:48:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.992 02:48:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.992 02:48:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:25.992 02:48:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:25.992 [2024-07-11 02:48:51.010466] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:25.992 [2024-07-11 02:48:51.010744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147126 ] 00:25:26.250 [2024-07-11 02:48:51.161979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.250 [2024-07-11 02:48:51.245361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.250 [2024-07-11 02:48:51.334722] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:26.250 [2024-07-11 02:48:51.334841] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:26.250 [2024-07-11 02:48:51.334890] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:26.510 [2024-07-11 02:48:51.456031] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:26.510 02:48:51 -- common/autotest_common.sh@643 -- # es=216 00:25:26.510 02:48:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.510 02:48:51 -- common/autotest_common.sh@652 -- # es=88 00:25:26.510 02:48:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:26.510 02:48:51 -- common/autotest_common.sh@660 -- # es=1 00:25:26.510 02:48:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.510 02:48:51 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:26.510 02:48:51 -- dd/common.sh@98 -- # xtrace_disable 00:25:26.510 02:48:51 -- common/autotest_common.sh@10 -- # set +x 00:25:26.510 02:48:51 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:26.769 [2024-07-11 02:48:51.630048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:26.769 [2024-07-11 02:48:51.631006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147138 ] 00:25:26.769 [2024-07-11 02:48:51.782063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.028 [2024-07-11 02:48:51.866267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.287  Copying: 512/512 [B] (average 500 kBps) 00:25:27.287 00:25:27.288 02:48:52 -- dd/posix.sh@49 -- # [[ bhy137wohcbff9m5emu111d0nt0bo7ys3eou4tn91f752m7wd58xzuv3s0ghc00e9j4mh2c5vvyvx2prtt8urxlckq507k66dnwrcm7ve0d8qwdjtjf7tehauxh7am413wlt4fpf6y8lzg1bfu9215ztjbm4azeiy3v684lqxgh8k1ou75pwbwk1vb1k0vtid1vacbaatwdr23dlduwiybi9eahgtbogqg045zunngc66vd1za9by4woe0e8w0e2t6n9f56xq7e7m4r6u0fo5flvwmix9x8wlma8igyo5860taamafx3mtwiw3wzc31s6frzieuqh6gm79hha4aysccs8cfuswq71j2k2ptund6nczhqd77btd3cfv8fuh7sa1gh8pk7mfqlo582pcgzi7tqi1k4ezj0vhl43ekeva6nh2zv1h0ctundpx54k5l4tf0nupx2nqkt7hmrp83pffhyaycgg91ue6dcq5qilwrzot6dfqtbq0a2wxkwn9mx == \b\h\y\1\3\7\w\o\h\c\b\f\f\9\m\5\e\m\u\1\1\1\d\0\n\t\0\b\o\7\y\s\3\e\o\u\4\t\n\9\1\f\7\5\2\m\7\w\d\5\8\x\z\u\v\3\s\0\g\h\c\0\0\e\9\j\4\m\h\2\c\5\v\v\y\v\x\2\p\r\t\t\8\u\r\x\l\c\k\q\5\0\7\k\6\6\d\n\w\r\c\m\7\v\e\0\d\8\q\w\d\j\t\j\f\7\t\e\h\a\u\x\h\7\a\m\4\1\3\w\l\t\4\f\p\f\6\y\8\l\z\g\1\b\f\u\9\2\1\5\z\t\j\b\m\4\a\z\e\i\y\3\v\6\8\4\l\q\x\g\h\8\k\1\o\u\7\5\p\w\b\w\k\1\v\b\1\k\0\v\t\i\d\1\v\a\c\b\a\a\t\w\d\r\2\3\d\l\d\u\w\i\y\b\i\9\e\a\h\g\t\b\o\g\q\g\0\4\5\z\u\n\n\g\c\6\6\v\d\1\z\a\9\b\y\4\w\o\e\0\e\8\w\0\e\2\t\6\n\9\f\5\6\x\q\7\e\7\m\4\r\6\u\0\f\o\5\f\l\v\w\m\i\x\9\x\8\w\l\m\a\8\i\g\y\o\5\8\6\0\t\a\a\m\a\f\x\3\m\t\w\i\w\3\w\z\c\3\1\s\6\f\r\z\i\e\u\q\h\6\g\m\7\9\h\h\a\4\a\y\s\c\c\s\8\c\f\u\s\w\q\7\1\j\2\k\2\p\t\u\n\d\6\n\c\z\h\q\d\7\7\b\t\d\3\c\f\v\8\f\u\h\7\s\a\1\g\h\8\p\k\7\m\f\q\l\o\5\8\2\p\c\g\z\i\7\t\q\i\1\k\4\e\z\j\0\v\h\l\4\3\e\k\e\v\a\6\n\h\2\z\v\1\h\0\c\t\u\n\d\p\x\5\4\k\5\l\4\t\f\0\n\u\p\x\2\n\q\k\t\7\h\m\r\p\8\3\p\f\f\h\y\a\y\c\g\g\9\1\u\e\6\d\c\q\5\q\i\l\w\r\z\o\t\6\d\f\q\t\b\q\0\a\2\w\x\k\w\n\9\m\x ]] 00:25:27.288 00:25:27.288 real 0m1.901s 00:25:27.288 user 0m1.018s 00:25:27.288 sys 0m0.547s 00:25:27.288 ************************************ 00:25:27.288 END TEST dd_flag_nofollow 00:25:27.288 ************************************ 00:25:27.288 02:48:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.288 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:25:27.288 02:48:52 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:27.288 02:48:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:27.288 02:48:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.288 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:25:27.288 ************************************ 00:25:27.288 START TEST dd_flag_noatime 00:25:27.288 ************************************ 00:25:27.288 02:48:52 -- common/autotest_common.sh@1104 -- # noatime 00:25:27.288 02:48:52 -- dd/posix.sh@53 -- # local atime_if 00:25:27.288 02:48:52 -- dd/posix.sh@54 -- # local atime_of 00:25:27.288 02:48:52 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:27.288 02:48:52 -- dd/common.sh@98 -- # xtrace_disable 00:25:27.288 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:25:27.288 02:48:52 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:27.288 02:48:52 -- dd/posix.sh@60 -- # atime_if=1720666131 00:25:27.288 02:48:52 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:27.288 02:48:52 -- dd/posix.sh@61 -- # atime_of=1720666132 00:25:27.288 02:48:52 -- dd/posix.sh@66 -- # sleep 1 00:25:28.228 02:48:53 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:28.487 [2024-07-11 02:48:53.365767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:28.487 [2024-07-11 02:48:53.366701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147213 ] 00:25:28.487 [2024-07-11 02:48:53.518135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.746 [2024-07-11 02:48:53.587289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.003  Copying: 512/512 [B] (average 500 kBps) 00:25:29.003 00:25:29.003 02:48:53 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:29.003 02:48:53 -- dd/posix.sh@69 -- # (( atime_if == 1720666131 )) 00:25:29.003 02:48:53 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:29.003 02:48:53 -- dd/posix.sh@70 -- # (( atime_of == 1720666132 )) 00:25:29.003 02:48:53 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:29.003 [2024-07-11 02:48:53.995401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:29.003 [2024-07-11 02:48:53.996271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147220 ] 00:25:29.261 [2024-07-11 02:48:54.145716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.261 [2024-07-11 02:48:54.206000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.520  Copying: 512/512 [B] (average 500 kBps) 00:25:29.520 00:25:29.520 02:48:54 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:29.520 02:48:54 -- dd/posix.sh@73 -- # (( atime_if < 1720666134 )) 00:25:29.520 00:25:29.520 real 0m2.278s 00:25:29.520 user 0m0.615s 00:25:29.520 sys 0m0.382s 00:25:29.520 02:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.520 02:48:54 -- common/autotest_common.sh@10 -- # set +x 00:25:29.520 ************************************ 00:25:29.520 END TEST dd_flag_noatime 00:25:29.520 ************************************ 00:25:29.520 02:48:54 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:29.520 02:48:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.520 02:48:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.520 02:48:54 -- common/autotest_common.sh@10 -- # set +x 00:25:29.779 ************************************ 00:25:29.779 START TEST dd_flags_misc 00:25:29.779 ************************************ 00:25:29.779 02:48:54 -- common/autotest_common.sh@1104 -- # io 00:25:29.779 02:48:54 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:29.779 02:48:54 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
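Backing up to the dd_flag_noatime arithmetic just above: the test snapshots dump0's access time, sleeps one second so the clock can move past the snapshot, then checks that a copy with --iflag=noatime leaves the atime at exactly the recorded epoch value (1720666131 in this run) while a later copy without the flag pushes it strictly past it. In outline:

atime_if=$(stat --printf=%X dd.dump0)            # 1720666131 here
sleep 1
spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime: source atime untouched
spdk_dd --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) ))    # plain read must have bumped it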
00:25:29.779 02:48:54 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:29.779 02:48:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:29.779 02:48:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:29.779 02:48:54 -- dd/common.sh@98 -- # xtrace_disable 00:25:29.779 02:48:54 -- common/autotest_common.sh@10 -- # set +x 00:25:29.779 02:48:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:29.779 02:48:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:29.779 [2024-07-11 02:48:54.675221] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:29.779 [2024-07-11 02:48:54.675624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147256 ] 00:25:29.779 [2024-07-11 02:48:54.821952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.038 [2024-07-11 02:48:54.874719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.296  Copying: 512/512 [B] (average 500 kBps) 00:25:30.296 00:25:30.296 02:48:55 -- dd/posix.sh@93 -- # [[ 60dx9cwnpdyx86jmaw4suh5s70fwwcdvn2kxqnpl3sltbzlhlokinxlarz9b60euuvlpu71qqfemjuoj32n9i4xtpis4zg90xmexmv6jewlvsye7naee9xozhwf2swpwmg5y51xb0jejm6r79sbdfxm357l80oe7l1x0c7on6mgwpf2953wocby5h11fqaftha0p4fercz7htz3gwwqsgbzwhypjm36ja4e5eqa9r0osw3jcs0solac4h9zn8oe9u4j6eg0bxkiwva92bs9yj3y75at17bhmbdf477lf75gd2dk263fp7mixidr8r4pogrycnqg6s9tlmby0xhcsfhihzt2p0sex7mr5zikuvqgell8664hjvxvewh25lezve230x3ory629qyaumnbcmvib8ph864f4szrgkgpsrc0z1zteo6f4ef7kewzvly044ii65qercmz9ibzdcqbym4u0trro1zk8omuh7i63gggti1ad9qi3lrw9snepyglw == \6\0\d\x\9\c\w\n\p\d\y\x\8\6\j\m\a\w\4\s\u\h\5\s\7\0\f\w\w\c\d\v\n\2\k\x\q\n\p\l\3\s\l\t\b\z\l\h\l\o\k\i\n\x\l\a\r\z\9\b\6\0\e\u\u\v\l\p\u\7\1\q\q\f\e\m\j\u\o\j\3\2\n\9\i\4\x\t\p\i\s\4\z\g\9\0\x\m\e\x\m\v\6\j\e\w\l\v\s\y\e\7\n\a\e\e\9\x\o\z\h\w\f\2\s\w\p\w\m\g\5\y\5\1\x\b\0\j\e\j\m\6\r\7\9\s\b\d\f\x\m\3\5\7\l\8\0\o\e\7\l\1\x\0\c\7\o\n\6\m\g\w\p\f\2\9\5\3\w\o\c\b\y\5\h\1\1\f\q\a\f\t\h\a\0\p\4\f\e\r\c\z\7\h\t\z\3\g\w\w\q\s\g\b\z\w\h\y\p\j\m\3\6\j\a\4\e\5\e\q\a\9\r\0\o\s\w\3\j\c\s\0\s\o\l\a\c\4\h\9\z\n\8\o\e\9\u\4\j\6\e\g\0\b\x\k\i\w\v\a\9\2\b\s\9\y\j\3\y\7\5\a\t\1\7\b\h\m\b\d\f\4\7\7\l\f\7\5\g\d\2\d\k\2\6\3\f\p\7\m\i\x\i\d\r\8\r\4\p\o\g\r\y\c\n\q\g\6\s\9\t\l\m\b\y\0\x\h\c\s\f\h\i\h\z\t\2\p\0\s\e\x\7\m\r\5\z\i\k\u\v\q\g\e\l\l\8\6\6\4\h\j\v\x\v\e\w\h\2\5\l\e\z\v\e\2\3\0\x\3\o\r\y\6\2\9\q\y\a\u\m\n\b\c\m\v\i\b\8\p\h\8\6\4\f\4\s\z\r\g\k\g\p\s\r\c\0\z\1\z\t\e\o\6\f\4\e\f\7\k\e\w\z\v\l\y\0\4\4\i\i\6\5\q\e\r\c\m\z\9\i\b\z\d\c\q\b\y\m\4\u\0\t\r\r\o\1\z\k\8\o\m\u\h\7\i\6\3\g\g\g\t\i\1\a\d\9\q\i\3\l\r\w\9\s\n\e\p\y\g\l\w ]] 00:25:30.296 02:48:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:30.296 02:48:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:30.296 [2024-07-11 02:48:55.292638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
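dd_flags_misc, now underway, is a small matrix test: each read flag in flags_ro is paired with every write flag in flags_rw (the read flags plus sync and dsync), giving the eight spdk_dd runs that follow, and after each run the long [[ ... ]] comparison asserts that dump1 came out bit-identical to dump0. The loop structure behind the repetition, in outline:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512 > dd.dump0          # fresh 512-byte payload per read flag
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    [[ $(<dd.dump1) == "$(<dd.dump0)" ]]   # content survives every flag combo
  done
done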
00:25:30.297 [2024-07-11 02:48:55.293940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147261 ] 00:25:30.555 [2024-07-11 02:48:55.446359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.555 [2024-07-11 02:48:55.513736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.817  Copying: 512/512 [B] (average 500 kBps) 00:25:30.817 00:25:30.817 02:48:55 -- dd/posix.sh@93 -- # [[ 60dx9cwnpdyx86jmaw4suh5s70fwwcdvn2kxqnpl3sltbzlhlokinxlarz9b60euuvlpu71qqfemjuoj32n9i4xtpis4zg90xmexmv6jewlvsye7naee9xozhwf2swpwmg5y51xb0jejm6r79sbdfxm357l80oe7l1x0c7on6mgwpf2953wocby5h11fqaftha0p4fercz7htz3gwwqsgbzwhypjm36ja4e5eqa9r0osw3jcs0solac4h9zn8oe9u4j6eg0bxkiwva92bs9yj3y75at17bhmbdf477lf75gd2dk263fp7mixidr8r4pogrycnqg6s9tlmby0xhcsfhihzt2p0sex7mr5zikuvqgell8664hjvxvewh25lezve230x3ory629qyaumnbcmvib8ph864f4szrgkgpsrc0z1zteo6f4ef7kewzvly044ii65qercmz9ibzdcqbym4u0trro1zk8omuh7i63gggti1ad9qi3lrw9snepyglw == \6\0\d\x\9\c\w\n\p\d\y\x\8\6\j\m\a\w\4\s\u\h\5\s\7\0\f\w\w\c\d\v\n\2\k\x\q\n\p\l\3\s\l\t\b\z\l\h\l\o\k\i\n\x\l\a\r\z\9\b\6\0\e\u\u\v\l\p\u\7\1\q\q\f\e\m\j\u\o\j\3\2\n\9\i\4\x\t\p\i\s\4\z\g\9\0\x\m\e\x\m\v\6\j\e\w\l\v\s\y\e\7\n\a\e\e\9\x\o\z\h\w\f\2\s\w\p\w\m\g\5\y\5\1\x\b\0\j\e\j\m\6\r\7\9\s\b\d\f\x\m\3\5\7\l\8\0\o\e\7\l\1\x\0\c\7\o\n\6\m\g\w\p\f\2\9\5\3\w\o\c\b\y\5\h\1\1\f\q\a\f\t\h\a\0\p\4\f\e\r\c\z\7\h\t\z\3\g\w\w\q\s\g\b\z\w\h\y\p\j\m\3\6\j\a\4\e\5\e\q\a\9\r\0\o\s\w\3\j\c\s\0\s\o\l\a\c\4\h\9\z\n\8\o\e\9\u\4\j\6\e\g\0\b\x\k\i\w\v\a\9\2\b\s\9\y\j\3\y\7\5\a\t\1\7\b\h\m\b\d\f\4\7\7\l\f\7\5\g\d\2\d\k\2\6\3\f\p\7\m\i\x\i\d\r\8\r\4\p\o\g\r\y\c\n\q\g\6\s\9\t\l\m\b\y\0\x\h\c\s\f\h\i\h\z\t\2\p\0\s\e\x\7\m\r\5\z\i\k\u\v\q\g\e\l\l\8\6\6\4\h\j\v\x\v\e\w\h\2\5\l\e\z\v\e\2\3\0\x\3\o\r\y\6\2\9\q\y\a\u\m\n\b\c\m\v\i\b\8\p\h\8\6\4\f\4\s\z\r\g\k\g\p\s\r\c\0\z\1\z\t\e\o\6\f\4\e\f\7\k\e\w\z\v\l\y\0\4\4\i\i\6\5\q\e\r\c\m\z\9\i\b\z\d\c\q\b\y\m\4\u\0\t\r\r\o\1\z\k\8\o\m\u\h\7\i\6\3\g\g\g\t\i\1\a\d\9\q\i\3\l\r\w\9\s\n\e\p\y\g\l\w ]] 00:25:30.817 02:48:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:30.817 02:48:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:31.082 [2024-07-11 02:48:55.922629] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:31.082 [2024-07-11 02:48:55.923050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147282 ] 00:25:31.082 [2024-07-11 02:48:56.068640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.082 [2024-07-11 02:48:56.128546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.599  Copying: 512/512 [B] (average 250 kBps) 00:25:31.599 00:25:31.599 02:48:56 -- dd/posix.sh@93 -- # [[ 60dx9cwnpdyx86jmaw4suh5s70fwwcdvn2kxqnpl3sltbzlhlokinxlarz9b60euuvlpu71qqfemjuoj32n9i4xtpis4zg90xmexmv6jewlvsye7naee9xozhwf2swpwmg5y51xb0jejm6r79sbdfxm357l80oe7l1x0c7on6mgwpf2953wocby5h11fqaftha0p4fercz7htz3gwwqsgbzwhypjm36ja4e5eqa9r0osw3jcs0solac4h9zn8oe9u4j6eg0bxkiwva92bs9yj3y75at17bhmbdf477lf75gd2dk263fp7mixidr8r4pogrycnqg6s9tlmby0xhcsfhihzt2p0sex7mr5zikuvqgell8664hjvxvewh25lezve230x3ory629qyaumnbcmvib8ph864f4szrgkgpsrc0z1zteo6f4ef7kewzvly044ii65qercmz9ibzdcqbym4u0trro1zk8omuh7i63gggti1ad9qi3lrw9snepyglw == \6\0\d\x\9\c\w\n\p\d\y\x\8\6\j\m\a\w\4\s\u\h\5\s\7\0\f\w\w\c\d\v\n\2\k\x\q\n\p\l\3\s\l\t\b\z\l\h\l\o\k\i\n\x\l\a\r\z\9\b\6\0\e\u\u\v\l\p\u\7\1\q\q\f\e\m\j\u\o\j\3\2\n\9\i\4\x\t\p\i\s\4\z\g\9\0\x\m\e\x\m\v\6\j\e\w\l\v\s\y\e\7\n\a\e\e\9\x\o\z\h\w\f\2\s\w\p\w\m\g\5\y\5\1\x\b\0\j\e\j\m\6\r\7\9\s\b\d\f\x\m\3\5\7\l\8\0\o\e\7\l\1\x\0\c\7\o\n\6\m\g\w\p\f\2\9\5\3\w\o\c\b\y\5\h\1\1\f\q\a\f\t\h\a\0\p\4\f\e\r\c\z\7\h\t\z\3\g\w\w\q\s\g\b\z\w\h\y\p\j\m\3\6\j\a\4\e\5\e\q\a\9\r\0\o\s\w\3\j\c\s\0\s\o\l\a\c\4\h\9\z\n\8\o\e\9\u\4\j\6\e\g\0\b\x\k\i\w\v\a\9\2\b\s\9\y\j\3\y\7\5\a\t\1\7\b\h\m\b\d\f\4\7\7\l\f\7\5\g\d\2\d\k\2\6\3\f\p\7\m\i\x\i\d\r\8\r\4\p\o\g\r\y\c\n\q\g\6\s\9\t\l\m\b\y\0\x\h\c\s\f\h\i\h\z\t\2\p\0\s\e\x\7\m\r\5\z\i\k\u\v\q\g\e\l\l\8\6\6\4\h\j\v\x\v\e\w\h\2\5\l\e\z\v\e\2\3\0\x\3\o\r\y\6\2\9\q\y\a\u\m\n\b\c\m\v\i\b\8\p\h\8\6\4\f\4\s\z\r\g\k\g\p\s\r\c\0\z\1\z\t\e\o\6\f\4\e\f\7\k\e\w\z\v\l\y\0\4\4\i\i\6\5\q\e\r\c\m\z\9\i\b\z\d\c\q\b\y\m\4\u\0\t\r\r\o\1\z\k\8\o\m\u\h\7\i\6\3\g\g\g\t\i\1\a\d\9\q\i\3\l\r\w\9\s\n\e\p\y\g\l\w ]] 00:25:31.599 02:48:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:31.599 02:48:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:31.599 [2024-07-11 02:48:56.535075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:31.599 [2024-07-11 02:48:56.535873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147294 ] 00:25:31.599 [2024-07-11 02:48:56.677917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.859 [2024-07-11 02:48:56.737994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.118  Copying: 512/512 [B] (average 166 kBps) 00:25:32.118 00:25:32.118 02:48:57 -- dd/posix.sh@93 -- # [[ 60dx9cwnpdyx86jmaw4suh5s70fwwcdvn2kxqnpl3sltbzlhlokinxlarz9b60euuvlpu71qqfemjuoj32n9i4xtpis4zg90xmexmv6jewlvsye7naee9xozhwf2swpwmg5y51xb0jejm6r79sbdfxm357l80oe7l1x0c7on6mgwpf2953wocby5h11fqaftha0p4fercz7htz3gwwqsgbzwhypjm36ja4e5eqa9r0osw3jcs0solac4h9zn8oe9u4j6eg0bxkiwva92bs9yj3y75at17bhmbdf477lf75gd2dk263fp7mixidr8r4pogrycnqg6s9tlmby0xhcsfhihzt2p0sex7mr5zikuvqgell8664hjvxvewh25lezve230x3ory629qyaumnbcmvib8ph864f4szrgkgpsrc0z1zteo6f4ef7kewzvly044ii65qercmz9ibzdcqbym4u0trro1zk8omuh7i63gggti1ad9qi3lrw9snepyglw == \6\0\d\x\9\c\w\n\p\d\y\x\8\6\j\m\a\w\4\s\u\h\5\s\7\0\f\w\w\c\d\v\n\2\k\x\q\n\p\l\3\s\l\t\b\z\l\h\l\o\k\i\n\x\l\a\r\z\9\b\6\0\e\u\u\v\l\p\u\7\1\q\q\f\e\m\j\u\o\j\3\2\n\9\i\4\x\t\p\i\s\4\z\g\9\0\x\m\e\x\m\v\6\j\e\w\l\v\s\y\e\7\n\a\e\e\9\x\o\z\h\w\f\2\s\w\p\w\m\g\5\y\5\1\x\b\0\j\e\j\m\6\r\7\9\s\b\d\f\x\m\3\5\7\l\8\0\o\e\7\l\1\x\0\c\7\o\n\6\m\g\w\p\f\2\9\5\3\w\o\c\b\y\5\h\1\1\f\q\a\f\t\h\a\0\p\4\f\e\r\c\z\7\h\t\z\3\g\w\w\q\s\g\b\z\w\h\y\p\j\m\3\6\j\a\4\e\5\e\q\a\9\r\0\o\s\w\3\j\c\s\0\s\o\l\a\c\4\h\9\z\n\8\o\e\9\u\4\j\6\e\g\0\b\x\k\i\w\v\a\9\2\b\s\9\y\j\3\y\7\5\a\t\1\7\b\h\m\b\d\f\4\7\7\l\f\7\5\g\d\2\d\k\2\6\3\f\p\7\m\i\x\i\d\r\8\r\4\p\o\g\r\y\c\n\q\g\6\s\9\t\l\m\b\y\0\x\h\c\s\f\h\i\h\z\t\2\p\0\s\e\x\7\m\r\5\z\i\k\u\v\q\g\e\l\l\8\6\6\4\h\j\v\x\v\e\w\h\2\5\l\e\z\v\e\2\3\0\x\3\o\r\y\6\2\9\q\y\a\u\m\n\b\c\m\v\i\b\8\p\h\8\6\4\f\4\s\z\r\g\k\g\p\s\r\c\0\z\1\z\t\e\o\6\f\4\e\f\7\k\e\w\z\v\l\y\0\4\4\i\i\6\5\q\e\r\c\m\z\9\i\b\z\d\c\q\b\y\m\4\u\0\t\r\r\o\1\z\k\8\o\m\u\h\7\i\6\3\g\g\g\t\i\1\a\d\9\q\i\3\l\r\w\9\s\n\e\p\y\g\l\w ]] 00:25:32.118 02:48:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:32.118 02:48:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:32.118 02:48:57 -- dd/common.sh@98 -- # xtrace_disable 00:25:32.118 02:48:57 -- common/autotest_common.sh@10 -- # set +x 00:25:32.118 02:48:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:32.118 02:48:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:32.118 [2024-07-11 02:48:57.181069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:32.119 [2024-07-11 02:48:57.181861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147304 ] 00:25:32.377 [2024-07-11 02:48:57.327630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.377 [2024-07-11 02:48:57.401431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.894  Copying: 512/512 [B] (average 500 kBps) 00:25:32.894 00:25:32.894 02:48:57 -- dd/posix.sh@93 -- # [[ o8dk3h6ullevmd96lyf6ea6lbfk79e2mmy358th8fhyge5ssoyv9vcphsbk1inc1btqgxwmu2govfwszzscq3zvlrfhnfejwyg59073mqm3s50jtbhqug5ibq65jzz8vcgn5bb7ohmtbkitcel27q65z2kj0yewhlb3w6ivo205p57pqmvr77vsc15zt30s0055v40xtfw7e5vvv0iab9rp6fu90yujxplvlmdtqxnxtc01g4lue5grf7pgk7qrlo7mg3ep0l0rbtd29lains9gcdnyjmdj4wk0a4gwqijetzo7jk37ywij0t2ujm5ccgyauretq4fq9inmcjvya6y8jzm57loz9cicy0lv2v7t0uodt47t4fhwh6tpws07uebyoh2hg2zhuh5iv7p1ty3ttzxf7lk1cq18d1jlxh32wdvwaewm6vxvg3y4ivudotk5xgbj69ssfxmxd0mzvtt6qnen9p06wgp89p1d1u20okzu82ruouepvostzir4s == \o\8\d\k\3\h\6\u\l\l\e\v\m\d\9\6\l\y\f\6\e\a\6\l\b\f\k\7\9\e\2\m\m\y\3\5\8\t\h\8\f\h\y\g\e\5\s\s\o\y\v\9\v\c\p\h\s\b\k\1\i\n\c\1\b\t\q\g\x\w\m\u\2\g\o\v\f\w\s\z\z\s\c\q\3\z\v\l\r\f\h\n\f\e\j\w\y\g\5\9\0\7\3\m\q\m\3\s\5\0\j\t\b\h\q\u\g\5\i\b\q\6\5\j\z\z\8\v\c\g\n\5\b\b\7\o\h\m\t\b\k\i\t\c\e\l\2\7\q\6\5\z\2\k\j\0\y\e\w\h\l\b\3\w\6\i\v\o\2\0\5\p\5\7\p\q\m\v\r\7\7\v\s\c\1\5\z\t\3\0\s\0\0\5\5\v\4\0\x\t\f\w\7\e\5\v\v\v\0\i\a\b\9\r\p\6\f\u\9\0\y\u\j\x\p\l\v\l\m\d\t\q\x\n\x\t\c\0\1\g\4\l\u\e\5\g\r\f\7\p\g\k\7\q\r\l\o\7\m\g\3\e\p\0\l\0\r\b\t\d\2\9\l\a\i\n\s\9\g\c\d\n\y\j\m\d\j\4\w\k\0\a\4\g\w\q\i\j\e\t\z\o\7\j\k\3\7\y\w\i\j\0\t\2\u\j\m\5\c\c\g\y\a\u\r\e\t\q\4\f\q\9\i\n\m\c\j\v\y\a\6\y\8\j\z\m\5\7\l\o\z\9\c\i\c\y\0\l\v\2\v\7\t\0\u\o\d\t\4\7\t\4\f\h\w\h\6\t\p\w\s\0\7\u\e\b\y\o\h\2\h\g\2\z\h\u\h\5\i\v\7\p\1\t\y\3\t\t\z\x\f\7\l\k\1\c\q\1\8\d\1\j\l\x\h\3\2\w\d\v\w\a\e\w\m\6\v\x\v\g\3\y\4\i\v\u\d\o\t\k\5\x\g\b\j\6\9\s\s\f\x\m\x\d\0\m\z\v\t\t\6\q\n\e\n\9\p\0\6\w\g\p\8\9\p\1\d\1\u\2\0\o\k\z\u\8\2\r\u\o\u\e\p\v\o\s\t\z\i\r\4\s ]] 00:25:32.894 02:48:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:32.894 02:48:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:32.894 [2024-07-11 02:48:57.807262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:32.894 [2024-07-11 02:48:57.807487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147316 ] 00:25:32.894 [2024-07-11 02:48:57.952437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.153 [2024-07-11 02:48:58.016763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.412  Copying: 512/512 [B] (average 500 kBps) 00:25:33.412 00:25:33.412 02:48:58 -- dd/posix.sh@93 -- # [[ o8dk3h6ullevmd96lyf6ea6lbfk79e2mmy358th8fhyge5ssoyv9vcphsbk1inc1btqgxwmu2govfwszzscq3zvlrfhnfejwyg59073mqm3s50jtbhqug5ibq65jzz8vcgn5bb7ohmtbkitcel27q65z2kj0yewhlb3w6ivo205p57pqmvr77vsc15zt30s0055v40xtfw7e5vvv0iab9rp6fu90yujxplvlmdtqxnxtc01g4lue5grf7pgk7qrlo7mg3ep0l0rbtd29lains9gcdnyjmdj4wk0a4gwqijetzo7jk37ywij0t2ujm5ccgyauretq4fq9inmcjvya6y8jzm57loz9cicy0lv2v7t0uodt47t4fhwh6tpws07uebyoh2hg2zhuh5iv7p1ty3ttzxf7lk1cq18d1jlxh32wdvwaewm6vxvg3y4ivudotk5xgbj69ssfxmxd0mzvtt6qnen9p06wgp89p1d1u20okzu82ruouepvostzir4s == \o\8\d\k\3\h\6\u\l\l\e\v\m\d\9\6\l\y\f\6\e\a\6\l\b\f\k\7\9\e\2\m\m\y\3\5\8\t\h\8\f\h\y\g\e\5\s\s\o\y\v\9\v\c\p\h\s\b\k\1\i\n\c\1\b\t\q\g\x\w\m\u\2\g\o\v\f\w\s\z\z\s\c\q\3\z\v\l\r\f\h\n\f\e\j\w\y\g\5\9\0\7\3\m\q\m\3\s\5\0\j\t\b\h\q\u\g\5\i\b\q\6\5\j\z\z\8\v\c\g\n\5\b\b\7\o\h\m\t\b\k\i\t\c\e\l\2\7\q\6\5\z\2\k\j\0\y\e\w\h\l\b\3\w\6\i\v\o\2\0\5\p\5\7\p\q\m\v\r\7\7\v\s\c\1\5\z\t\3\0\s\0\0\5\5\v\4\0\x\t\f\w\7\e\5\v\v\v\0\i\a\b\9\r\p\6\f\u\9\0\y\u\j\x\p\l\v\l\m\d\t\q\x\n\x\t\c\0\1\g\4\l\u\e\5\g\r\f\7\p\g\k\7\q\r\l\o\7\m\g\3\e\p\0\l\0\r\b\t\d\2\9\l\a\i\n\s\9\g\c\d\n\y\j\m\d\j\4\w\k\0\a\4\g\w\q\i\j\e\t\z\o\7\j\k\3\7\y\w\i\j\0\t\2\u\j\m\5\c\c\g\y\a\u\r\e\t\q\4\f\q\9\i\n\m\c\j\v\y\a\6\y\8\j\z\m\5\7\l\o\z\9\c\i\c\y\0\l\v\2\v\7\t\0\u\o\d\t\4\7\t\4\f\h\w\h\6\t\p\w\s\0\7\u\e\b\y\o\h\2\h\g\2\z\h\u\h\5\i\v\7\p\1\t\y\3\t\t\z\x\f\7\l\k\1\c\q\1\8\d\1\j\l\x\h\3\2\w\d\v\w\a\e\w\m\6\v\x\v\g\3\y\4\i\v\u\d\o\t\k\5\x\g\b\j\6\9\s\s\f\x\m\x\d\0\m\z\v\t\t\6\q\n\e\n\9\p\0\6\w\g\p\8\9\p\1\d\1\u\2\0\o\k\z\u\8\2\r\u\o\u\e\p\v\o\s\t\z\i\r\4\s ]] 00:25:33.412 02:48:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:33.412 02:48:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:33.412 [2024-07-11 02:48:58.444411] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:33.412 [2024-07-11 02:48:58.444662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147333 ] 00:25:33.670 [2024-07-11 02:48:58.587689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.670 [2024-07-11 02:48:58.644646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.928  Copying: 512/512 [B] (average 166 kBps) 00:25:33.928 00:25:33.928 02:48:59 -- dd/posix.sh@93 -- # [[ o8dk3h6ullevmd96lyf6ea6lbfk79e2mmy358th8fhyge5ssoyv9vcphsbk1inc1btqgxwmu2govfwszzscq3zvlrfhnfejwyg59073mqm3s50jtbhqug5ibq65jzz8vcgn5bb7ohmtbkitcel27q65z2kj0yewhlb3w6ivo205p57pqmvr77vsc15zt30s0055v40xtfw7e5vvv0iab9rp6fu90yujxplvlmdtqxnxtc01g4lue5grf7pgk7qrlo7mg3ep0l0rbtd29lains9gcdnyjmdj4wk0a4gwqijetzo7jk37ywij0t2ujm5ccgyauretq4fq9inmcjvya6y8jzm57loz9cicy0lv2v7t0uodt47t4fhwh6tpws07uebyoh2hg2zhuh5iv7p1ty3ttzxf7lk1cq18d1jlxh32wdvwaewm6vxvg3y4ivudotk5xgbj69ssfxmxd0mzvtt6qnen9p06wgp89p1d1u20okzu82ruouepvostzir4s == \o\8\d\k\3\h\6\u\l\l\e\v\m\d\9\6\l\y\f\6\e\a\6\l\b\f\k\7\9\e\2\m\m\y\3\5\8\t\h\8\f\h\y\g\e\5\s\s\o\y\v\9\v\c\p\h\s\b\k\1\i\n\c\1\b\t\q\g\x\w\m\u\2\g\o\v\f\w\s\z\z\s\c\q\3\z\v\l\r\f\h\n\f\e\j\w\y\g\5\9\0\7\3\m\q\m\3\s\5\0\j\t\b\h\q\u\g\5\i\b\q\6\5\j\z\z\8\v\c\g\n\5\b\b\7\o\h\m\t\b\k\i\t\c\e\l\2\7\q\6\5\z\2\k\j\0\y\e\w\h\l\b\3\w\6\i\v\o\2\0\5\p\5\7\p\q\m\v\r\7\7\v\s\c\1\5\z\t\3\0\s\0\0\5\5\v\4\0\x\t\f\w\7\e\5\v\v\v\0\i\a\b\9\r\p\6\f\u\9\0\y\u\j\x\p\l\v\l\m\d\t\q\x\n\x\t\c\0\1\g\4\l\u\e\5\g\r\f\7\p\g\k\7\q\r\l\o\7\m\g\3\e\p\0\l\0\r\b\t\d\2\9\l\a\i\n\s\9\g\c\d\n\y\j\m\d\j\4\w\k\0\a\4\g\w\q\i\j\e\t\z\o\7\j\k\3\7\y\w\i\j\0\t\2\u\j\m\5\c\c\g\y\a\u\r\e\t\q\4\f\q\9\i\n\m\c\j\v\y\a\6\y\8\j\z\m\5\7\l\o\z\9\c\i\c\y\0\l\v\2\v\7\t\0\u\o\d\t\4\7\t\4\f\h\w\h\6\t\p\w\s\0\7\u\e\b\y\o\h\2\h\g\2\z\h\u\h\5\i\v\7\p\1\t\y\3\t\t\z\x\f\7\l\k\1\c\q\1\8\d\1\j\l\x\h\3\2\w\d\v\w\a\e\w\m\6\v\x\v\g\3\y\4\i\v\u\d\o\t\k\5\x\g\b\j\6\9\s\s\f\x\m\x\d\0\m\z\v\t\t\6\q\n\e\n\9\p\0\6\w\g\p\8\9\p\1\d\1\u\2\0\o\k\z\u\8\2\r\u\o\u\e\p\v\o\s\t\z\i\r\4\s ]] 00:25:33.928 02:48:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:33.928 02:48:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:34.186 [2024-07-11 02:48:59.066944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:34.186 [2024-07-11 02:48:59.067161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147338 ] 00:25:34.186 [2024-07-11 02:48:59.213017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.444 [2024-07-11 02:48:59.297810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.703  Copying: 512/512 [B] (average 250 kBps) 00:25:34.703 00:25:34.703 02:48:59 -- dd/posix.sh@93 -- # [[ o8dk3h6ullevmd96lyf6ea6lbfk79e2mmy358th8fhyge5ssoyv9vcphsbk1inc1btqgxwmu2govfwszzscq3zvlrfhnfejwyg59073mqm3s50jtbhqug5ibq65jzz8vcgn5bb7ohmtbkitcel27q65z2kj0yewhlb3w6ivo205p57pqmvr77vsc15zt30s0055v40xtfw7e5vvv0iab9rp6fu90yujxplvlmdtqxnxtc01g4lue5grf7pgk7qrlo7mg3ep0l0rbtd29lains9gcdnyjmdj4wk0a4gwqijetzo7jk37ywij0t2ujm5ccgyauretq4fq9inmcjvya6y8jzm57loz9cicy0lv2v7t0uodt47t4fhwh6tpws07uebyoh2hg2zhuh5iv7p1ty3ttzxf7lk1cq18d1jlxh32wdvwaewm6vxvg3y4ivudotk5xgbj69ssfxmxd0mzvtt6qnen9p06wgp89p1d1u20okzu82ruouepvostzir4s == \o\8\d\k\3\h\6\u\l\l\e\v\m\d\9\6\l\y\f\6\e\a\6\l\b\f\k\7\9\e\2\m\m\y\3\5\8\t\h\8\f\h\y\g\e\5\s\s\o\y\v\9\v\c\p\h\s\b\k\1\i\n\c\1\b\t\q\g\x\w\m\u\2\g\o\v\f\w\s\z\z\s\c\q\3\z\v\l\r\f\h\n\f\e\j\w\y\g\5\9\0\7\3\m\q\m\3\s\5\0\j\t\b\h\q\u\g\5\i\b\q\6\5\j\z\z\8\v\c\g\n\5\b\b\7\o\h\m\t\b\k\i\t\c\e\l\2\7\q\6\5\z\2\k\j\0\y\e\w\h\l\b\3\w\6\i\v\o\2\0\5\p\5\7\p\q\m\v\r\7\7\v\s\c\1\5\z\t\3\0\s\0\0\5\5\v\4\0\x\t\f\w\7\e\5\v\v\v\0\i\a\b\9\r\p\6\f\u\9\0\y\u\j\x\p\l\v\l\m\d\t\q\x\n\x\t\c\0\1\g\4\l\u\e\5\g\r\f\7\p\g\k\7\q\r\l\o\7\m\g\3\e\p\0\l\0\r\b\t\d\2\9\l\a\i\n\s\9\g\c\d\n\y\j\m\d\j\4\w\k\0\a\4\g\w\q\i\j\e\t\z\o\7\j\k\3\7\y\w\i\j\0\t\2\u\j\m\5\c\c\g\y\a\u\r\e\t\q\4\f\q\9\i\n\m\c\j\v\y\a\6\y\8\j\z\m\5\7\l\o\z\9\c\i\c\y\0\l\v\2\v\7\t\0\u\o\d\t\4\7\t\4\f\h\w\h\6\t\p\w\s\0\7\u\e\b\y\o\h\2\h\g\2\z\h\u\h\5\i\v\7\p\1\t\y\3\t\t\z\x\f\7\l\k\1\c\q\1\8\d\1\j\l\x\h\3\2\w\d\v\w\a\e\w\m\6\v\x\v\g\3\y\4\i\v\u\d\o\t\k\5\x\g\b\j\6\9\s\s\f\x\m\x\d\0\m\z\v\t\t\6\q\n\e\n\9\p\0\6\w\g\p\8\9\p\1\d\1\u\2\0\o\k\z\u\8\2\r\u\o\u\e\p\v\o\s\t\z\i\r\4\s ]] 00:25:34.703 ************************************ 00:25:34.703 END TEST dd_flags_misc 00:25:34.703 ************************************ 00:25:34.703 00:25:34.703 real 0m5.100s 00:25:34.703 user 0m2.490s 00:25:34.703 sys 0m1.487s 00:25:34.703 02:48:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.703 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 02:48:59 -- dd/posix.sh@131 -- # tests_forced_aio 00:25:34.703 02:48:59 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:25:34.703 * Second test run, using AIO 00:25:34.703 02:48:59 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:25:34.703 02:48:59 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:25:34.703 02:48:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:34.703 02:48:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.703 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 ************************************ 00:25:34.703 START TEST dd_flag_append_forced_aio 00:25:34.703 ************************************ 00:25:34.703 02:48:59 -- common/autotest_common.sh@1104 -- # append 00:25:34.703 02:48:59 -- dd/posix.sh@16 -- # local dump0 00:25:34.703 02:48:59 -- dd/posix.sh@17 -- # local dump1 00:25:34.703 02:48:59 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:34.703 02:48:59 -- dd/common.sh@98 -- # xtrace_disable 
00:25:34.703 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 02:48:59 -- dd/posix.sh@19 -- # dump0=p7dxui9ijzb8ne321f8bin66f4caowso 00:25:34.703 02:48:59 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:34.703 02:48:59 -- dd/common.sh@98 -- # xtrace_disable 00:25:34.703 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 02:48:59 -- dd/posix.sh@20 -- # dump1=uyg00zpyxgc8jepx6f9ww9e5230vmuws 00:25:34.703 02:48:59 -- dd/posix.sh@22 -- # printf %s p7dxui9ijzb8ne321f8bin66f4caowso 00:25:34.703 02:48:59 -- dd/posix.sh@23 -- # printf %s uyg00zpyxgc8jepx6f9ww9e5230vmuws 00:25:34.703 02:48:59 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:34.963 [2024-07-11 02:48:59.826793] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:34.963 [2024-07-11 02:48:59.827049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147376 ] 00:25:34.963 [2024-07-11 02:48:59.977004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.222 [2024-07-11 02:49:00.065166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.480  Copying: 32/32 [B] (average 31 kBps) 00:25:35.480 00:25:35.480 02:49:00 -- dd/posix.sh@27 -- # [[ uyg00zpyxgc8jepx6f9ww9e5230vmuwsp7dxui9ijzb8ne321f8bin66f4caowso == \u\y\g\0\0\z\p\y\x\g\c\8\j\e\p\x\6\f\9\w\w\9\e\5\2\3\0\v\m\u\w\s\p\7\d\x\u\i\9\i\j\z\b\8\n\e\3\2\1\f\8\b\i\n\6\6\f\4\c\a\o\w\s\o ]] 00:25:35.480 00:25:35.480 real 0m0.746s 00:25:35.480 user 0m0.343s 00:25:35.480 sys 0m0.256s 00:25:35.480 02:49:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.480 02:49:00 -- common/autotest_common.sh@10 -- # set +x 00:25:35.480 ************************************ 00:25:35.480 END TEST dd_flag_append_forced_aio 00:25:35.480 ************************************ 00:25:35.480 02:49:00 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:25:35.480 02:49:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:35.480 02:49:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.480 02:49:00 -- common/autotest_common.sh@10 -- # set +x 00:25:35.480 ************************************ 00:25:35.480 START TEST dd_flag_directory_forced_aio 00:25:35.480 ************************************ 00:25:35.480 02:49:00 -- common/autotest_common.sh@1104 -- # directory 00:25:35.480 02:49:00 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.480 02:49:00 -- common/autotest_common.sh@640 -- # local es=0 00:25:35.480 02:49:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.480 02:49:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.480 02:49:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.480 02:49:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.480 02:49:00 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.480 02:49:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.738 02:49:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.738 02:49:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.738 02:49:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:35.738 02:49:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.738 [2024-07-11 02:49:00.620979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:35.738 [2024-07-11 02:49:00.621256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147418 ] 00:25:35.738 [2024-07-11 02:49:00.768292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.996 [2024-07-11 02:49:00.851566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.996 [2024-07-11 02:49:00.941341] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:35.996 [2024-07-11 02:49:00.941456] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:35.996 [2024-07-11 02:49:00.941485] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:35.996 [2024-07-11 02:49:01.086057] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:36.254 02:49:01 -- common/autotest_common.sh@643 -- # es=236 00:25:36.254 02:49:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:36.254 02:49:01 -- common/autotest_common.sh@652 -- # es=108 00:25:36.254 02:49:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:36.254 02:49:01 -- common/autotest_common.sh@660 -- # es=1 00:25:36.254 02:49:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:36.254 02:49:01 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.255 02:49:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:36.255 02:49:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.255 02:49:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.255 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.255 02:49:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.255 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.255 02:49:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.255 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.255 02:49:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:25:36.255 02:49:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:36.255 02:49:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.255 [2024-07-11 02:49:01.262176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:36.255 [2024-07-11 02:49:01.262736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147424 ] 00:25:36.512 [2024-07-11 02:49:01.409199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.512 [2024-07-11 02:49:01.484824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.512 [2024-07-11 02:49:01.574747] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:36.512 [2024-07-11 02:49:01.575116] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:36.512 [2024-07-11 02:49:01.575179] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:36.770 [2024-07-11 02:49:01.706071] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:36.770 ************************************ 00:25:36.770 END TEST dd_flag_directory_forced_aio 00:25:36.770 ************************************ 00:25:36.770 02:49:01 -- common/autotest_common.sh@643 -- # es=236 00:25:36.770 02:49:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:36.770 02:49:01 -- common/autotest_common.sh@652 -- # es=108 00:25:36.770 02:49:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:36.770 02:49:01 -- common/autotest_common.sh@660 -- # es=1 00:25:36.770 02:49:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:36.770 00:25:36.770 real 0m1.275s 00:25:36.770 user 0m0.687s 00:25:36.770 sys 0m0.384s 00:25:36.770 02:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.770 02:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:37.029 02:49:01 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:25:37.029 02:49:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:37.029 02:49:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:37.029 02:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:37.029 ************************************ 00:25:37.029 START TEST dd_flag_nofollow_forced_aio 00:25:37.029 ************************************ 00:25:37.029 02:49:01 -- common/autotest_common.sh@1104 -- # nofollow 00:25:37.029 02:49:01 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:37.029 02:49:01 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:37.029 02:49:01 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:37.029 02:49:01 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:37.029 02:49:01 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:37.029 02:49:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:37.029 02:49:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:37.029 02:49:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.029 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.029 02:49:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.029 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.029 02:49:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.029 02:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.029 02:49:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.029 02:49:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:37.029 02:49:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:37.029 [2024-07-11 02:49:01.951338] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:37.029 [2024-07-11 02:49:01.951845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147465 ] 00:25:37.029 [2024-07-11 02:49:02.097557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.287 [2024-07-11 02:49:02.184121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.287 [2024-07-11 02:49:02.272155] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:37.287 [2024-07-11 02:49:02.272533] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:37.287 [2024-07-11 02:49:02.272599] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:37.546 [2024-07-11 02:49:02.410230] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:37.546 02:49:02 -- common/autotest_common.sh@643 -- # es=216 00:25:37.546 02:49:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:37.546 02:49:02 -- common/autotest_common.sh@652 -- # es=88 00:25:37.546 02:49:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:37.546 02:49:02 -- common/autotest_common.sh@660 -- # es=1 00:25:37.546 02:49:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:37.546 02:49:02 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:37.546 02:49:02 -- common/autotest_common.sh@640 -- # local es=0 00:25:37.546 02:49:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:37.546 02:49:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.546 02:49:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.546 02:49:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.546 02:49:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.546 02:49:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.546 02:49:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:37.546 02:49:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.546 02:49:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:37.546 02:49:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:37.546 [2024-07-11 02:49:02.588580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:37.546 [2024-07-11 02:49:02.589021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147493 ] 00:25:37.804 [2024-07-11 02:49:02.735866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.804 [2024-07-11 02:49:02.796997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.804 [2024-07-11 02:49:02.884396] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:37.804 [2024-07-11 02:49:02.884754] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:37.804 [2024-07-11 02:49:02.884890] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:38.062 [2024-07-11 02:49:03.002560] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:38.062 02:49:03 -- common/autotest_common.sh@643 -- # es=216 00:25:38.062 02:49:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:38.062 02:49:03 -- common/autotest_common.sh@652 -- # es=88 00:25:38.062 02:49:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:38.062 02:49:03 -- common/autotest_common.sh@660 -- # es=1 00:25:38.062 02:49:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:38.062 02:49:03 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:38.062 02:49:03 -- dd/common.sh@98 -- # xtrace_disable 00:25:38.062 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 02:49:03 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.321 [2024-07-11 02:49:03.176221] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:38.321 [2024-07-11 02:49:03.176824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147505 ] 00:25:38.321 [2024-07-11 02:49:03.325674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.321 [2024-07-11 02:49:03.383798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.848  Copying: 512/512 [B] (average 500 kBps) 00:25:38.848 00:25:38.848 ************************************ 00:25:38.848 END TEST dd_flag_nofollow_forced_aio 00:25:38.848 ************************************ 00:25:38.848 02:49:03 -- dd/posix.sh@49 -- # [[ uguk2hufloyucd1jhtv39ymsvp1qg4ybbb69larxnowzsdss9bgtp3t581b93b0pxh6gdeni0vbs5cf1g5626q6jez796xv1s7uj23gfcdywvxcp5u3v6e3d71iprij1d55zyniul6ajcqhosx59jrupcsmtf49nlmt3li8t5jed1nnu2ldp2fndqy6u4bozykwnvv8k9pipgknnsb28l06w25gtotvf9w7smehqg78l4ctj6vqmgkfgwvye9ost7ehd9wlep5a8kcfhio39iecfk7fmlisfyvg66ri2miiwkigzdf6kwdav9fjw9ghsbxasyxbiv1oddd3x5g1hld2h9rupexi98t97hagsm7rnbvlvnidozd7czlsn10ru4azccmtfggtu04wiv902affdhm1x179yk74n8anlnalj3orxg1q22xtot7fxudvaj708g35y019yd9zk2h2ak3lhk96u0jvumz2q0h203oezm93424q7br8xhw7p6s8z == \u\g\u\k\2\h\u\f\l\o\y\u\c\d\1\j\h\t\v\3\9\y\m\s\v\p\1\q\g\4\y\b\b\b\6\9\l\a\r\x\n\o\w\z\s\d\s\s\9\b\g\t\p\3\t\5\8\1\b\9\3\b\0\p\x\h\6\g\d\e\n\i\0\v\b\s\5\c\f\1\g\5\6\2\6\q\6\j\e\z\7\9\6\x\v\1\s\7\u\j\2\3\g\f\c\d\y\w\v\x\c\p\5\u\3\v\6\e\3\d\7\1\i\p\r\i\j\1\d\5\5\z\y\n\i\u\l\6\a\j\c\q\h\o\s\x\5\9\j\r\u\p\c\s\m\t\f\4\9\n\l\m\t\3\l\i\8\t\5\j\e\d\1\n\n\u\2\l\d\p\2\f\n\d\q\y\6\u\4\b\o\z\y\k\w\n\v\v\8\k\9\p\i\p\g\k\n\n\s\b\2\8\l\0\6\w\2\5\g\t\o\t\v\f\9\w\7\s\m\e\h\q\g\7\8\l\4\c\t\j\6\v\q\m\g\k\f\g\w\v\y\e\9\o\s\t\7\e\h\d\9\w\l\e\p\5\a\8\k\c\f\h\i\o\3\9\i\e\c\f\k\7\f\m\l\i\s\f\y\v\g\6\6\r\i\2\m\i\i\w\k\i\g\z\d\f\6\k\w\d\a\v\9\f\j\w\9\g\h\s\b\x\a\s\y\x\b\i\v\1\o\d\d\d\3\x\5\g\1\h\l\d\2\h\9\r\u\p\e\x\i\9\8\t\9\7\h\a\g\s\m\7\r\n\b\v\l\v\n\i\d\o\z\d\7\c\z\l\s\n\1\0\r\u\4\a\z\c\c\m\t\f\g\g\t\u\0\4\w\i\v\9\0\2\a\f\f\d\h\m\1\x\1\7\9\y\k\7\4\n\8\a\n\l\n\a\l\j\3\o\r\x\g\1\q\2\2\x\t\o\t\7\f\x\u\d\v\a\j\7\0\8\g\3\5\y\0\1\9\y\d\9\z\k\2\h\2\a\k\3\l\h\k\9\6\u\0\j\v\u\m\z\2\q\0\h\2\0\3\o\e\z\m\9\3\4\2\4\q\7\b\r\8\x\h\w\7\p\6\s\8\z ]] 00:25:38.848 00:25:38.848 real 0m1.872s 00:25:38.848 user 0m0.982s 00:25:38.848 sys 0m0.554s 00:25:38.848 02:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.848 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:38.848 02:49:03 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:25:38.848 02:49:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.848 02:49:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.848 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:38.848 ************************************ 00:25:38.848 START TEST dd_flag_noatime_forced_aio 00:25:38.848 ************************************ 00:25:38.848 02:49:03 -- common/autotest_common.sh@1104 -- # noatime 00:25:38.848 02:49:03 -- dd/posix.sh@53 -- # local atime_if 00:25:38.848 02:49:03 -- dd/posix.sh@54 -- # local atime_of 00:25:38.848 02:49:03 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:38.848 02:49:03 -- dd/common.sh@98 -- # xtrace_disable 00:25:38.848 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:38.848 02:49:03 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:38.848 02:49:03 -- dd/posix.sh@60 -- # atime_if=1720666143 
00:25:38.848 02:49:03 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.848 02:49:03 -- dd/posix.sh@61 -- # atime_of=1720666143 00:25:38.848 02:49:03 -- dd/posix.sh@66 -- # sleep 1 00:25:39.782 02:49:04 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:40.040 [2024-07-11 02:49:04.897753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:40.040 [2024-07-11 02:49:04.898814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147558 ] 00:25:40.040 [2024-07-11 02:49:05.049518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.040 [2024-07-11 02:49:05.121358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.555  Copying: 512/512 [B] (average 500 kBps) 00:25:40.555 00:25:40.555 02:49:05 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:40.555 02:49:05 -- dd/posix.sh@69 -- # (( atime_if == 1720666143 )) 00:25:40.555 02:49:05 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:40.555 02:49:05 -- dd/posix.sh@70 -- # (( atime_of == 1720666143 )) 00:25:40.555 02:49:05 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:40.555 [2024-07-11 02:49:05.520085] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:40.555 [2024-07-11 02:49:05.520483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147576 ] 00:25:40.812 [2024-07-11 02:49:05.655737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.813 [2024-07-11 02:49:05.712663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.070  Copying: 512/512 [B] (average 500 kBps) 00:25:41.070 00:25:41.070 02:49:06 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:41.070 02:49:06 -- dd/posix.sh@73 -- # (( atime_if < 1720666145 )) 00:25:41.070 00:25:41.070 real 0m2.254s 00:25:41.070 user 0m0.625s 00:25:41.070 ************************************ 00:25:41.070 END TEST dd_flag_noatime_forced_aio 00:25:41.070 ************************************ 00:25:41.070 sys 0m0.354s 00:25:41.070 02:49:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.070 02:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 02:49:06 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:25:41.070 02:49:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:41.070 02:49:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.070 02:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 ************************************ 00:25:41.070 START TEST dd_flags_misc_forced_aio 00:25:41.070 ************************************ 00:25:41.070 02:49:06 -- common/autotest_common.sh@1104 -- # io 00:25:41.070 02:49:06 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:41.070 02:49:06 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:25:41.070 02:49:06 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:41.070 02:49:06 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:41.070 02:49:06 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:41.070 02:49:06 -- dd/common.sh@98 -- # xtrace_disable 00:25:41.070 02:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 02:49:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:41.070 02:49:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:41.327 [2024-07-11 02:49:06.188761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:41.327 [2024-07-11 02:49:06.189140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147601 ] 00:25:41.327 [2024-07-11 02:49:06.334531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.327 [2024-07-11 02:49:06.387429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.843  Copying: 512/512 [B] (average 500 kBps) 00:25:41.843 00:25:41.843 02:49:06 -- dd/posix.sh@93 -- # [[ 5svyph3ru5oofwjtpskkqkrvmc6su0p24fv8r85rgrdh6yxa44a5bcayt7llz5tp3xfbizc1o3fmisg4jgg81ub4vgkjego4nonyg9lbyzps66wkvw4hlbbv0lxj1cqsiu706cvf9hh9qrlfp1o7meruh39nhrxch5xihx4l4fg7oow33rb0vpqw20hxdku4qd4ydk77rf4scvx2p6jbwv9mbutexbl95z1b4gni0e3akunyu9m10an44o39op9xm9t46peamw466q88ijkkkq25ap1b266oacqv76majfin2ryknstqvis4hdvztpjfhez0jfbwewjkam8sp94y8e8nu5k8ro76z6vg21xze5qeznkocaht42i44bgbwi49f3nnu3whtkj0g88pcs712dy8u20p76m30ujjiqpljyfbg1pwvb0mu2fi3se7cu30twak96rwm4kmj5nq9lyfo97iphelucscj47238wb7h8vy1i1ochj35rmwp3lloit == \5\s\v\y\p\h\3\r\u\5\o\o\f\w\j\t\p\s\k\k\q\k\r\v\m\c\6\s\u\0\p\2\4\f\v\8\r\8\5\r\g\r\d\h\6\y\x\a\4\4\a\5\b\c\a\y\t\7\l\l\z\5\t\p\3\x\f\b\i\z\c\1\o\3\f\m\i\s\g\4\j\g\g\8\1\u\b\4\v\g\k\j\e\g\o\4\n\o\n\y\g\9\l\b\y\z\p\s\6\6\w\k\v\w\4\h\l\b\b\v\0\l\x\j\1\c\q\s\i\u\7\0\6\c\v\f\9\h\h\9\q\r\l\f\p\1\o\7\m\e\r\u\h\3\9\n\h\r\x\c\h\5\x\i\h\x\4\l\4\f\g\7\o\o\w\3\3\r\b\0\v\p\q\w\2\0\h\x\d\k\u\4\q\d\4\y\d\k\7\7\r\f\4\s\c\v\x\2\p\6\j\b\w\v\9\m\b\u\t\e\x\b\l\9\5\z\1\b\4\g\n\i\0\e\3\a\k\u\n\y\u\9\m\1\0\a\n\4\4\o\3\9\o\p\9\x\m\9\t\4\6\p\e\a\m\w\4\6\6\q\8\8\i\j\k\k\k\q\2\5\a\p\1\b\2\6\6\o\a\c\q\v\7\6\m\a\j\f\i\n\2\r\y\k\n\s\t\q\v\i\s\4\h\d\v\z\t\p\j\f\h\e\z\0\j\f\b\w\e\w\j\k\a\m\8\s\p\9\4\y\8\e\8\n\u\5\k\8\r\o\7\6\z\6\v\g\2\1\x\z\e\5\q\e\z\n\k\o\c\a\h\t\4\2\i\4\4\b\g\b\w\i\4\9\f\3\n\n\u\3\w\h\t\k\j\0\g\8\8\p\c\s\7\1\2\d\y\8\u\2\0\p\7\6\m\3\0\u\j\j\i\q\p\l\j\y\f\b\g\1\p\w\v\b\0\m\u\2\f\i\3\s\e\7\c\u\3\0\t\w\a\k\9\6\r\w\m\4\k\m\j\5\n\q\9\l\y\f\o\9\7\i\p\h\e\l\u\c\s\c\j\4\7\2\3\8\w\b\7\h\8\v\y\1\i\1\o\c\h\j\3\5\r\m\w\p\3\l\l\o\i\t ]] 00:25:41.843 02:49:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:41.843 02:49:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:41.843 [2024-07-11 02:49:06.785337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:41.843 [2024-07-11 02:49:06.785760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147621 ] 00:25:41.843 [2024-07-11 02:49:06.925167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.100 [2024-07-11 02:49:06.987810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.358  Copying: 512/512 [B] (average 500 kBps) 00:25:42.358 00:25:42.358 02:49:07 -- dd/posix.sh@93 -- # [[ 5svyph3ru5oofwjtpskkqkrvmc6su0p24fv8r85rgrdh6yxa44a5bcayt7llz5tp3xfbizc1o3fmisg4jgg81ub4vgkjego4nonyg9lbyzps66wkvw4hlbbv0lxj1cqsiu706cvf9hh9qrlfp1o7meruh39nhrxch5xihx4l4fg7oow33rb0vpqw20hxdku4qd4ydk77rf4scvx2p6jbwv9mbutexbl95z1b4gni0e3akunyu9m10an44o39op9xm9t46peamw466q88ijkkkq25ap1b266oacqv76majfin2ryknstqvis4hdvztpjfhez0jfbwewjkam8sp94y8e8nu5k8ro76z6vg21xze5qeznkocaht42i44bgbwi49f3nnu3whtkj0g88pcs712dy8u20p76m30ujjiqpljyfbg1pwvb0mu2fi3se7cu30twak96rwm4kmj5nq9lyfo97iphelucscj47238wb7h8vy1i1ochj35rmwp3lloit == \5\s\v\y\p\h\3\r\u\5\o\o\f\w\j\t\p\s\k\k\q\k\r\v\m\c\6\s\u\0\p\2\4\f\v\8\r\8\5\r\g\r\d\h\6\y\x\a\4\4\a\5\b\c\a\y\t\7\l\l\z\5\t\p\3\x\f\b\i\z\c\1\o\3\f\m\i\s\g\4\j\g\g\8\1\u\b\4\v\g\k\j\e\g\o\4\n\o\n\y\g\9\l\b\y\z\p\s\6\6\w\k\v\w\4\h\l\b\b\v\0\l\x\j\1\c\q\s\i\u\7\0\6\c\v\f\9\h\h\9\q\r\l\f\p\1\o\7\m\e\r\u\h\3\9\n\h\r\x\c\h\5\x\i\h\x\4\l\4\f\g\7\o\o\w\3\3\r\b\0\v\p\q\w\2\0\h\x\d\k\u\4\q\d\4\y\d\k\7\7\r\f\4\s\c\v\x\2\p\6\j\b\w\v\9\m\b\u\t\e\x\b\l\9\5\z\1\b\4\g\n\i\0\e\3\a\k\u\n\y\u\9\m\1\0\a\n\4\4\o\3\9\o\p\9\x\m\9\t\4\6\p\e\a\m\w\4\6\6\q\8\8\i\j\k\k\k\q\2\5\a\p\1\b\2\6\6\o\a\c\q\v\7\6\m\a\j\f\i\n\2\r\y\k\n\s\t\q\v\i\s\4\h\d\v\z\t\p\j\f\h\e\z\0\j\f\b\w\e\w\j\k\a\m\8\s\p\9\4\y\8\e\8\n\u\5\k\8\r\o\7\6\z\6\v\g\2\1\x\z\e\5\q\e\z\n\k\o\c\a\h\t\4\2\i\4\4\b\g\b\w\i\4\9\f\3\n\n\u\3\w\h\t\k\j\0\g\8\8\p\c\s\7\1\2\d\y\8\u\2\0\p\7\6\m\3\0\u\j\j\i\q\p\l\j\y\f\b\g\1\p\w\v\b\0\m\u\2\f\i\3\s\e\7\c\u\3\0\t\w\a\k\9\6\r\w\m\4\k\m\j\5\n\q\9\l\y\f\o\9\7\i\p\h\e\l\u\c\s\c\j\4\7\2\3\8\w\b\7\h\8\v\y\1\i\1\o\c\h\j\3\5\r\m\w\p\3\l\l\o\i\t ]] 00:25:42.358 02:49:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:42.358 02:49:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:42.358 [2024-07-11 02:49:07.395660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:42.358 [2024-07-11 02:49:07.396083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147627 ] 00:25:42.614 [2024-07-11 02:49:07.541685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.614 [2024-07-11 02:49:07.613599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.179  Copying: 512/512 [B] (average 250 kBps) 00:25:43.179 00:25:43.179 02:49:07 -- dd/posix.sh@93 -- # [[ 5svyph3ru5oofwjtpskkqkrvmc6su0p24fv8r85rgrdh6yxa44a5bcayt7llz5tp3xfbizc1o3fmisg4jgg81ub4vgkjego4nonyg9lbyzps66wkvw4hlbbv0lxj1cqsiu706cvf9hh9qrlfp1o7meruh39nhrxch5xihx4l4fg7oow33rb0vpqw20hxdku4qd4ydk77rf4scvx2p6jbwv9mbutexbl95z1b4gni0e3akunyu9m10an44o39op9xm9t46peamw466q88ijkkkq25ap1b266oacqv76majfin2ryknstqvis4hdvztpjfhez0jfbwewjkam8sp94y8e8nu5k8ro76z6vg21xze5qeznkocaht42i44bgbwi49f3nnu3whtkj0g88pcs712dy8u20p76m30ujjiqpljyfbg1pwvb0mu2fi3se7cu30twak96rwm4kmj5nq9lyfo97iphelucscj47238wb7h8vy1i1ochj35rmwp3lloit == \5\s\v\y\p\h\3\r\u\5\o\o\f\w\j\t\p\s\k\k\q\k\r\v\m\c\6\s\u\0\p\2\4\f\v\8\r\8\5\r\g\r\d\h\6\y\x\a\4\4\a\5\b\c\a\y\t\7\l\l\z\5\t\p\3\x\f\b\i\z\c\1\o\3\f\m\i\s\g\4\j\g\g\8\1\u\b\4\v\g\k\j\e\g\o\4\n\o\n\y\g\9\l\b\y\z\p\s\6\6\w\k\v\w\4\h\l\b\b\v\0\l\x\j\1\c\q\s\i\u\7\0\6\c\v\f\9\h\h\9\q\r\l\f\p\1\o\7\m\e\r\u\h\3\9\n\h\r\x\c\h\5\x\i\h\x\4\l\4\f\g\7\o\o\w\3\3\r\b\0\v\p\q\w\2\0\h\x\d\k\u\4\q\d\4\y\d\k\7\7\r\f\4\s\c\v\x\2\p\6\j\b\w\v\9\m\b\u\t\e\x\b\l\9\5\z\1\b\4\g\n\i\0\e\3\a\k\u\n\y\u\9\m\1\0\a\n\4\4\o\3\9\o\p\9\x\m\9\t\4\6\p\e\a\m\w\4\6\6\q\8\8\i\j\k\k\k\q\2\5\a\p\1\b\2\6\6\o\a\c\q\v\7\6\m\a\j\f\i\n\2\r\y\k\n\s\t\q\v\i\s\4\h\d\v\z\t\p\j\f\h\e\z\0\j\f\b\w\e\w\j\k\a\m\8\s\p\9\4\y\8\e\8\n\u\5\k\8\r\o\7\6\z\6\v\g\2\1\x\z\e\5\q\e\z\n\k\o\c\a\h\t\4\2\i\4\4\b\g\b\w\i\4\9\f\3\n\n\u\3\w\h\t\k\j\0\g\8\8\p\c\s\7\1\2\d\y\8\u\2\0\p\7\6\m\3\0\u\j\j\i\q\p\l\j\y\f\b\g\1\p\w\v\b\0\m\u\2\f\i\3\s\e\7\c\u\3\0\t\w\a\k\9\6\r\w\m\4\k\m\j\5\n\q\9\l\y\f\o\9\7\i\p\h\e\l\u\c\s\c\j\4\7\2\3\8\w\b\7\h\8\v\y\1\i\1\o\c\h\j\3\5\r\m\w\p\3\l\l\o\i\t ]] 00:25:43.179 02:49:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:43.179 02:49:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:43.179 [2024-07-11 02:49:08.041030] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:43.179 [2024-07-11 02:49:08.041431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147643 ] 00:25:43.179 [2024-07-11 02:49:08.188782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.179 [2024-07-11 02:49:08.250314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.697  Copying: 512/512 [B] (average 166 kBps) 00:25:43.697 00:25:43.697 02:49:08 -- dd/posix.sh@93 -- # [[ 5svyph3ru5oofwjtpskkqkrvmc6su0p24fv8r85rgrdh6yxa44a5bcayt7llz5tp3xfbizc1o3fmisg4jgg81ub4vgkjego4nonyg9lbyzps66wkvw4hlbbv0lxj1cqsiu706cvf9hh9qrlfp1o7meruh39nhrxch5xihx4l4fg7oow33rb0vpqw20hxdku4qd4ydk77rf4scvx2p6jbwv9mbutexbl95z1b4gni0e3akunyu9m10an44o39op9xm9t46peamw466q88ijkkkq25ap1b266oacqv76majfin2ryknstqvis4hdvztpjfhez0jfbwewjkam8sp94y8e8nu5k8ro76z6vg21xze5qeznkocaht42i44bgbwi49f3nnu3whtkj0g88pcs712dy8u20p76m30ujjiqpljyfbg1pwvb0mu2fi3se7cu30twak96rwm4kmj5nq9lyfo97iphelucscj47238wb7h8vy1i1ochj35rmwp3lloit == \5\s\v\y\p\h\3\r\u\5\o\o\f\w\j\t\p\s\k\k\q\k\r\v\m\c\6\s\u\0\p\2\4\f\v\8\r\8\5\r\g\r\d\h\6\y\x\a\4\4\a\5\b\c\a\y\t\7\l\l\z\5\t\p\3\x\f\b\i\z\c\1\o\3\f\m\i\s\g\4\j\g\g\8\1\u\b\4\v\g\k\j\e\g\o\4\n\o\n\y\g\9\l\b\y\z\p\s\6\6\w\k\v\w\4\h\l\b\b\v\0\l\x\j\1\c\q\s\i\u\7\0\6\c\v\f\9\h\h\9\q\r\l\f\p\1\o\7\m\e\r\u\h\3\9\n\h\r\x\c\h\5\x\i\h\x\4\l\4\f\g\7\o\o\w\3\3\r\b\0\v\p\q\w\2\0\h\x\d\k\u\4\q\d\4\y\d\k\7\7\r\f\4\s\c\v\x\2\p\6\j\b\w\v\9\m\b\u\t\e\x\b\l\9\5\z\1\b\4\g\n\i\0\e\3\a\k\u\n\y\u\9\m\1\0\a\n\4\4\o\3\9\o\p\9\x\m\9\t\4\6\p\e\a\m\w\4\6\6\q\8\8\i\j\k\k\k\q\2\5\a\p\1\b\2\6\6\o\a\c\q\v\7\6\m\a\j\f\i\n\2\r\y\k\n\s\t\q\v\i\s\4\h\d\v\z\t\p\j\f\h\e\z\0\j\f\b\w\e\w\j\k\a\m\8\s\p\9\4\y\8\e\8\n\u\5\k\8\r\o\7\6\z\6\v\g\2\1\x\z\e\5\q\e\z\n\k\o\c\a\h\t\4\2\i\4\4\b\g\b\w\i\4\9\f\3\n\n\u\3\w\h\t\k\j\0\g\8\8\p\c\s\7\1\2\d\y\8\u\2\0\p\7\6\m\3\0\u\j\j\i\q\p\l\j\y\f\b\g\1\p\w\v\b\0\m\u\2\f\i\3\s\e\7\c\u\3\0\t\w\a\k\9\6\r\w\m\4\k\m\j\5\n\q\9\l\y\f\o\9\7\i\p\h\e\l\u\c\s\c\j\4\7\2\3\8\w\b\7\h\8\v\y\1\i\1\o\c\h\j\3\5\r\m\w\p\3\l\l\o\i\t ]] 00:25:43.697 02:49:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:43.697 02:49:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:43.697 02:49:08 -- dd/common.sh@98 -- # xtrace_disable 00:25:43.697 02:49:08 -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 02:49:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:43.697 02:49:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:43.697 [2024-07-11 02:49:08.692724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:43.697 [2024-07-11 02:49:08.693699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147656 ] 00:25:43.955 [2024-07-11 02:49:08.838238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.955 [2024-07-11 02:49:08.896364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.214  Copying: 512/512 [B] (average 500 kBps) 00:25:44.214 00:25:44.214 02:49:09 -- dd/posix.sh@93 -- # [[ bzujufola8cdv2eeveustg0lkuc14bg5g24prpxbs9go7lcijf9ngkqwnigns47eqqxkzvcsp2umow4us5a3949x8gua2anrymbv91b6uymckb5q1qgnrb1wetatw4kqqkxy842mb2eejyvu2sn6morle4oq9idrwwrnc41xvge2dfz6uioms2kr7bi8tukz95m30fzxyyclxmqn41ehc6rm7hy13qyipt7su4vizwnk64ck12mkvewxlr7byzmbf2dtpntdbtx34pib9k1rn9mueejncyc1jfn89phqns53nrpri3o2s8zk9x2hd7kp66leuz2envxlymtifsw426mjth0byhzd781wtnmz4d0yj4rs4or94p5gsz17lc6wwwdhp6jcehkrmda7mn6a4kyf0jmde114xldvg2hm7cidpsneu9cn6fzvehs807wlegwcxrvaonp34op3tsx2653y29mpxi8bb3hm1whpqnzg2b53fzmq918lgkkpp7a8 == \b\z\u\j\u\f\o\l\a\8\c\d\v\2\e\e\v\e\u\s\t\g\0\l\k\u\c\1\4\b\g\5\g\2\4\p\r\p\x\b\s\9\g\o\7\l\c\i\j\f\9\n\g\k\q\w\n\i\g\n\s\4\7\e\q\q\x\k\z\v\c\s\p\2\u\m\o\w\4\u\s\5\a\3\9\4\9\x\8\g\u\a\2\a\n\r\y\m\b\v\9\1\b\6\u\y\m\c\k\b\5\q\1\q\g\n\r\b\1\w\e\t\a\t\w\4\k\q\q\k\x\y\8\4\2\m\b\2\e\e\j\y\v\u\2\s\n\6\m\o\r\l\e\4\o\q\9\i\d\r\w\w\r\n\c\4\1\x\v\g\e\2\d\f\z\6\u\i\o\m\s\2\k\r\7\b\i\8\t\u\k\z\9\5\m\3\0\f\z\x\y\y\c\l\x\m\q\n\4\1\e\h\c\6\r\m\7\h\y\1\3\q\y\i\p\t\7\s\u\4\v\i\z\w\n\k\6\4\c\k\1\2\m\k\v\e\w\x\l\r\7\b\y\z\m\b\f\2\d\t\p\n\t\d\b\t\x\3\4\p\i\b\9\k\1\r\n\9\m\u\e\e\j\n\c\y\c\1\j\f\n\8\9\p\h\q\n\s\5\3\n\r\p\r\i\3\o\2\s\8\z\k\9\x\2\h\d\7\k\p\6\6\l\e\u\z\2\e\n\v\x\l\y\m\t\i\f\s\w\4\2\6\m\j\t\h\0\b\y\h\z\d\7\8\1\w\t\n\m\z\4\d\0\y\j\4\r\s\4\o\r\9\4\p\5\g\s\z\1\7\l\c\6\w\w\w\d\h\p\6\j\c\e\h\k\r\m\d\a\7\m\n\6\a\4\k\y\f\0\j\m\d\e\1\1\4\x\l\d\v\g\2\h\m\7\c\i\d\p\s\n\e\u\9\c\n\6\f\z\v\e\h\s\8\0\7\w\l\e\g\w\c\x\r\v\a\o\n\p\3\4\o\p\3\t\s\x\2\6\5\3\y\2\9\m\p\x\i\8\b\b\3\h\m\1\w\h\p\q\n\z\g\2\b\5\3\f\z\m\q\9\1\8\l\g\k\k\p\p\7\a\8 ]] 00:25:44.214 02:49:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:44.214 02:49:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:44.472 [2024-07-11 02:49:09.313428] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:44.472 [2024-07-11 02:49:09.313849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147665 ] 00:25:44.472 [2024-07-11 02:49:09.461184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.472 [2024-07-11 02:49:09.517416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.990  Copying: 512/512 [B] (average 500 kBps) 00:25:44.990 00:25:44.990 02:49:09 -- dd/posix.sh@93 -- # [[ bzujufola8cdv2eeveustg0lkuc14bg5g24prpxbs9go7lcijf9ngkqwnigns47eqqxkzvcsp2umow4us5a3949x8gua2anrymbv91b6uymckb5q1qgnrb1wetatw4kqqkxy842mb2eejyvu2sn6morle4oq9idrwwrnc41xvge2dfz6uioms2kr7bi8tukz95m30fzxyyclxmqn41ehc6rm7hy13qyipt7su4vizwnk64ck12mkvewxlr7byzmbf2dtpntdbtx34pib9k1rn9mueejncyc1jfn89phqns53nrpri3o2s8zk9x2hd7kp66leuz2envxlymtifsw426mjth0byhzd781wtnmz4d0yj4rs4or94p5gsz17lc6wwwdhp6jcehkrmda7mn6a4kyf0jmde114xldvg2hm7cidpsneu9cn6fzvehs807wlegwcxrvaonp34op3tsx2653y29mpxi8bb3hm1whpqnzg2b53fzmq918lgkkpp7a8 == \b\z\u\j\u\f\o\l\a\8\c\d\v\2\e\e\v\e\u\s\t\g\0\l\k\u\c\1\4\b\g\5\g\2\4\p\r\p\x\b\s\9\g\o\7\l\c\i\j\f\9\n\g\k\q\w\n\i\g\n\s\4\7\e\q\q\x\k\z\v\c\s\p\2\u\m\o\w\4\u\s\5\a\3\9\4\9\x\8\g\u\a\2\a\n\r\y\m\b\v\9\1\b\6\u\y\m\c\k\b\5\q\1\q\g\n\r\b\1\w\e\t\a\t\w\4\k\q\q\k\x\y\8\4\2\m\b\2\e\e\j\y\v\u\2\s\n\6\m\o\r\l\e\4\o\q\9\i\d\r\w\w\r\n\c\4\1\x\v\g\e\2\d\f\z\6\u\i\o\m\s\2\k\r\7\b\i\8\t\u\k\z\9\5\m\3\0\f\z\x\y\y\c\l\x\m\q\n\4\1\e\h\c\6\r\m\7\h\y\1\3\q\y\i\p\t\7\s\u\4\v\i\z\w\n\k\6\4\c\k\1\2\m\k\v\e\w\x\l\r\7\b\y\z\m\b\f\2\d\t\p\n\t\d\b\t\x\3\4\p\i\b\9\k\1\r\n\9\m\u\e\e\j\n\c\y\c\1\j\f\n\8\9\p\h\q\n\s\5\3\n\r\p\r\i\3\o\2\s\8\z\k\9\x\2\h\d\7\k\p\6\6\l\e\u\z\2\e\n\v\x\l\y\m\t\i\f\s\w\4\2\6\m\j\t\h\0\b\y\h\z\d\7\8\1\w\t\n\m\z\4\d\0\y\j\4\r\s\4\o\r\9\4\p\5\g\s\z\1\7\l\c\6\w\w\w\d\h\p\6\j\c\e\h\k\r\m\d\a\7\m\n\6\a\4\k\y\f\0\j\m\d\e\1\1\4\x\l\d\v\g\2\h\m\7\c\i\d\p\s\n\e\u\9\c\n\6\f\z\v\e\h\s\8\0\7\w\l\e\g\w\c\x\r\v\a\o\n\p\3\4\o\p\3\t\s\x\2\6\5\3\y\2\9\m\p\x\i\8\b\b\3\h\m\1\w\h\p\q\n\z\g\2\b\5\3\f\z\m\q\9\1\8\l\g\k\k\p\p\7\a\8 ]] 00:25:44.990 02:49:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:44.990 02:49:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:44.990 [2024-07-11 02:49:09.932263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:44.990 [2024-07-11 02:49:09.932656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147678 ] 00:25:44.990 [2024-07-11 02:49:10.079161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.249 [2024-07-11 02:49:10.158706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.508  Copying: 512/512 [B] (average 250 kBps) 00:25:45.508 00:25:45.508 02:49:10 -- dd/posix.sh@93 -- # [[ bzujufola8cdv2eeveustg0lkuc14bg5g24prpxbs9go7lcijf9ngkqwnigns47eqqxkzvcsp2umow4us5a3949x8gua2anrymbv91b6uymckb5q1qgnrb1wetatw4kqqkxy842mb2eejyvu2sn6morle4oq9idrwwrnc41xvge2dfz6uioms2kr7bi8tukz95m30fzxyyclxmqn41ehc6rm7hy13qyipt7su4vizwnk64ck12mkvewxlr7byzmbf2dtpntdbtx34pib9k1rn9mueejncyc1jfn89phqns53nrpri3o2s8zk9x2hd7kp66leuz2envxlymtifsw426mjth0byhzd781wtnmz4d0yj4rs4or94p5gsz17lc6wwwdhp6jcehkrmda7mn6a4kyf0jmde114xldvg2hm7cidpsneu9cn6fzvehs807wlegwcxrvaonp34op3tsx2653y29mpxi8bb3hm1whpqnzg2b53fzmq918lgkkpp7a8 == \b\z\u\j\u\f\o\l\a\8\c\d\v\2\e\e\v\e\u\s\t\g\0\l\k\u\c\1\4\b\g\5\g\2\4\p\r\p\x\b\s\9\g\o\7\l\c\i\j\f\9\n\g\k\q\w\n\i\g\n\s\4\7\e\q\q\x\k\z\v\c\s\p\2\u\m\o\w\4\u\s\5\a\3\9\4\9\x\8\g\u\a\2\a\n\r\y\m\b\v\9\1\b\6\u\y\m\c\k\b\5\q\1\q\g\n\r\b\1\w\e\t\a\t\w\4\k\q\q\k\x\y\8\4\2\m\b\2\e\e\j\y\v\u\2\s\n\6\m\o\r\l\e\4\o\q\9\i\d\r\w\w\r\n\c\4\1\x\v\g\e\2\d\f\z\6\u\i\o\m\s\2\k\r\7\b\i\8\t\u\k\z\9\5\m\3\0\f\z\x\y\y\c\l\x\m\q\n\4\1\e\h\c\6\r\m\7\h\y\1\3\q\y\i\p\t\7\s\u\4\v\i\z\w\n\k\6\4\c\k\1\2\m\k\v\e\w\x\l\r\7\b\y\z\m\b\f\2\d\t\p\n\t\d\b\t\x\3\4\p\i\b\9\k\1\r\n\9\m\u\e\e\j\n\c\y\c\1\j\f\n\8\9\p\h\q\n\s\5\3\n\r\p\r\i\3\o\2\s\8\z\k\9\x\2\h\d\7\k\p\6\6\l\e\u\z\2\e\n\v\x\l\y\m\t\i\f\s\w\4\2\6\m\j\t\h\0\b\y\h\z\d\7\8\1\w\t\n\m\z\4\d\0\y\j\4\r\s\4\o\r\9\4\p\5\g\s\z\1\7\l\c\6\w\w\w\d\h\p\6\j\c\e\h\k\r\m\d\a\7\m\n\6\a\4\k\y\f\0\j\m\d\e\1\1\4\x\l\d\v\g\2\h\m\7\c\i\d\p\s\n\e\u\9\c\n\6\f\z\v\e\h\s\8\0\7\w\l\e\g\w\c\x\r\v\a\o\n\p\3\4\o\p\3\t\s\x\2\6\5\3\y\2\9\m\p\x\i\8\b\b\3\h\m\1\w\h\p\q\n\z\g\2\b\5\3\f\z\m\q\9\1\8\l\g\k\k\p\p\7\a\8 ]] 00:25:45.508 02:49:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:45.508 02:49:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:45.508 [2024-07-11 02:49:10.591683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:45.508 [2024-07-11 02:49:10.592083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147687 ] 00:25:45.766 [2024-07-11 02:49:10.737695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.767 [2024-07-11 02:49:10.809432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.284  Copying: 512/512 [B] (average 166 kBps) 00:25:46.284 00:25:46.284 ************************************ 00:25:46.284 END TEST dd_flags_misc_forced_aio 00:25:46.284 ************************************ 00:25:46.284 02:49:11 -- dd/posix.sh@93 -- # [[ bzujufola8cdv2eeveustg0lkuc14bg5g24prpxbs9go7lcijf9ngkqwnigns47eqqxkzvcsp2umow4us5a3949x8gua2anrymbv91b6uymckb5q1qgnrb1wetatw4kqqkxy842mb2eejyvu2sn6morle4oq9idrwwrnc41xvge2dfz6uioms2kr7bi8tukz95m30fzxyyclxmqn41ehc6rm7hy13qyipt7su4vizwnk64ck12mkvewxlr7byzmbf2dtpntdbtx34pib9k1rn9mueejncyc1jfn89phqns53nrpri3o2s8zk9x2hd7kp66leuz2envxlymtifsw426mjth0byhzd781wtnmz4d0yj4rs4or94p5gsz17lc6wwwdhp6jcehkrmda7mn6a4kyf0jmde114xldvg2hm7cidpsneu9cn6fzvehs807wlegwcxrvaonp34op3tsx2653y29mpxi8bb3hm1whpqnzg2b53fzmq918lgkkpp7a8 == \b\z\u\j\u\f\o\l\a\8\c\d\v\2\e\e\v\e\u\s\t\g\0\l\k\u\c\1\4\b\g\5\g\2\4\p\r\p\x\b\s\9\g\o\7\l\c\i\j\f\9\n\g\k\q\w\n\i\g\n\s\4\7\e\q\q\x\k\z\v\c\s\p\2\u\m\o\w\4\u\s\5\a\3\9\4\9\x\8\g\u\a\2\a\n\r\y\m\b\v\9\1\b\6\u\y\m\c\k\b\5\q\1\q\g\n\r\b\1\w\e\t\a\t\w\4\k\q\q\k\x\y\8\4\2\m\b\2\e\e\j\y\v\u\2\s\n\6\m\o\r\l\e\4\o\q\9\i\d\r\w\w\r\n\c\4\1\x\v\g\e\2\d\f\z\6\u\i\o\m\s\2\k\r\7\b\i\8\t\u\k\z\9\5\m\3\0\f\z\x\y\y\c\l\x\m\q\n\4\1\e\h\c\6\r\m\7\h\y\1\3\q\y\i\p\t\7\s\u\4\v\i\z\w\n\k\6\4\c\k\1\2\m\k\v\e\w\x\l\r\7\b\y\z\m\b\f\2\d\t\p\n\t\d\b\t\x\3\4\p\i\b\9\k\1\r\n\9\m\u\e\e\j\n\c\y\c\1\j\f\n\8\9\p\h\q\n\s\5\3\n\r\p\r\i\3\o\2\s\8\z\k\9\x\2\h\d\7\k\p\6\6\l\e\u\z\2\e\n\v\x\l\y\m\t\i\f\s\w\4\2\6\m\j\t\h\0\b\y\h\z\d\7\8\1\w\t\n\m\z\4\d\0\y\j\4\r\s\4\o\r\9\4\p\5\g\s\z\1\7\l\c\6\w\w\w\d\h\p\6\j\c\e\h\k\r\m\d\a\7\m\n\6\a\4\k\y\f\0\j\m\d\e\1\1\4\x\l\d\v\g\2\h\m\7\c\i\d\p\s\n\e\u\9\c\n\6\f\z\v\e\h\s\8\0\7\w\l\e\g\w\c\x\r\v\a\o\n\p\3\4\o\p\3\t\s\x\2\6\5\3\y\2\9\m\p\x\i\8\b\b\3\h\m\1\w\h\p\q\n\z\g\2\b\5\3\f\z\m\q\9\1\8\l\g\k\k\p\p\7\a\8 ]] 00:25:46.284 00:25:46.284 real 0m5.069s 00:25:46.284 user 0m2.512s 00:25:46.284 sys 0m1.446s 00:25:46.284 02:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.284 02:49:11 -- common/autotest_common.sh@10 -- # set +x 00:25:46.284 02:49:11 -- dd/posix.sh@1 -- # cleanup 00:25:46.284 02:49:11 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:46.284 02:49:11 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:46.284 ************************************ 00:25:46.284 END TEST spdk_dd_posix 00:25:46.284 ************************************ 00:25:46.284 00:25:46.284 real 0m23.010s 00:25:46.284 user 0m10.582s 00:25:46.284 sys 0m6.223s 00:25:46.284 02:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.284 02:49:11 -- common/autotest_common.sh@10 -- # set +x 00:25:46.284 02:49:11 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:46.284 02:49:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.284 02:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.284 02:49:11 -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.284 ************************************ 00:25:46.284 START TEST spdk_dd_malloc 00:25:46.284 ************************************ 00:25:46.284 02:49:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:46.284 * Looking for test storage... 00:25:46.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:46.284 02:49:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:46.284 02:49:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.284 02:49:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.284 02:49:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.285 02:49:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:46.285 02:49:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:46.285 02:49:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:46.285 02:49:11 -- paths/export.sh@5 -- # export PATH 00:25:46.285 02:49:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:46.543 02:49:11 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:25:46.543 02:49:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.543 02:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.543 02:49:11 -- common/autotest_common.sh@10 -- # set +x 00:25:46.543 ************************************ 00:25:46.543 START TEST dd_malloc_copy 00:25:46.543 ************************************ 00:25:46.543 02:49:11 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:25:46.543 02:49:11 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:25:46.543 02:49:11 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:25:46.543 02:49:11 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:25:46.543 02:49:11 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:25:46.543 02:49:11 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:25:46.543 02:49:11 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:25:46.543 02:49:11 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:25:46.543 02:49:11 -- dd/malloc.sh@28 -- # gen_conf 00:25:46.543 02:49:11 -- dd/common.sh@31 -- # xtrace_disable 00:25:46.543 02:49:11 -- common/autotest_common.sh@10 -- # set +x 00:25:46.543 [2024-07-11 02:49:11.435924] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:46.543 [2024-07-11 02:49:11.436882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147765 ] 00:25:46.543 { 00:25:46.543 "subsystems": [ 00:25:46.543 { 00:25:46.543 "subsystem": "bdev", 00:25:46.543 "config": [ 00:25:46.543 { 00:25:46.543 "params": { 00:25:46.543 "num_blocks": 1048576, 00:25:46.543 "block_size": 512, 00:25:46.543 "name": "malloc0" 00:25:46.543 }, 00:25:46.543 "method": "bdev_malloc_create" 00:25:46.543 }, 00:25:46.543 { 00:25:46.543 "params": { 00:25:46.543 "num_blocks": 1048576, 00:25:46.543 "block_size": 512, 00:25:46.543 "name": "malloc1" 00:25:46.543 }, 00:25:46.543 "method": "bdev_malloc_create" 00:25:46.543 }, 00:25:46.543 { 00:25:46.543 "method": "bdev_wait_for_examine" 00:25:46.543 } 00:25:46.543 ] 00:25:46.543 } 00:25:46.543 ] 00:25:46.543 } 00:25:46.543 [2024-07-11 02:49:11.582862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.800 [2024-07-11 02:49:11.648455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.602  Copying: 188/512 [MB] (188 MBps) Copying: 381/512 [MB] (192 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:25:50.602 00:25:50.602 02:49:15 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:25:50.602 02:49:15 -- dd/malloc.sh@33 -- # gen_conf 00:25:50.602 02:49:15 -- dd/common.sh@31 -- # xtrace_disable 00:25:50.603 02:49:15 -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 [2024-07-11 02:49:15.405741] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:50.603 [2024-07-11 02:49:15.406374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147839 ] 00:25:50.603 { 00:25:50.603 "subsystems": [ 00:25:50.603 { 00:25:50.603 "subsystem": "bdev", 00:25:50.603 "config": [ 00:25:50.603 { 00:25:50.603 "params": { 00:25:50.603 "num_blocks": 1048576, 00:25:50.603 "block_size": 512, 00:25:50.603 "name": "malloc0" 00:25:50.603 }, 00:25:50.603 "method": "bdev_malloc_create" 00:25:50.603 }, 00:25:50.603 { 00:25:50.603 "params": { 00:25:50.603 "num_blocks": 1048576, 00:25:50.603 "block_size": 512, 00:25:50.603 "name": "malloc1" 00:25:50.603 }, 00:25:50.603 "method": "bdev_malloc_create" 00:25:50.603 }, 00:25:50.603 { 00:25:50.603 "method": "bdev_wait_for_examine" 00:25:50.603 } 00:25:50.603 ] 00:25:50.603 } 00:25:50.603 ] 00:25:50.603 } 00:25:50.603 [2024-07-11 02:49:15.554255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.603 [2024-07-11 02:49:15.628938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.485  Copying: 185/512 [MB] (185 MBps) Copying: 373/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 190 MBps) 00:25:54.485 00:25:54.485 ************************************ 00:25:54.485 END TEST dd_malloc_copy 00:25:54.485 ************************************ 00:25:54.485 00:25:54.485 real 0m8.013s 00:25:54.485 user 0m6.920s 00:25:54.485 sys 0m0.966s 00:25:54.485 02:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.485 02:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:54.485 ************************************ 00:25:54.485 END TEST spdk_dd_malloc 00:25:54.485 ************************************ 00:25:54.485 00:25:54.485 real 0m8.136s 00:25:54.485 user 0m6.997s 00:25:54.485 sys 0m1.014s 00:25:54.485 02:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.485 02:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:54.485 02:49:19 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:25:54.485 02:49:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:54.485 02:49:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.485 02:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:54.485 ************************************ 00:25:54.485 START TEST spdk_dd_bdev_to_bdev 00:25:54.485 ************************************ 00:25:54.485 02:49:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:25:54.485 * Looking for test storage... 
00:25:54.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:54.485 02:49:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.485 02:49:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.485 02:49:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.485 02:49:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.485 02:49:19 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.485 02:49:19 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.485 02:49:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.485 02:49:19 -- paths/export.sh@5 -- # export PATH 00:25:54.485 02:49:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:25:54.485 02:49:19 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:25:54.485 02:49:19 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:25:54.744 [2024-07-11 02:49:19.614354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:54.744 [2024-07-11 02:49:19.614851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147954 ] 00:25:54.744 [2024-07-11 02:49:19.757790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.744 [2024-07-11 02:49:19.824755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.310  Copying: 256/256 [MB] (average 1267 MBps) 00:25:55.310 00:25:55.570 02:49:20 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:55.570 02:49:20 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:55.570 02:49:20 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:25:55.570 02:49:20 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:25:55.570 02:49:20 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:25:55.570 02:49:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:25:55.570 02:49:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.570 02:49:20 -- common/autotest_common.sh@10 -- # set +x 00:25:55.570 ************************************ 00:25:55.570 START TEST dd_inflate_file 00:25:55.570 ************************************ 00:25:55.570 02:49:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:25:55.570 [2024-07-11 02:49:20.468204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:55.570 [2024-07-11 02:49:20.468694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147970 ] 00:25:55.570 [2024-07-11 02:49:20.611648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.829 [2024-07-11 02:49:20.685699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.088  Copying: 64/64 [MB] (average 1280 MBps) 00:25:56.088 00:25:56.088 00:25:56.088 real 0m0.708s 00:25:56.088 user 0m0.374s 00:25:56.088 sys 0m0.202s 00:25:56.088 02:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.088 02:49:21 -- common/autotest_common.sh@10 -- # set +x 00:25:56.088 ************************************ 00:25:56.088 END TEST dd_inflate_file 00:25:56.088 ************************************ 00:25:56.088 02:49:21 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:25:56.088 02:49:21 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:25:56.088 02:49:21 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:25:56.088 02:49:21 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:25:56.088 02:49:21 -- dd/common.sh@31 -- # xtrace_disable 00:25:56.088 02:49:21 -- common/autotest_common.sh@10 -- # set +x 00:25:56.088 02:49:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:56.088 02:49:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:56.088 02:49:21 -- common/autotest_common.sh@10 -- # set +x 00:25:56.346 ************************************ 00:25:56.346 START TEST dd_copy_to_out_bdev 00:25:56.346 ************************************ 00:25:56.346 02:49:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:25:56.346 { 00:25:56.346 "subsystems": [ 00:25:56.346 { 00:25:56.346 "subsystem": "bdev", 00:25:56.346 "config": [ 00:25:56.346 { 00:25:56.346 "params": { 00:25:56.346 "block_size": 4096, 00:25:56.346 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:56.346 "name": "aio1" 00:25:56.346 }, 00:25:56.346 "method": "bdev_aio_create" 00:25:56.346 }, 00:25:56.346 { 00:25:56.346 "params": { 00:25:56.346 "trtype": "pcie", 00:25:56.346 "traddr": "0000:00:06.0", 00:25:56.346 "name": "Nvme0" 00:25:56.346 }, 00:25:56.346 "method": "bdev_nvme_attach_controller" 00:25:56.346 }, 00:25:56.346 { 00:25:56.346 "method": "bdev_wait_for_examine" 00:25:56.346 } 00:25:56.346 ] 00:25:56.346 } 00:25:56.346 ] 00:25:56.346 } 00:25:56.346 [2024-07-11 02:49:21.235723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
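Note: dd_inflate_file above only has to prove that spdk_dd can grow a plain file: it appends 64 one-MiB units of zeros to dd.dump0, and --oflag=append is what turns the copy into an "inflate" instead of an overwrite. The traced command, written out directly (same paths as the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64

The wc -c check above expects exactly 67108891 bytes: the 27-byte magic line already in dd.dump0 ('This Is Our Magic, find it' plus a newline) plus the 67108864 bytes (64 MiB) just appended.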
00:25:56.346 [2024-07-11 02:49:21.236672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148017 ] 00:25:56.346 [2024-07-11 02:49:21.382443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.604 [2024-07-11 02:49:21.462600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.542  Copying: 49/64 [MB] (49 MBps) Copying: 64/64 [MB] (average 47 MBps) 00:25:58.542 00:25:58.542 ************************************ 00:25:58.542 END TEST dd_copy_to_out_bdev 00:25:58.542 ************************************ 00:25:58.542 00:25:58.542 real 0m2.173s 00:25:58.542 user 0m1.815s 00:25:58.542 sys 0m0.236s 00:25:58.542 02:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.542 02:49:23 -- common/autotest_common.sh@10 -- # set +x 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:25:58.542 02:49:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:58.542 02:49:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.542 02:49:23 -- common/autotest_common.sh@10 -- # set +x 00:25:58.542 ************************************ 00:25:58.542 START TEST dd_offset_magic 00:25:58.542 ************************************ 00:25:58.542 02:49:23 -- common/autotest_common.sh@1104 -- # offset_magic 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:25:58.542 02:49:23 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:25:58.542 02:49:23 -- dd/common.sh@31 -- # xtrace_disable 00:25:58.542 02:49:23 -- common/autotest_common.sh@10 -- # set +x 00:25:58.542 [2024-07-11 02:49:23.465216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
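Note: dd_copy_to_out_bdev, summarized above (2.17 s, 47 MBps average), pushes the inflated dd.dump0 into the Nvme0n1 bdev. Every run in this suite shares the same bdev config: an AIO bdev over test/dd/aio1 plus the NVMe controller at 0000:00:06.0. As a standalone sketch (again using process substitution where the harness uses gen_conf and /dev/fd/62):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json <(echo '{
    "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_aio_create",
        "params": { "name": "aio1", "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", "block_size": 4096 } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
      { "method": "bdev_wait_for_examine" } ] } ] }')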
00:25:58.542 [2024-07-11 02:49:23.465702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148086 ] 00:25:58.542 { 00:25:58.542 "subsystems": [ 00:25:58.542 { 00:25:58.542 "subsystem": "bdev", 00:25:58.542 "config": [ 00:25:58.542 { 00:25:58.542 "params": { 00:25:58.542 "block_size": 4096, 00:25:58.542 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:58.542 "name": "aio1" 00:25:58.542 }, 00:25:58.542 "method": "bdev_aio_create" 00:25:58.542 }, 00:25:58.542 { 00:25:58.542 "params": { 00:25:58.542 "trtype": "pcie", 00:25:58.542 "traddr": "0000:00:06.0", 00:25:58.542 "name": "Nvme0" 00:25:58.542 }, 00:25:58.542 "method": "bdev_nvme_attach_controller" 00:25:58.542 }, 00:25:58.542 { 00:25:58.542 "method": "bdev_wait_for_examine" 00:25:58.542 } 00:25:58.542 ] 00:25:58.542 } 00:25:58.542 ] 00:25:58.542 } 00:25:58.542 [2024-07-11 02:49:23.616244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.800 [2024-07-11 02:49:23.690226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.624  Copying: 65/65 [MB] (average 238 MBps) 00:25:59.624 00:25:59.624 02:49:24 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:25:59.624 02:49:24 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:25:59.624 02:49:24 -- dd/common.sh@31 -- # xtrace_disable 00:25:59.624 02:49:24 -- common/autotest_common.sh@10 -- # set +x 00:25:59.624 [2024-07-11 02:49:24.556443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:59.624 [2024-07-11 02:49:24.557029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148112 ] 00:25:59.624 { 00:25:59.624 "subsystems": [ 00:25:59.624 { 00:25:59.624 "subsystem": "bdev", 00:25:59.624 "config": [ 00:25:59.624 { 00:25:59.624 "params": { 00:25:59.624 "block_size": 4096, 00:25:59.624 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:59.624 "name": "aio1" 00:25:59.624 }, 00:25:59.624 "method": "bdev_aio_create" 00:25:59.624 }, 00:25:59.624 { 00:25:59.624 "params": { 00:25:59.624 "trtype": "pcie", 00:25:59.624 "traddr": "0000:00:06.0", 00:25:59.624 "name": "Nvme0" 00:25:59.624 }, 00:25:59.624 "method": "bdev_nvme_attach_controller" 00:25:59.624 }, 00:25:59.624 { 00:25:59.624 "method": "bdev_wait_for_examine" 00:25:59.624 } 00:25:59.624 ] 00:25:59.624 } 00:25:59.624 ] 00:25:59.624 } 00:25:59.624 [2024-07-11 02:49:24.709017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.882 [2024-07-11 02:49:24.793484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.397  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:00.397 00:26:00.397 02:49:25 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:00.397 02:49:25 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:00.397 02:49:25 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:00.397 02:49:25 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:00.397 02:49:25 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:00.397 02:49:25 -- dd/common.sh@31 -- # xtrace_disable 00:26:00.398 02:49:25 -- common/autotest_common.sh@10 -- # set +x 00:26:00.398 [2024-07-11 02:49:25.404658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
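Note: each dd_offset_magic iteration is a write/read round trip. The write half copied 65 one-MiB units from Nvme0n1 into aio1 at --seek=16, so the magic line at the very start of Nvme0n1 landed 16 MiB into aio1; the read half above pulled one unit back out with --skip=16 into dd.dump1, and read -rn26 compared the first 26 bytes against the expected string. Roughly (a sketch: the redirect into read is how the script plumbs the comparison, it is not shown verbatim in the trace):

  read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  [[ $magic_check == 'This Is Our Magic, find it' ]]

The pass now starting repeats the same round trip at offset 64.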
00:26:00.398 [2024-07-11 02:49:25.405157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148129 ] 00:26:00.398 { 00:26:00.398 "subsystems": [ 00:26:00.398 { 00:26:00.398 "subsystem": "bdev", 00:26:00.398 "config": [ 00:26:00.398 { 00:26:00.398 "params": { 00:26:00.398 "block_size": 4096, 00:26:00.398 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:00.398 "name": "aio1" 00:26:00.398 }, 00:26:00.398 "method": "bdev_aio_create" 00:26:00.398 }, 00:26:00.398 { 00:26:00.398 "params": { 00:26:00.398 "trtype": "pcie", 00:26:00.398 "traddr": "0000:00:06.0", 00:26:00.398 "name": "Nvme0" 00:26:00.398 }, 00:26:00.398 "method": "bdev_nvme_attach_controller" 00:26:00.398 }, 00:26:00.398 { 00:26:00.398 "method": "bdev_wait_for_examine" 00:26:00.398 } 00:26:00.398 ] 00:26:00.398 } 00:26:00.398 ] 00:26:00.398 } 00:26:00.656 [2024-07-11 02:49:25.552891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.656 [2024-07-11 02:49:25.628997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.479  Copying: 65/65 [MB] (average 286 MBps) 00:26:01.479 00:26:01.479 02:49:26 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:01.479 02:49:26 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:01.479 02:49:26 -- dd/common.sh@31 -- # xtrace_disable 00:26:01.479 02:49:26 -- common/autotest_common.sh@10 -- # set +x 00:26:01.479 [2024-07-11 02:49:26.444671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:01.479 [2024-07-11 02:49:26.445211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148151 ] 00:26:01.479 { 00:26:01.479 "subsystems": [ 00:26:01.479 { 00:26:01.479 "subsystem": "bdev", 00:26:01.479 "config": [ 00:26:01.479 { 00:26:01.479 "params": { 00:26:01.479 "block_size": 4096, 00:26:01.479 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:01.479 "name": "aio1" 00:26:01.479 }, 00:26:01.479 "method": "bdev_aio_create" 00:26:01.479 }, 00:26:01.479 { 00:26:01.479 "params": { 00:26:01.479 "trtype": "pcie", 00:26:01.479 "traddr": "0000:00:06.0", 00:26:01.479 "name": "Nvme0" 00:26:01.479 }, 00:26:01.479 "method": "bdev_nvme_attach_controller" 00:26:01.479 }, 00:26:01.479 { 00:26:01.479 "method": "bdev_wait_for_examine" 00:26:01.479 } 00:26:01.479 ] 00:26:01.479 } 00:26:01.479 ] 00:26:01.479 } 00:26:01.737 [2024-07-11 02:49:26.606026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.737 [2024-07-11 02:49:26.683614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.254  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:02.254 00:26:02.254 02:49:27 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:02.254 02:49:27 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:02.254 00:26:02.254 real 0m3.818s 00:26:02.254 user 0m1.985s 00:26:02.254 ************************************ 00:26:02.254 END TEST dd_offset_magic 00:26:02.254 ************************************ 00:26:02.254 sys 0m1.062s 00:26:02.254 02:49:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.254 02:49:27 -- common/autotest_common.sh@10 -- # set +x 00:26:02.254 02:49:27 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:02.254 02:49:27 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:02.254 02:49:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:02.254 02:49:27 -- dd/common.sh@11 -- # local nvme_ref= 00:26:02.255 02:49:27 -- dd/common.sh@12 -- # local size=4194330 00:26:02.255 02:49:27 -- dd/common.sh@14 -- # local bs=1048576 00:26:02.255 02:49:27 -- dd/common.sh@15 -- # local count=5 00:26:02.255 02:49:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:02.255 02:49:27 -- dd/common.sh@18 -- # gen_conf 00:26:02.255 02:49:27 -- dd/common.sh@31 -- # xtrace_disable 00:26:02.255 02:49:27 -- common/autotest_common.sh@10 -- # set +x 00:26:02.255 [2024-07-11 02:49:27.329772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
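Note: with the magic checks done, cleanup's clear_nvme rewrites the start of each bdev with zeros. A requested size of 4194330 bytes at bs=1048576 rounds up to count=5, so five 1 MiB units are written: first to Nvme0n1 in the run starting here, then to aio1 in the run after it. Sketch (the $conf variable is a stand-in for the shared aio1+Nvme0 JSON shown earlier, not a name from the trace):

  # count = ceil(4194330 / 1048576) = 5 one-MiB units of zeros
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
      --ob=Nvme0n1 --count=5 --json "$conf"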
00:26:02.255 [2024-07-11 02:49:27.330315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148188 ] 00:26:02.255 { 00:26:02.255 "subsystems": [ 00:26:02.255 { 00:26:02.255 "subsystem": "bdev", 00:26:02.255 "config": [ 00:26:02.255 { 00:26:02.255 "params": { 00:26:02.255 "block_size": 4096, 00:26:02.255 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:02.255 "name": "aio1" 00:26:02.255 }, 00:26:02.255 "method": "bdev_aio_create" 00:26:02.255 }, 00:26:02.255 { 00:26:02.255 "params": { 00:26:02.255 "trtype": "pcie", 00:26:02.255 "traddr": "0000:00:06.0", 00:26:02.255 "name": "Nvme0" 00:26:02.255 }, 00:26:02.255 "method": "bdev_nvme_attach_controller" 00:26:02.255 }, 00:26:02.255 { 00:26:02.255 "method": "bdev_wait_for_examine" 00:26:02.255 } 00:26:02.255 ] 00:26:02.255 } 00:26:02.255 ] 00:26:02.255 } 00:26:02.513 [2024-07-11 02:49:27.480061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.513 [2024-07-11 02:49:27.555801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.031  Copying: 5120/5120 [kB] (average 1000 MBps) 00:26:03.031 00:26:03.031 02:49:28 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:26:03.031 02:49:28 -- dd/common.sh@10 -- # local bdev=aio1 00:26:03.031 02:49:28 -- dd/common.sh@11 -- # local nvme_ref= 00:26:03.031 02:49:28 -- dd/common.sh@12 -- # local size=4194330 00:26:03.031 02:49:28 -- dd/common.sh@14 -- # local bs=1048576 00:26:03.031 02:49:28 -- dd/common.sh@15 -- # local count=5 00:26:03.031 02:49:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:26:03.031 02:49:28 -- dd/common.sh@18 -- # gen_conf 00:26:03.031 02:49:28 -- dd/common.sh@31 -- # xtrace_disable 00:26:03.031 02:49:28 -- common/autotest_common.sh@10 -- # set +x 00:26:03.289 [2024-07-11 02:49:28.148742] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:03.289 [2024-07-11 02:49:28.149158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148209 ] 00:26:03.289 { 00:26:03.289 "subsystems": [ 00:26:03.289 { 00:26:03.289 "subsystem": "bdev", 00:26:03.289 "config": [ 00:26:03.289 { 00:26:03.289 "params": { 00:26:03.289 "block_size": 4096, 00:26:03.289 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:03.289 "name": "aio1" 00:26:03.289 }, 00:26:03.289 "method": "bdev_aio_create" 00:26:03.289 }, 00:26:03.289 { 00:26:03.289 "params": { 00:26:03.289 "trtype": "pcie", 00:26:03.289 "traddr": "0000:00:06.0", 00:26:03.289 "name": "Nvme0" 00:26:03.289 }, 00:26:03.289 "method": "bdev_nvme_attach_controller" 00:26:03.289 }, 00:26:03.289 { 00:26:03.289 "method": "bdev_wait_for_examine" 00:26:03.289 } 00:26:03.289 ] 00:26:03.289 } 00:26:03.289 ] 00:26:03.289 } 00:26:03.289 [2024-07-11 02:49:28.297042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.289 [2024-07-11 02:49:28.377457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.115  Copying: 5120/5120 [kB] (average 312 MBps) 00:26:04.115 00:26:04.115 02:49:28 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:04.115 00:26:04.115 real 0m9.512s 00:26:04.115 user 0m5.643s 00:26:04.115 sys 0m2.430s 00:26:04.115 02:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.115 ************************************ 00:26:04.115 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.115 END TEST spdk_dd_bdev_to_bdev 00:26:04.115 ************************************ 00:26:04.115 02:49:29 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:26:04.115 02:49:29 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:04.115 02:49:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:04.116 02:49:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:04.116 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.116 ************************************ 00:26:04.116 START TEST spdk_dd_sparse 00:26:04.116 ************************************ 00:26:04.116 02:49:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:04.116 * Looking for test storage... 
00:26:04.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:04.116 02:49:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:04.116 02:49:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.116 02:49:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.116 02:49:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.116 02:49:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.116 02:49:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.116 02:49:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.116 02:49:29 -- paths/export.sh@5 -- # export PATH 00:26:04.116 02:49:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.116 02:49:29 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:26:04.116 02:49:29 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:26:04.116 02:49:29 -- dd/sparse.sh@110 -- # file1=file_zero1 00:26:04.116 02:49:29 -- dd/sparse.sh@111 -- # file2=file_zero2 00:26:04.116 02:49:29 -- dd/sparse.sh@112 -- # file3=file_zero3 00:26:04.116 02:49:29 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:26:04.116 02:49:29 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:26:04.116 02:49:29 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:26:04.116 02:49:29 -- dd/sparse.sh@118 -- # prepare 00:26:04.116 02:49:29 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:26:04.116 02:49:29 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:26:04.116 1+0 records in 00:26:04.116 1+0 records 
out 00:26:04.116 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00880257 s, 476 MB/s 00:26:04.116 02:49:29 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:26:04.116 1+0 records in 00:26:04.116 1+0 records out 00:26:04.116 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0076492 s, 548 MB/s 00:26:04.116 02:49:29 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:26:04.116 1+0 records in 00:26:04.116 1+0 records out 00:26:04.116 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00896354 s, 468 MB/s 00:26:04.116 02:49:29 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:26:04.116 02:49:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:04.116 02:49:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:04.116 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.116 ************************************ 00:26:04.116 START TEST dd_sparse_file_to_file 00:26:04.116 ************************************ 00:26:04.116 02:49:29 -- common/autotest_common.sh@1104 -- # file_to_file 00:26:04.116 02:49:29 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:26:04.116 02:49:29 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:26:04.116 02:49:29 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:04.116 02:49:29 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:26:04.116 02:49:29 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:26:04.116 02:49:29 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:26:04.116 02:49:29 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:26:04.116 02:49:29 -- dd/sparse.sh@41 -- # gen_conf 00:26:04.116 02:49:29 -- dd/common.sh@31 -- # xtrace_disable 00:26:04.116 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.375 [2024-07-11 02:49:29.238833] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
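Note: prepare built the sparse fixtures dd'ed above: dd_sparse_aio_disk is truncated to 104857600 bytes (100 MiB) to back the lvstore, and file_zero1 gets three 4 MiB extents at seek 0, 4 and 8 (byte offsets 0, 16 MiB and 32 MiB), leaving holes in between:

  truncate dd_sparse_aio_disk --size 104857600
  dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at 0-4 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at 16-20 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at 32-36 MiB

That gives file_zero1 an apparent size of 37748736 bytes (36 MiB) with only 12 MiB actually allocated, exactly the numbers the stat assertions in the tests below check; 12582912 (12 MiB) is also the --bs the sparse copies use.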
00:26:04.375 [2024-07-11 02:49:29.239366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148279 ] 00:26:04.375 { 00:26:04.375 "subsystems": [ 00:26:04.375 { 00:26:04.375 "subsystem": "bdev", 00:26:04.375 "config": [ 00:26:04.375 { 00:26:04.375 "params": { 00:26:04.375 "block_size": 4096, 00:26:04.375 "filename": "dd_sparse_aio_disk", 00:26:04.375 "name": "dd_aio" 00:26:04.375 }, 00:26:04.375 "method": "bdev_aio_create" 00:26:04.375 }, 00:26:04.375 { 00:26:04.375 "params": { 00:26:04.375 "lvs_name": "dd_lvstore", 00:26:04.375 "bdev_name": "dd_aio" 00:26:04.375 }, 00:26:04.375 "method": "bdev_lvol_create_lvstore" 00:26:04.375 }, 00:26:04.375 { 00:26:04.375 "method": "bdev_wait_for_examine" 00:26:04.375 } 00:26:04.375 ] 00:26:04.375 } 00:26:04.375 ] 00:26:04.376 } 00:26:04.376 [2024-07-11 02:49:29.383125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.376 [2024-07-11 02:49:29.449328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.893  Copying: 12/36 [MB] (average 1090 MBps) 00:26:04.893 00:26:04.893 02:49:29 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:26:04.893 02:49:29 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:26:04.893 02:49:29 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:26:04.893 02:49:29 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:26:04.893 02:49:29 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:04.893 02:49:29 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:26:04.893 02:49:29 -- dd/sparse.sh@52 -- # stat1_b=24576 00:26:04.893 02:49:29 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:26:04.893 ************************************ 00:26:04.893 END TEST dd_sparse_file_to_file 00:26:04.893 ************************************ 00:26:04.893 02:49:29 -- dd/sparse.sh@53 -- # stat2_b=24576 00:26:04.893 02:49:29 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:04.893 00:26:04.893 real 0m0.773s 00:26:04.893 user 0m0.430s 00:26:04.893 sys 0m0.219s 00:26:04.893 02:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.893 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:05.153 02:49:29 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:26:05.153 02:49:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.153 02:49:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.153 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:26:05.153 ************************************ 00:26:05.153 START TEST dd_sparse_file_to_bdev 00:26:05.153 ************************************ 00:26:05.153 02:49:30 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:26:05.153 02:49:30 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:05.153 02:49:30 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:26:05.153 02:49:30 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:26:05.153 02:49:30 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:26:05.153 02:49:30 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:26:05.153 02:49:30 -- dd/sparse.sh@73 -- # gen_conf 00:26:05.153 02:49:30 -- 
dd/common.sh@31 -- # xtrace_disable 00:26:05.153 02:49:30 -- common/autotest_common.sh@10 -- # set +x 00:26:05.153 [2024-07-11 02:49:30.059974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:05.153 [2024-07-11 02:49:30.060369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148332 ] 00:26:05.153 { 00:26:05.153 "subsystems": [ 00:26:05.153 { 00:26:05.153 "subsystem": "bdev", 00:26:05.153 "config": [ 00:26:05.153 { 00:26:05.153 "params": { 00:26:05.153 "block_size": 4096, 00:26:05.153 "filename": "dd_sparse_aio_disk", 00:26:05.153 "name": "dd_aio" 00:26:05.153 }, 00:26:05.153 "method": "bdev_aio_create" 00:26:05.153 }, 00:26:05.153 { 00:26:05.153 "params": { 00:26:05.153 "lvs_name": "dd_lvstore", 00:26:05.153 "thin_provision": true, 00:26:05.153 "lvol_name": "dd_lvol", 00:26:05.153 "size": 37748736 00:26:05.153 }, 00:26:05.153 "method": "bdev_lvol_create" 00:26:05.153 }, 00:26:05.153 { 00:26:05.153 "method": "bdev_wait_for_examine" 00:26:05.153 } 00:26:05.153 ] 00:26:05.153 } 00:26:05.153 ] 00:26:05.153 } 00:26:05.153 [2024-07-11 02:49:30.206847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.412 [2024-07-11 02:49:30.269637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.412 [2024-07-11 02:49:30.364371] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:26:05.412  Copying: 12/36 [MB] (average 400 MBps)[2024-07-11 02:49:30.413559] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:26:05.670 00:26:05.670 00:26:05.929 00:26:05.929 real 0m0.760s 00:26:05.929 user 0m0.437s 00:26:05.929 sys 0m0.216s 00:26:05.929 02:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.929 02:49:30 -- common/autotest_common.sh@10 -- # set +x 00:26:05.929 ************************************ 00:26:05.929 END TEST dd_sparse_file_to_bdev 00:26:05.929 ************************************ 00:26:05.929 02:49:30 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:26:05.929 02:49:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.929 02:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.929 02:49:30 -- common/autotest_common.sh@10 -- # set +x 00:26:05.929 ************************************ 00:26:05.929 START TEST dd_sparse_bdev_to_file 00:26:05.929 ************************************ 00:26:05.929 02:49:30 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:26:05.929 02:49:30 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:26:05.929 02:49:30 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:26:05.929 02:49:30 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:05.929 02:49:30 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:26:05.929 02:49:30 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:26:05.929 02:49:30 -- dd/sparse.sh@91 -- # gen_conf 00:26:05.929 02:49:30 -- dd/common.sh@31 -- # xtrace_disable 00:26:05.929 02:49:30 -- common/autotest_common.sh@10 -- # set +x 
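Note: the pass/fail criteria in dd_sparse_file_to_file above and dd_sparse_bdev_to_file below are plain stat comparisons: %s (apparent size) must survive the round trip as 37748736, and %b (allocated 512-byte blocks) as 24576, proving that --sparse preserved both the data extents and the holes. The shape of the check:

  [[ $(stat --printf=%s file_zero2) == 37748736 ]]   # apparent size: 36 MiB
  [[ $(stat --printf=%b file_zero2) == 24576 ]]      # 24576 * 512 = 12582912 bytes really allocated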
00:26:05.929 [2024-07-11 02:49:30.866531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:05.929 [2024-07-11 02:49:30.867532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148369 ] 00:26:05.929 { 00:26:05.929 "subsystems": [ 00:26:05.929 { 00:26:05.929 "subsystem": "bdev", 00:26:05.929 "config": [ 00:26:05.929 { 00:26:05.929 "params": { 00:26:05.929 "block_size": 4096, 00:26:05.929 "filename": "dd_sparse_aio_disk", 00:26:05.929 "name": "dd_aio" 00:26:05.929 }, 00:26:05.929 "method": "bdev_aio_create" 00:26:05.929 }, 00:26:05.929 { 00:26:05.929 "method": "bdev_wait_for_examine" 00:26:05.929 } 00:26:05.929 ] 00:26:05.929 } 00:26:05.929 ] 00:26:05.929 } 00:26:05.929 [2024-07-11 02:49:31.014528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.187 [2024-07-11 02:49:31.077535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.446  Copying: 12/36 [MB] (average 1000 MBps) 00:26:06.446 00:26:06.705 02:49:31 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:26:06.705 02:49:31 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:26:06.705 02:49:31 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:26:06.705 02:49:31 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:26:06.705 02:49:31 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:06.705 02:49:31 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:26:06.705 02:49:31 -- dd/sparse.sh@102 -- # stat2_b=24576 00:26:06.705 02:49:31 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:26:06.705 ************************************ 00:26:06.705 END TEST dd_sparse_bdev_to_file 00:26:06.705 ************************************ 00:26:06.705 02:49:31 -- dd/sparse.sh@103 -- # stat3_b=24576 00:26:06.705 02:49:31 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:06.705 00:26:06.705 real 0m0.749s 00:26:06.705 user 0m0.419s 00:26:06.705 sys 0m0.219s 00:26:06.705 02:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.705 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.705 02:49:31 -- dd/sparse.sh@1 -- # cleanup 00:26:06.705 02:49:31 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:26:06.705 02:49:31 -- dd/sparse.sh@12 -- # rm file_zero1 00:26:06.705 02:49:31 -- dd/sparse.sh@13 -- # rm file_zero2 00:26:06.705 02:49:31 -- dd/sparse.sh@14 -- # rm file_zero3 00:26:06.705 ************************************ 00:26:06.705 END TEST spdk_dd_sparse 00:26:06.705 ************************************ 00:26:06.705 00:26:06.705 real 0m2.572s 00:26:06.705 user 0m1.421s 00:26:06.705 sys 0m0.800s 00:26:06.705 02:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.705 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.705 02:49:31 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:26:06.705 02:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.705 02:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.705 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.705 ************************************ 00:26:06.705 START TEST spdk_dd_negative 00:26:06.705 ************************************ 00:26:06.705 02:49:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:26:06.705 * Looking for test storage... 
00:26:06.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:06.705 02:49:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.705 02:49:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.705 02:49:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.705 02:49:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.705 02:49:31 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.706 02:49:31 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.706 02:49:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.706 02:49:31 -- paths/export.sh@5 -- # export PATH 00:26:06.706 02:49:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.706 02:49:31 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:06.706 02:49:31 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:06.706 02:49:31 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:06.706 02:49:31 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:06.706 02:49:31 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:26:06.706 02:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.706 02:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.706 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.706 ************************************ 00:26:06.706 
START TEST dd_invalid_arguments 00:26:06.706 ************************************ 00:26:06.706 02:49:31 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:26:06.706 02:49:31 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:06.706 02:49:31 -- common/autotest_common.sh@640 -- # local es=0 00:26:06.706 02:49:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:06.706 02:49:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.706 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.706 02:49:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.706 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.706 02:49:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.706 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.706 02:49:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.706 02:49:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:06.706 02:49:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:06.966 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:26:06.966 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:26:06.966 options: 00:26:06.966 -c, --config JSON config file (default none) 00:26:06.966 --json JSON config file (default none) 00:26:06.966 --json-ignore-init-errors 00:26:06.966 don't exit on invalid config entry 00:26:06.966 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:26:06.966 -g, --single-file-segments 00:26:06.966 force creating just one hugetlbfs file 00:26:06.966 -h, --help show this usage 00:26:06.966 -i, --shm-id shared memory ID (optional) 00:26:06.966 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:26:06.966 --lcores lcore to CPU mapping list. The list is in the format: 00:26:06.966 [<,lcores[@CPUs]>...] 00:26:06.966 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:26:06.966 Within the group, '-' is used for range separator, 00:26:06.966 ',' is used for single number separator. 00:26:06.966 '( )' can be omitted for single element group, 00:26:06.966 '@' can be omitted if cpus and lcores have the same value 00:26:06.966 -n, --mem-channels channel number of memory channels used for DPDK 00:26:06.966 -p, --main-core main (primary) core for DPDK 00:26:06.966 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:26:06.966 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:26:06.966 --disable-cpumask-locks Disable CPU core lock files. 
00:26:06.966 --silence-noticelog disable notice level logging to stderr 00:26:06.966 --msg-mempool-size global message memory pool size in count (default: 262143) 00:26:06.966 -u, --no-pci disable PCI access 00:26:06.966 --wait-for-rpc wait for RPCs to initialize subsystems 00:26:06.966 --max-delay maximum reactor delay (in microseconds) 00:26:06.966 -B, --pci-blocked pci addr to block (can be used more than once) 00:26:06.966 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:26:06.966 -R, --huge-unlink unlink huge files after initialization 00:26:06.966 -v, --version print SPDK version 00:26:06.966 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:26:06.966 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:26:06.966 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:26:06.966 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:26:06.966 Tracepoints vary in size and can use more than one trace entry. 00:26:06.966 --rpcs-allowed comma-separated list of permitted RPCS 00:26:06.966 --env-context Opaque context for use of the env implementation 00:26:06.966 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:26:06.966 --no-huge run without using hugepages 00:26:06.966 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:26:06.966 -e, --tpoint-group <group-name>[:<tpoint_mask>] 00:26:06.966 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:26:06.966 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:26:06.966 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:26:06.966 [2024-07-11 02:49:31.822019] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:26:06.966 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:26:06.966 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:26:06.966 [--------- DD Options ---------] 00:26:06.966 --if Input file. Must specify either --if or --ib. 00:26:06.966 --ib Input bdev. Must specify either --if or --ib. 00:26:06.966 --of Output file. Must specify either --of or --ob. 00:26:06.966 --ob Output bdev. Must specify either --of or --ob. 00:26:06.966 --iflag Input file flags. 00:26:06.966 --oflag Output file flags. 00:26:06.966 --bs I/O unit size (default: 4096) 00:26:06.966 --qd Queue depth (default: 2) 00:26:06.966 --count I/O unit count. The number of I/O units to copy. (default: all) 00:26:06.966 --skip Skip this many I/O units at start of input. 
(default: 0) 00:26:06.966 --seek Skip this many I/O units at start of output. (default: 0) 00:26:06.966 --aio Force usage of AIO. (by default io_uring is used if available) 00:26:06.966 --sparse Enable hole skipping in input target 00:26:06.966 Available iflag and oflag values: 00:26:06.966 append - append mode 00:26:06.966 direct - use direct I/O for data 00:26:06.966 directory - fail unless a directory 00:26:06.966 dsync - use synchronized I/O for data 00:26:06.966 noatime - do not update access time 00:26:06.966 noctty - do not assign controlling terminal from file 00:26:06.966 nofollow - do not follow symlinks 00:26:06.966 nonblock - use non-blocking I/O 00:26:06.966 sync - use synchronized I/O for data and metadata 00:26:06.966 02:49:31 -- common/autotest_common.sh@643 -- # es=2 00:26:06.966 02:49:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:06.966 02:49:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:06.966 02:49:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:06.966 00:26:06.966 real 0m0.093s 00:26:06.966 user 0m0.047s 00:26:06.966 sys 0m0.044s 00:26:06.966 02:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.966 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.966 ************************************ 00:26:06.966 END TEST dd_invalid_arguments 00:26:06.966 ************************************ 00:26:06.966 02:49:31 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:26:06.966 02:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.966 02:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.966 02:49:31 -- common/autotest_common.sh@10 -- # set +x 00:26:06.966 ************************************ 00:26:06.966 START TEST dd_double_input 00:26:06.966 ************************************ 00:26:06.966 02:49:31 -- common/autotest_common.sh@1104 -- # double_input 00:26:06.966 02:49:31 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:06.966 02:49:31 -- common/autotest_common.sh@640 -- # local es=0 00:26:06.966 02:49:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:06.966 02:49:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.966 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.966 02:49:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.966 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.966 02:49:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.966 02:49:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:06.966 02:49:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.966 02:49:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:06.966 02:49:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:06.966 [2024-07-11 02:49:31.976251] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
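Note: every spdk_dd_negative case runs through the harness's NOT wrapper: the spdk_dd invocation is required to fail, and the captured exit status es is then classified, es=2 above for the unrecognized '--ii=' option and es=22 (EINVAL) for the rejected --if/--ib and --of/--ob combinations here and below. A rough sketch of the pattern (NOT and the es handling come from autotest_common.sh; the body shown is paraphrased, not verbatim):

  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
  # inside NOT, roughly: run "$@" and capture es=$?; es > 128 means the command
  # died on a signal and is re-raised; otherwise the assertion is simply es != 0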
00:26:06.966 ************************************ 00:26:06.966 END TEST dd_double_input 00:26:06.966 ************************************ 00:26:06.966 02:49:32 -- common/autotest_common.sh@643 -- # es=22 00:26:06.967 02:49:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:06.967 02:49:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:06.967 02:49:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:06.967 00:26:06.967 real 0m0.089s 00:26:06.967 user 0m0.044s 00:26:06.967 sys 0m0.044s 00:26:06.967 02:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.967 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:06.967 02:49:32 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:26:06.967 02:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.967 02:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.967 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.226 ************************************ 00:26:07.226 START TEST dd_double_output 00:26:07.226 ************************************ 00:26:07.226 02:49:32 -- common/autotest_common.sh@1104 -- # double_output 00:26:07.226 02:49:32 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:07.226 02:49:32 -- common/autotest_common.sh@640 -- # local es=0 00:26:07.226 02:49:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:07.226 02:49:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:07.226 02:49:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:07.226 [2024-07-11 02:49:32.113528] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
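The xtrace above comes from autotest_common.sh's NOT/valid_exec_arg wrappers; a minimal sketch of the same negative-test pattern (simplified, not the actual helper) is:
NOT() {                                       # invert exit status: pass only when the wrapped command fails
    if "$@"; then return 1; fi
    return 0
}
NOT spdk_dd --if=in0 --of=out0 --ob=bdev0     # passes, because spdk_dd rejects --of together with --ob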
00:26:07.226 ************************************ 00:26:07.226 END TEST dd_double_output 00:26:07.226 ************************************ 00:26:07.226 02:49:32 -- common/autotest_common.sh@643 -- # es=22 00:26:07.226 02:49:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:07.226 02:49:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:07.226 02:49:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:07.226 00:26:07.226 real 0m0.086s 00:26:07.226 user 0m0.044s 00:26:07.226 sys 0m0.042s 00:26:07.226 02:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.226 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.226 02:49:32 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:26:07.226 02:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.226 02:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.226 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.226 ************************************ 00:26:07.226 START TEST dd_no_input 00:26:07.226 ************************************ 00:26:07.226 02:49:32 -- common/autotest_common.sh@1104 -- # no_input 00:26:07.226 02:49:32 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:07.226 02:49:32 -- common/autotest_common.sh@640 -- # local es=0 00:26:07.226 02:49:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:07.226 02:49:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.226 02:49:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:07.226 02:49:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:07.226 [2024-07-11 02:49:32.236472] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:26:07.226 02:49:32 -- common/autotest_common.sh@643 -- # es=22 00:26:07.226 02:49:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:07.226 02:49:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:07.226 02:49:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:07.226 00:26:07.226 real 0m0.080s 00:26:07.226 user 0m0.042s 00:26:07.226 sys 0m0.037s 00:26:07.226 02:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.226 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.226 ************************************ 00:26:07.226 END TEST dd_no_input 00:26:07.226 ************************************ 00:26:07.226 02:49:32 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:26:07.226 02:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.226 02:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.226 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.485 ************************************ 
00:26:07.485 START TEST dd_no_output 00:26:07.485 ************************************ 00:26:07.485 02:49:32 -- common/autotest_common.sh@1104 -- # no_output 00:26:07.485 02:49:32 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:07.485 02:49:32 -- common/autotest_common.sh@640 -- # local es=0 00:26:07.485 02:49:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:07.486 02:49:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:07.486 02:49:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:07.486 [2024-07-11 02:49:32.373418] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:26:07.486 02:49:32 -- common/autotest_common.sh@643 -- # es=22 00:26:07.486 02:49:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:07.486 02:49:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:07.486 02:49:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:07.486 00:26:07.486 real 0m0.089s 00:26:07.486 user 0m0.041s 00:26:07.486 sys 0m0.047s 00:26:07.486 02:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.486 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.486 ************************************ 00:26:07.486 END TEST dd_no_output 00:26:07.486 ************************************ 00:26:07.486 02:49:32 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:26:07.486 02:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.486 02:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.486 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.486 ************************************ 00:26:07.486 START TEST dd_wrong_blocksize 00:26:07.486 ************************************ 00:26:07.486 02:49:32 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:26:07.486 02:49:32 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:07.486 02:49:32 -- common/autotest_common.sh@640 -- # local es=0 00:26:07.486 02:49:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:07.486 02:49:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.486 02:49:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:07.486 02:49:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:07.486 [2024-07-11 02:49:32.506619] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:26:07.486 02:49:32 -- common/autotest_common.sh@643 -- # es=22 00:26:07.486 02:49:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:07.486 02:49:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:07.486 02:49:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:07.486 00:26:07.486 real 0m0.083s 00:26:07.486 user 0m0.044s 00:26:07.486 sys 0m0.038s 00:26:07.486 02:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.486 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.486 ************************************ 00:26:07.486 END TEST dd_wrong_blocksize 00:26:07.486 ************************************ 00:26:07.745 02:49:32 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:26:07.745 02:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.745 02:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.745 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:26:07.745 ************************************ 00:26:07.745 START TEST dd_smaller_blocksize 00:26:07.745 ************************************ 00:26:07.745 02:49:32 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:26:07.745 02:49:32 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:07.745 02:49:32 -- common/autotest_common.sh@640 -- # local es=0 00:26:07.745 02:49:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:07.745 02:49:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.745 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.745 02:49:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.745 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.745 02:49:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.745 02:49:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:07.745 02:49:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:07.745 02:49:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:26:07.745 02:49:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:07.745 [2024-07-11 02:49:32.646370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:07.745 [2024-07-11 02:49:32.646615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148622 ] 00:26:07.745 [2024-07-11 02:49:32.796801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.004 [2024-07-11 02:49:32.861667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.004 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:26:08.004 [2024-07-11 02:49:33.022114] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:26:08.004 [2024-07-11 02:49:33.022208] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:08.263 [2024-07-11 02:49:33.151770] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:08.264 02:49:33 -- common/autotest_common.sh@643 -- # es=244 00:26:08.264 02:49:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:08.264 02:49:33 -- common/autotest_common.sh@652 -- # es=116 00:26:08.264 ************************************ 00:26:08.264 END TEST dd_smaller_blocksize 00:26:08.264 ************************************ 00:26:08.264 02:49:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:08.264 02:49:33 -- common/autotest_common.sh@660 -- # es=1 00:26:08.264 02:49:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:08.264 00:26:08.264 real 0m0.675s 00:26:08.264 user 0m0.318s 00:26:08.264 sys 0m0.258s 00:26:08.264 02:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.264 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.264 02:49:33 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:26:08.264 02:49:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:08.264 02:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.264 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.264 ************************************ 00:26:08.264 START TEST dd_invalid_count 00:26:08.264 ************************************ 00:26:08.264 02:49:33 -- common/autotest_common.sh@1104 -- # invalid_count 00:26:08.264 02:49:33 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:08.264 02:49:33 -- common/autotest_common.sh@640 -- # local es=0 00:26:08.264 02:49:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:08.264 02:49:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.264 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.264 02:49:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.264 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.264 02:49:33 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.264 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.264 02:49:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.264 02:49:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:08.264 02:49:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:08.522 [2024-07-11 02:49:33.370436] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:26:08.522 02:49:33 -- common/autotest_common.sh@643 -- # es=22 00:26:08.522 02:49:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:08.522 02:49:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:08.522 02:49:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:08.522 00:26:08.522 real 0m0.091s 00:26:08.522 user 0m0.058s 00:26:08.522 sys 0m0.034s 00:26:08.522 ************************************ 00:26:08.522 END TEST dd_invalid_count 00:26:08.522 ************************************ 00:26:08.522 02:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.522 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.522 02:49:33 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:26:08.522 02:49:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:08.522 02:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.522 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.522 ************************************ 00:26:08.522 START TEST dd_invalid_oflag 00:26:08.522 ************************************ 00:26:08.522 02:49:33 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:26:08.522 02:49:33 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:08.522 02:49:33 -- common/autotest_common.sh@640 -- # local es=0 00:26:08.522 02:49:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:08.522 02:49:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.522 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.522 02:49:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.522 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.522 02:49:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.522 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.522 02:49:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.522 02:49:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:08.522 02:49:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:08.522 [2024-07-11 02:49:33.503810] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:26:08.522 02:49:33 -- common/autotest_common.sh@643 -- # es=22 00:26:08.522 02:49:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:08.522 ************************************ 00:26:08.522 END TEST dd_invalid_oflag 
00:26:08.522 ************************************ 00:26:08.522 02:49:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:08.522 02:49:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:08.522 00:26:08.522 real 0m0.090s 00:26:08.522 user 0m0.054s 00:26:08.522 sys 0m0.036s 00:26:08.522 02:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.522 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.522 02:49:33 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:26:08.522 02:49:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:08.522 02:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.522 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.522 ************************************ 00:26:08.522 START TEST dd_invalid_iflag 00:26:08.522 ************************************ 00:26:08.522 02:49:33 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:26:08.522 02:49:33 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:08.523 02:49:33 -- common/autotest_common.sh@640 -- # local es=0 00:26:08.523 02:49:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:08.523 02:49:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.523 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.523 02:49:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.523 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.523 02:49:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.523 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.523 02:49:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.523 02:49:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:08.523 02:49:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:08.780 [2024-07-11 02:49:33.633425] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:26:08.780 02:49:33 -- common/autotest_common.sh@643 -- # es=22 00:26:08.780 02:49:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:08.780 02:49:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:08.780 ************************************ 00:26:08.780 END TEST dd_invalid_iflag 00:26:08.780 ************************************ 00:26:08.780 02:49:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:08.780 00:26:08.780 real 0m0.081s 00:26:08.780 user 0m0.032s 00:26:08.780 sys 0m0.049s 00:26:08.780 02:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.780 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.780 02:49:33 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:26:08.780 02:49:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:08.780 02:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.780 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:08.780 ************************************ 00:26:08.780 START TEST dd_unknown_flag 00:26:08.780 ************************************ 00:26:08.780 02:49:33 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:26:08.780 02:49:33 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:08.780 02:49:33 -- common/autotest_common.sh@640 -- # local es=0 00:26:08.780 02:49:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:08.780 02:49:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.780 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.780 02:49:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.780 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.780 02:49:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.780 02:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:08.780 02:49:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.780 02:49:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:08.780 02:49:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:08.780 [2024-07-11 02:49:33.775041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:08.780 [2024-07-11 02:49:33.775333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148751 ] 00:26:09.038 [2024-07-11 02:49:33.923298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.038 [2024-07-11 02:49:33.981953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.038 [2024-07-11 02:49:34.063847] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:26:09.038 [2024-07-11 02:49:34.063960] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:26:09.038 [2024-07-11 02:49:34.063994] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:26:09.038 [2024-07-11 02:49:34.064042] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:09.296 [2024-07-11 02:49:34.182998] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:09.296 02:49:34 -- common/autotest_common.sh@643 -- # es=234 00:26:09.296 02:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.296 ************************************ 00:26:09.296 END TEST dd_unknown_flag 00:26:09.296 ************************************ 00:26:09.296 02:49:34 -- common/autotest_common.sh@652 -- # es=106 00:26:09.296 02:49:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:09.296 02:49:34 -- common/autotest_common.sh@660 -- # es=1 00:26:09.296 02:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.296 00:26:09.296 real 0m0.573s 00:26:09.296 user 0m0.291s 00:26:09.296 sys 0m0.182s 00:26:09.296 02:49:34 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:09.296 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.296 02:49:34 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:26:09.296 02:49:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:09.296 02:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.296 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.296 ************************************ 00:26:09.296 START TEST dd_invalid_json 00:26:09.296 ************************************ 00:26:09.296 02:49:34 -- common/autotest_common.sh@1104 -- # invalid_json 00:26:09.296 02:49:34 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:09.296 02:49:34 -- dd/negative_dd.sh@95 -- # : 00:26:09.296 02:49:34 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.296 02:49:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:09.296 02:49:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.296 02:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.296 02:49:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.296 02:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.297 02:49:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.297 02:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.297 02:49:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:09.297 02:49:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:09.297 02:49:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:09.555 [2024-07-11 02:49:34.399779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
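The invalid_json case feeds spdk_dd an empty stream on /dev/fd/62 (the bare ':' in the xtrace above produces no output), so JSON parsing must fail; a hedged sketch of that failing shape next to a minimally parsable one (the config layout is assumed, not taken from this run):
spdk_dd --if=in0 --of=out0 --json <(:)                            # empty stream -> parse error
spdk_dd --if=in0 --of=out0 --json <(echo '{"subsystems": []}')    # smallest well-formed config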
00:26:09.555 [2024-07-11 02:49:34.400057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148785 ] 00:26:09.555 [2024-07-11 02:49:34.548359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.555 [2024-07-11 02:49:34.622726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.555 [2024-07-11 02:49:34.622949] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:26:09.555 [2024-07-11 02:49:34.623016] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:09.555 [2024-07-11 02:49:34.623106] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:09.814 02:49:34 -- common/autotest_common.sh@643 -- # es=234 00:26:09.814 02:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.814 02:49:34 -- common/autotest_common.sh@652 -- # es=106 00:26:09.814 02:49:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:09.814 02:49:34 -- common/autotest_common.sh@660 -- # es=1 00:26:09.814 02:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.814 00:26:09.814 real 0m0.383s 00:26:09.814 user 0m0.190s 00:26:09.814 sys 0m0.094s 00:26:09.814 02:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.814 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.814 ************************************ 00:26:09.814 END TEST dd_invalid_json 00:26:09.814 ************************************ 00:26:09.814 00:26:09.814 real 0m3.088s 00:26:09.814 user 0m1.577s 00:26:09.814 sys 0m1.155s 00:26:09.814 ************************************ 00:26:09.814 END TEST spdk_dd_negative 00:26:09.814 ************************************ 00:26:09.814 02:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.814 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.814 00:26:09.814 real 1m7.660s 00:26:09.814 user 0m39.736s 00:26:09.814 sys 0m17.738s 00:26:09.814 ************************************ 00:26:09.814 END TEST spdk_dd 00:26:09.815 02:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.815 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.815 ************************************ 00:26:09.815 02:49:34 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:26:09.815 02:49:34 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:26:09.815 02:49:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:09.815 02:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.815 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:09.815 ************************************ 00:26:09.815 START TEST blockdev_nvme 00:26:09.815 ************************************ 00:26:09.815 02:49:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:26:10.073 * Looking for test storage... 
00:26:10.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:10.073 02:49:34 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:10.073 02:49:34 -- bdev/nbd_common.sh@6 -- # set -e 00:26:10.073 02:49:34 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:10.073 02:49:34 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:10.073 02:49:34 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:10.073 02:49:34 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:10.073 02:49:34 -- bdev/blockdev.sh@18 -- # : 00:26:10.073 02:49:34 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:26:10.073 02:49:34 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:26:10.073 02:49:34 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:26:10.073 02:49:34 -- bdev/blockdev.sh@672 -- # uname -s 00:26:10.073 02:49:34 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:26:10.073 02:49:34 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:26:10.073 02:49:34 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:26:10.073 02:49:34 -- bdev/blockdev.sh@681 -- # crypto_device= 00:26:10.073 02:49:34 -- bdev/blockdev.sh@682 -- # dek= 00:26:10.073 02:49:34 -- bdev/blockdev.sh@683 -- # env_ctx= 00:26:10.073 02:49:34 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:26:10.073 02:49:34 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:26:10.073 02:49:34 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:26:10.073 02:49:34 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:26:10.073 02:49:34 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:26:10.073 02:49:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=148870 00:26:10.073 02:49:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:10.073 02:49:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:10.073 02:49:34 -- bdev/blockdev.sh@47 -- # waitforlisten 148870 00:26:10.073 02:49:34 -- common/autotest_common.sh@819 -- # '[' -z 148870 ']' 00:26:10.073 02:49:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.073 02:49:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:10.073 02:49:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.073 02:49:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:10.073 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:26:10.074 [2024-07-11 02:49:34.995818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:10.074 [2024-07-11 02:49:34.996604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148870 ] 00:26:10.074 [2024-07-11 02:49:35.143894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.332 [2024-07-11 02:49:35.219148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:10.332 [2024-07-11 02:49:35.219427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.898 02:49:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:10.898 02:49:35 -- common/autotest_common.sh@852 -- # return 0 00:26:10.898 02:49:35 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:26:10.898 02:49:35 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:26:10.898 02:49:35 -- bdev/blockdev.sh@79 -- # local json 00:26:10.898 02:49:35 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:26:10.898 02:49:35 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:10.898 02:49:35 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:26:10.898 02:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.898 02:49:35 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:26:11.157 02:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.157 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@738 -- # cat 00:26:11.157 02:49:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:26:11.157 02:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.157 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:26:11.157 02:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.157 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:11.157 02:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.157 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:26:11.157 02:49:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:26:11.157 02:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.157 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 02:49:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:26:11.157 02:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.157 02:49:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:26:11.157 02:49:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:26:11.157 02:49:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "eb96e9ac-7995-4f5e-a9b8-1832b7e8bca4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eb96e9ac-7995-4f5e-a9b8-1832b7e8bca4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:26:11.157 02:49:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:26:11.157 02:49:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:26:11.157 02:49:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:26:11.157 02:49:36 -- bdev/blockdev.sh@752 -- # killprocess 148870 00:26:11.157 02:49:36 -- common/autotest_common.sh@926 -- # '[' -z 148870 ']' 00:26:11.157 02:49:36 -- common/autotest_common.sh@930 -- # kill -0 148870 00:26:11.157 02:49:36 -- common/autotest_common.sh@931 -- # uname 00:26:11.157 02:49:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.157 02:49:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148870 00:26:11.416 02:49:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:11.416 02:49:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:11.416 killing process with pid 148870 00:26:11.416 02:49:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148870' 00:26:11.416 02:49:36 -- common/autotest_common.sh@945 -- # kill 148870 00:26:11.416 02:49:36 -- common/autotest_common.sh@950 -- # wait 148870 00:26:11.675 02:49:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.675 02:49:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:26:11.675 02:49:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:11.675 02:49:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:11.675 02:49:36 -- common/autotest_common.sh@10 -- # set +x 00:26:11.675 ************************************ 00:26:11.675 START TEST bdev_hello_world 00:26:11.675 ************************************ 00:26:11.675 02:49:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:26:11.675 [2024-07-11 02:49:36.749909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:11.675 [2024-07-11 02:49:36.750202] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148937 ] 00:26:11.934 [2024-07-11 02:49:36.898569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.934 [2024-07-11 02:49:36.977847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.192 [2024-07-11 02:49:37.191265] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:12.192 [2024-07-11 02:49:37.191363] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:26:12.192 [2024-07-11 02:49:37.191428] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:12.192 [2024-07-11 02:49:37.194001] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:12.192 [2024-07-11 02:49:37.194560] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:12.192 [2024-07-11 02:49:37.194610] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:12.192 [2024-07-11 02:49:37.194874] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:26:12.192 00:26:12.192 [2024-07-11 02:49:37.194930] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:12.451 00:26:12.451 real 0m0.744s 00:26:12.452 user 0m0.470s 00:26:12.452 sys 0m0.175s 00:26:12.452 ************************************ 00:26:12.452 END TEST bdev_hello_world 00:26:12.452 ************************************ 00:26:12.452 02:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.452 02:49:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 02:49:37 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:26:12.452 02:49:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:12.452 02:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:12.452 02:49:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 ************************************ 00:26:12.452 START TEST bdev_bounds 00:26:12.452 ************************************ 00:26:12.452 02:49:37 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:26:12.452 02:49:37 -- bdev/blockdev.sh@288 -- # bdevio_pid=148975 00:26:12.452 Process bdevio pid: 148975 00:26:12.452 02:49:37 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:12.452 02:49:37 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:12.452 02:49:37 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 148975' 00:26:12.452 02:49:37 -- bdev/blockdev.sh@291 -- # waitforlisten 148975 00:26:12.452 02:49:37 -- common/autotest_common.sh@819 -- # '[' -z 148975 ']' 00:26:12.452 02:49:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.452 02:49:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.452 02:49:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:12.452 02:49:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.452 02:49:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.710 [2024-07-11 02:49:37.544587] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:12.710 [2024-07-11 02:49:37.544810] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148975 ] 00:26:12.710 [2024-07-11 02:49:37.696092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:12.710 [2024-07-11 02:49:37.780854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.710 [2024-07-11 02:49:37.780987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.710 [2024-07-11 02:49:37.780994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.647 02:49:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:13.647 02:49:38 -- common/autotest_common.sh@852 -- # return 0 00:26:13.647 02:49:38 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:13.647 I/O targets: 00:26:13.647 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:26:13.647 00:26:13.647 00:26:13.647 CUnit - A unit testing framework for C - Version 2.1-3 00:26:13.647 http://cunit.sourceforge.net/ 00:26:13.647 00:26:13.647 00:26:13.647 Suite: bdevio tests on: Nvme0n1 00:26:13.647 Test: blockdev write read block ...passed 00:26:13.647 Test: blockdev write zeroes read block ...passed 00:26:13.647 Test: blockdev write zeroes read no split ...passed 00:26:13.647 Test: blockdev write zeroes read split ...passed 00:26:13.647 Test: blockdev write zeroes read split partial ...passed 00:26:13.647 Test: blockdev reset ...[2024-07-11 02:49:38.657879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:13.647 [2024-07-11 02:49:38.662428] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:13.647 passed 00:26:13.647 Test: blockdev write read 8 blocks ...passed 00:26:13.647 Test: blockdev write read size > 128k ...passed 00:26:13.647 Test: blockdev write read invalid size ...passed 00:26:13.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:13.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:13.647 Test: blockdev write read max offset ...passed 00:26:13.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:13.647 Test: blockdev writev readv 8 blocks ...passed 00:26:13.647 Test: blockdev writev readv 30 x 1block ...passed 00:26:13.647 Test: blockdev writev readv block ...passed 00:26:13.647 Test: blockdev writev readv size > 128k ...passed 00:26:13.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:13.647 Test: blockdev comparev and writev ...[2024-07-11 02:49:38.668358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x6300d000 len:0x1000 00:26:13.647 passed 00:26:13.647 Test: blockdev nvme passthru rw ...[2024-07-11 02:49:38.668450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:13.647 passed 00:26:13.647 Test: blockdev nvme passthru vendor specific ...passed 00:26:13.647 Test: blockdev nvme admin passthru ...[2024-07-11 02:49:38.669109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:13.647 [2024-07-11 02:49:38.669166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:13.647 passed 00:26:13.647 Test: blockdev copy ...passed 00:26:13.647 00:26:13.647 Run Summary: Type Total Ran Passed Failed Inactive 00:26:13.647 suites 1 1 n/a 0 0 00:26:13.647 tests 23 23 23 0 0 00:26:13.647 asserts 152 152 152 0 n/a 00:26:13.647 00:26:13.647 Elapsed time = 0.062 seconds 00:26:13.647 0 00:26:13.647 02:49:38 -- bdev/blockdev.sh@293 -- # killprocess 148975 00:26:13.647 02:49:38 -- common/autotest_common.sh@926 -- # '[' -z 148975 ']' 00:26:13.647 02:49:38 -- common/autotest_common.sh@930 -- # kill -0 148975 00:26:13.647 02:49:38 -- common/autotest_common.sh@931 -- # uname 00:26:13.647 02:49:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.647 02:49:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148975 00:26:13.647 02:49:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.647 02:49:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.647 killing process with pid 148975 00:26:13.647 02:49:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148975' 00:26:13.647 02:49:38 -- common/autotest_common.sh@945 -- # kill 148975 00:26:13.647 02:49:38 -- common/autotest_common.sh@950 -- # wait 148975 00:26:13.906 02:49:38 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:26:13.906 00:26:13.906 real 0m1.429s 00:26:13.906 user 0m3.768s 00:26:13.906 sys 0m0.259s 00:26:13.906 02:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.906 ************************************ 00:26:13.906 END TEST bdev_bounds 00:26:13.906 ************************************ 00:26:13.906 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.906 02:49:38 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:26:13.906 02:49:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:26:13.906 02:49:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.906 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.906 ************************************ 00:26:13.906 START TEST bdev_nbd 00:26:13.906 ************************************ 00:26:13.906 02:49:38 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:26:13.906 02:49:38 -- bdev/blockdev.sh@298 -- # uname -s 00:26:13.906 02:49:38 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:26:13.906 02:49:38 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:13.906 02:49:38 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:13.906 02:49:38 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:26:13.906 02:49:38 -- bdev/blockdev.sh@302 -- # local bdev_all 00:26:13.906 02:49:38 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:26:13.906 02:49:38 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:26:13.906 02:49:38 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:26:13.906 02:49:38 -- bdev/blockdev.sh@309 -- # local nbd_all 00:26:13.906 02:49:38 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:26:13.906 02:49:38 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:26:13.906 02:49:38 -- bdev/blockdev.sh@312 -- # local nbd_list 00:26:13.906 02:49:38 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:26:13.906 02:49:38 -- bdev/blockdev.sh@313 -- # local bdev_list 00:26:13.906 02:49:38 -- bdev/blockdev.sh@316 -- # nbd_pid=149025 00:26:13.906 02:49:38 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:13.906 02:49:38 -- bdev/blockdev.sh@318 -- # waitforlisten 149025 /var/tmp/spdk-nbd.sock 00:26:13.906 02:49:38 -- common/autotest_common.sh@819 -- # '[' -z 149025 ']' 00:26:13.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:13.906 02:49:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:13.906 02:49:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:13.906 02:49:38 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:13.906 02:49:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:13.906 02:49:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:13.906 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:26:14.165 [2024-07-11 02:49:39.030818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
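With bdev_svc listening on /var/tmp/spdk-nbd.sock, the export/smoke-read/teardown cycle this test performs can be sketched from the commands that follow:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct     # same probe read waitfornbd issues below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0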
00:26:14.165 [2024-07-11 02:49:39.031033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.165 [2024-07-11 02:49:39.171857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.424 [2024-07-11 02:49:39.259503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.991 02:49:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:14.991 02:49:40 -- common/autotest_common.sh@852 -- # return 0 00:26:14.991 02:49:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@24 -- # local i 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:14.991 02:49:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:26:15.250 02:49:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:15.250 02:49:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:15.250 02:49:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:15.250 02:49:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:15.250 02:49:40 -- common/autotest_common.sh@857 -- # local i 00:26:15.250 02:49:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:15.250 02:49:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:15.250 02:49:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:15.250 02:49:40 -- common/autotest_common.sh@861 -- # break 00:26:15.250 02:49:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:15.250 02:49:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:15.250 02:49:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:15.250 1+0 records in 00:26:15.250 1+0 records out 00:26:15.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381083 s, 10.7 MB/s 00:26:15.250 02:49:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:15.250 02:49:40 -- common/autotest_common.sh@874 -- # size=4096 00:26:15.250 02:49:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:15.508 02:49:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:15.508 02:49:40 -- common/autotest_common.sh@877 -- # return 0 00:26:15.508 02:49:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:15.508 02:49:40 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:15.508 02:49:40 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:15.767 02:49:40 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:15.767 { 00:26:15.767 "nbd_device": "/dev/nbd0", 00:26:15.767 "bdev_name": "Nvme0n1" 00:26:15.767 } 00:26:15.767 ]' 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:15.767 { 00:26:15.767 "nbd_device": "/dev/nbd0", 00:26:15.767 "bdev_name": "Nvme0n1" 00:26:15.767 } 00:26:15.767 ]' 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@51 -- # local i 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:15.767 02:49:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@41 -- # break 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@45 -- # return 0 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.025 02:49:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@65 -- # true 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@65 -- # count=0 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@122 -- # count=0 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@127 -- # return 0 00:26:16.284 02:49:41 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
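The nbd_disks_name assignment above is the standard pattern for recovering device paths from the nbd_get_disks JSON: pipe the RPC output through jq. The same query stands alone as:

    # List every NBD device the target currently exports.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    # prints one path per line, e.g. /dev/nbd0; empty output when nothing is exported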
00:26:16.284 02:49:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@12 -- # local i 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:16.284 02:49:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:26:16.543 /dev/nbd0 00:26:16.543 02:49:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:16.543 02:49:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:16.543 02:49:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:16.543 02:49:41 -- common/autotest_common.sh@857 -- # local i 00:26:16.543 02:49:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:16.543 02:49:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:16.543 02:49:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:16.543 02:49:41 -- common/autotest_common.sh@861 -- # break 00:26:16.802 02:49:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:16.802 02:49:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:16.802 02:49:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:16.802 1+0 records in 00:26:16.802 1+0 records out 00:26:16.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578684 s, 7.1 MB/s 00:26:16.802 02:49:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:16.802 02:49:41 -- common/autotest_common.sh@874 -- # size=4096 00:26:16.802 02:49:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:16.802 02:49:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:16.802 02:49:41 -- common/autotest_common.sh@877 -- # return 0 00:26:16.802 02:49:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:16.802 02:49:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:16.802 02:49:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:16.802 02:49:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.802 02:49:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:17.061 { 00:26:17.061 "nbd_device": "/dev/nbd0", 00:26:17.061 "bdev_name": "Nvme0n1" 00:26:17.061 } 00:26:17.061 ]' 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:17.061 { 00:26:17.061 "nbd_device": "/dev/nbd0", 00:26:17.061 "bdev_name": "Nvme0n1" 00:26:17.061 } 00:26:17.061 ]' 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@65 -- # count=1 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@66 -- # echo 1 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@95 -- # count=1 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
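The waitfornbd helper exercised above retries until the kernel has registered the new device in /proc/partitions, then issues one direct 4 KiB read as a smoke test. A condensed sketch (loop bounds and sleep interval are illustrative):

    nbd=nbd0
    for i in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break   # device is registered
        sleep 0.1
    done
    dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct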
00:26:17.061 02:49:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:17.061 256+0 records in 00:26:17.061 256+0 records out 00:26:17.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760304 s, 138 MB/s 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:17.061 02:49:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:17.061 256+0 records in 00:26:17.061 256+0 records out 00:26:17.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647857 s, 16.2 MB/s 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@51 -- # local i 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:17.061 02:49:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@41 -- # break 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@45 -- # return 0 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.320 02:49:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:17.578 02:49:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:17.578 02:49:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:17.578 02:49:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
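The data-integrity round trip above is a plain dd/cmp cycle: 1 MiB of random data goes through the NBD device with O_DIRECT, then the file is byte-compared against the device. Standalone, with the same sizes as the trace:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through the NBD export
    cmp -b -n 1M $tmp /dev/nbd0                             # non-zero exit on any mismatch
    rm $tmp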
00:26:17.835 02:49:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@65 -- # true 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@65 -- # count=0 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@104 -- # count=0 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@109 -- # return 0 00:26:17.835 02:49:42 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:17.835 02:49:42 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:18.093 malloc_lvol_verify 00:26:18.093 02:49:42 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:18.350 e8e38204-6864-47c9-bb50-ee90f8cd2279 00:26:18.350 02:49:43 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:18.619 1534c683-817c-4b9e-8200-b4726c9c98dd 00:26:18.619 02:49:43 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:18.892 /dev/nbd0 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:18.892 mke2fs 1.45.5 (07-Jan-2020) 00:26:18.892 00:26:18.892 Filesystem too small for a journal 00:26:18.892 Creating filesystem with 1024 4k blocks and 1024 inodes 00:26:18.892 00:26:18.892 Allocating group tables: 0/1 done 00:26:18.892 Writing inode tables: 0/1 done 00:26:18.892 Writing superblocks and filesystem accounting information: 0/1 done 00:26:18.892 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@51 -- # local i 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:18.892 02:49:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:19.149 02:49:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:19.150 02:49:44 -- bdev/nbd_common.sh@41 -- # break 00:26:19.150 02:49:44 -- bdev/nbd_common.sh@45 -- # return 0 00:26:19.150 02:49:44 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:19.150 02:49:44 -- 
bdev/nbd_common.sh@147 -- # return 0 00:26:19.150 02:49:44 -- bdev/blockdev.sh@324 -- # killprocess 149025 00:26:19.150 02:49:44 -- common/autotest_common.sh@926 -- # '[' -z 149025 ']' 00:26:19.150 02:49:44 -- common/autotest_common.sh@930 -- # kill -0 149025 00:26:19.150 02:49:44 -- common/autotest_common.sh@931 -- # uname 00:26:19.150 02:49:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:19.150 02:49:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149025 00:26:19.150 02:49:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:19.150 02:49:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:19.150 02:49:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149025' 00:26:19.150 killing process with pid 149025 00:26:19.150 02:49:44 -- common/autotest_common.sh@945 -- # kill 149025 00:26:19.150 02:49:44 -- common/autotest_common.sh@950 -- # wait 149025 00:26:19.408 02:49:44 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:26:19.408 00:26:19.408 real 0m5.393s 00:26:19.408 user 0m8.571s 00:26:19.408 sys 0m1.072s 00:26:19.408 02:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.408 02:49:44 -- common/autotest_common.sh@10 -- # set +x 00:26:19.408 ************************************ 00:26:19.408 END TEST bdev_nbd 00:26:19.408 ************************************ 00:26:19.408 02:49:44 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:26:19.408 02:49:44 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:26:19.408 skipping fio tests on NVMe due to multi-ns failures. 00:26:19.408 02:49:44 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:26:19.408 02:49:44 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:19.408 02:49:44 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:19.408 02:49:44 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:19.408 02:49:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.408 02:49:44 -- common/autotest_common.sh@10 -- # set +x 00:26:19.408 ************************************ 00:26:19.408 START TEST bdev_verify 00:26:19.408 ************************************ 00:26:19.408 02:49:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:19.408 [2024-07-11 02:49:44.467653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:19.408 [2024-07-11 02:49:44.468372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149243 ] 00:26:19.667 [2024-07-11 02:49:44.609394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:19.667 [2024-07-11 02:49:44.685407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.667 [2024-07-11 02:49:44.685418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.925 Running I/O for 5 seconds... 
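bdev_verify is a single bdevperf run in verify mode; the full command appears in the run_test line above. For reference, with paths as in the trace:

    # 128-deep queue, 4 KiB I/Os, verify workload for 5 s on cores 0-1 (-m 0x3);
    # -C is passed exactly as in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3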
00:26:25.187 00:26:25.187 Latency(us) 00:26:25.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:25.187 Verification LBA range: start 0x0 length 0xa0000 00:26:25.187 Nvme0n1 : 5.01 18483.64 72.20 0.00 0.00 6893.88 290.44 14894.55 00:26:25.187 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:25.187 Verification LBA range: start 0xa0000 length 0xa0000 00:26:25.187 Nvme0n1 : 5.01 18482.80 72.20 0.00 0.00 6893.77 258.79 16681.89 00:26:25.187 =================================================================================================================== 00:26:25.187 Total : 36966.44 144.40 0.00 0.00 6893.83 258.79 16681.89 00:26:35.164 00:26:35.164 real 0m14.653s 00:26:35.164 user 0m28.513s 00:26:35.164 sys 0m0.306s 00:26:35.164 ************************************ 00:26:35.164 END TEST bdev_verify 00:26:35.164 02:49:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.164 02:49:59 -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 ************************************ 00:26:35.164 02:49:59 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:35.164 02:49:59 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:35.164 02:49:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.164 02:49:59 -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 ************************************ 00:26:35.164 START TEST bdev_verify_big_io 00:26:35.164 ************************************ 00:26:35.164 02:49:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:35.164 [2024-07-11 02:49:59.178088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:35.164 [2024-07-11 02:49:59.178826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149396 ] 00:26:35.164 [2024-07-11 02:49:59.322305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:35.164 [2024-07-11 02:49:59.399054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.164 [2024-07-11 02:49:59.399065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.165 Running I/O for 5 seconds... 
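A quick sanity check on the latency table above: IOPS times the 4 KiB I/O size should reproduce the MiB/s column.

    # 18483.64 IOPS * 4096 B / 1 MiB ~= 72.20 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f\n", 18483.64 * 4096 / 1048576 }'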
00:26:40.494 00:26:40.494 Latency(us) 00:26:40.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.494 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:40.494 Verification LBA range: start 0x0 length 0xa000 00:26:40.494 Nvme0n1 : 5.04 1418.49 88.66 0.00 0.00 88925.48 573.44 120109.61 00:26:40.494 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:40.494 Verification LBA range: start 0xa000 length 0xa000 00:26:40.494 Nvme0n1 : 5.03 1751.78 109.49 0.00 0.00 72042.78 525.03 104380.97 00:26:40.494 =================================================================================================================== 00:26:40.494 Total : 3170.27 198.14 0.00 0.00 79603.88 525.03 120109.61 00:26:40.494 00:26:40.494 real 0m6.081s 00:26:40.494 user 0m11.422s 00:26:40.494 sys 0m0.242s 00:26:40.494 02:50:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.494 02:50:05 -- common/autotest_common.sh@10 -- # set +x 00:26:40.494 ************************************ 00:26:40.494 END TEST bdev_verify_big_io 00:26:40.494 ************************************ 00:26:40.494 02:50:05 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:40.494 02:50:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:40.494 02:50:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.494 02:50:05 -- common/autotest_common.sh@10 -- # set +x 00:26:40.494 ************************************ 00:26:40.494 START TEST bdev_write_zeroes 00:26:40.494 ************************************ 00:26:40.494 02:50:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:40.494 [2024-07-11 02:50:05.316368] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:40.494 [2024-07-11 02:50:05.316700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149504 ] 00:26:40.494 [2024-07-11 02:50:05.465115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.494 [2024-07-11 02:50:05.556744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.753 Running I/O for 1 seconds... 
00:26:42.125 00:26:42.125 Latency(us) 00:26:42.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.125 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:42.125 Nvme0n1 : 1.00 55540.87 216.96 0.00 0.00 2298.10 726.11 11141.12 00:26:42.125 =================================================================================================================== 00:26:42.125 Total : 55540.87 216.96 0.00 0.00 2298.10 726.11 11141.12 00:26:42.125 00:26:42.125 real 0m1.794s 00:26:42.125 user 0m1.480s 00:26:42.125 sys 0m0.215s 00:26:42.125 02:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.125 02:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.125 ************************************ 00:26:42.125 END TEST bdev_write_zeroes 00:26:42.125 ************************************ 00:26:42.125 02:50:07 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:42.125 02:50:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:42.125 02:50:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:42.125 02:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.125 ************************************ 00:26:42.125 START TEST bdev_json_nonenclosed 00:26:42.125 ************************************ 00:26:42.125 02:50:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:42.125 [2024-07-11 02:50:07.161316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:42.125 [2024-07-11 02:50:07.161732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149548 ] 00:26:42.384 [2024-07-11 02:50:07.322104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.384 [2024-07-11 02:50:07.400942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.384 [2024-07-11 02:50:07.401230] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
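bdev_json_nonenclosed feeds bdevperf a config whose top-level JSON value is not an object and expects initialization to fail with exactly the ERROR above. A hypothetical minimal reproduction (the real nonenclosed.json lives in test/bdev; this shape is only illustrative):

    echo '[]' > /tmp/nonenclosed.json        # top-level array, not an object
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
    # expected: "Invalid JSON configuration: not enclosed in {}." and a non-zero exit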
00:26:42.384 [2024-07-11 02:50:07.401283] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:42.643 00:26:42.643 real 0m0.406s 00:26:42.643 user 0m0.186s 00:26:42.643 sys 0m0.119s 00:26:42.643 02:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.643 02:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.643 ************************************ 00:26:42.643 END TEST bdev_json_nonenclosed 00:26:42.643 ************************************ 00:26:42.643 02:50:07 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:42.643 02:50:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:42.643 02:50:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:42.643 02:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.643 ************************************ 00:26:42.643 START TEST bdev_json_nonarray 00:26:42.643 ************************************ 00:26:42.643 02:50:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:42.643 [2024-07-11 02:50:07.615412] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:42.643 [2024-07-11 02:50:07.615669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149570 ] 00:26:42.901 [2024-07-11 02:50:07.769411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.901 [2024-07-11 02:50:07.860927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.901 [2024-07-11 02:50:07.861202] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:42.901 [2024-07-11 02:50:07.861266] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:42.901 00:26:42.901 real 0m0.418s 00:26:42.901 user 0m0.214s 00:26:42.901 sys 0m0.101s 00:26:42.901 02:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.901 02:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.901 ************************************ 00:26:42.901 END TEST bdev_json_nonarray 00:26:42.901 ************************************ 00:26:43.160 02:50:08 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:26:43.160 02:50:08 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:26:43.160 02:50:08 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:26:43.160 02:50:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:26:43.160 02:50:08 -- bdev/blockdev.sh@809 -- # cleanup 00:26:43.161 02:50:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:43.161 02:50:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:43.161 02:50:08 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:26:43.161 02:50:08 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:26:43.161 02:50:08 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:26:43.161 02:50:08 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:26:43.161 00:26:43.161 real 0m33.173s 00:26:43.161 user 0m56.807s 00:26:43.161 sys 0m3.142s 00:26:43.161 02:50:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.161 02:50:08 -- common/autotest_common.sh@10 -- # set +x 00:26:43.161 ************************************ 00:26:43.161 END TEST blockdev_nvme 00:26:43.161 ************************************ 00:26:43.161 02:50:08 -- spdk/autotest.sh@219 -- # uname -s 00:26:43.161 02:50:08 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:26:43.161 02:50:08 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:43.161 02:50:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:43.161 02:50:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:43.161 02:50:08 -- common/autotest_common.sh@10 -- # set +x 00:26:43.161 ************************************ 00:26:43.161 START TEST blockdev_nvme_gpt 00:26:43.161 ************************************ 00:26:43.161 02:50:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:43.161 * Looking for test storage... 
00:26:43.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:43.161 02:50:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:43.161 02:50:08 -- bdev/nbd_common.sh@6 -- # set -e 00:26:43.161 02:50:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:43.161 02:50:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:43.161 02:50:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:43.161 02:50:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:43.161 02:50:08 -- bdev/blockdev.sh@18 -- # : 00:26:43.161 02:50:08 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:26:43.161 02:50:08 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:26:43.161 02:50:08 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:26:43.161 02:50:08 -- bdev/blockdev.sh@672 -- # uname -s 00:26:43.161 02:50:08 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:26:43.161 02:50:08 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:26:43.161 02:50:08 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:26:43.161 02:50:08 -- bdev/blockdev.sh@681 -- # crypto_device= 00:26:43.161 02:50:08 -- bdev/blockdev.sh@682 -- # dek= 00:26:43.161 02:50:08 -- bdev/blockdev.sh@683 -- # env_ctx= 00:26:43.161 02:50:08 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:26:43.161 02:50:08 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:26:43.161 02:50:08 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:26:43.161 02:50:08 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:26:43.161 02:50:08 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:26:43.161 02:50:08 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149653 00:26:43.161 02:50:08 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:43.161 02:50:08 -- bdev/blockdev.sh@47 -- # waitforlisten 149653 00:26:43.161 02:50:08 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:43.161 02:50:08 -- common/autotest_common.sh@819 -- # '[' -z 149653 ']' 00:26:43.161 02:50:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.161 02:50:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:43.161 02:50:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.161 02:50:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:43.161 02:50:08 -- common/autotest_common.sh@10 -- # set +x 00:26:43.161 [2024-07-11 02:50:08.209723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:43.161 [2024-07-11 02:50:08.210111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149653 ] 00:26:43.419 [2024-07-11 02:50:08.367912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.419 [2024-07-11 02:50:08.458325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:43.419 [2024-07-11 02:50:08.458572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.354 02:50:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:44.354 02:50:09 -- common/autotest_common.sh@852 -- # return 0 00:26:44.355 02:50:09 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:26:44.355 02:50:09 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:26:44.355 02:50:09 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:44.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:44.355 Waiting for block devices as requested 00:26:44.355 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:44.613 02:50:09 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:26:44.613 02:50:09 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:26:44.613 02:50:09 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:26:44.613 02:50:09 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:26:44.613 02:50:09 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:26:44.613 02:50:09 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:26:44.613 02:50:09 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:26:44.613 02:50:09 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.613 02:50:09 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:26:44.613 02:50:09 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:26:44.613 02:50:09 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:26:44.613 02:50:09 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:26:44.613 02:50:09 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:26:44.613 02:50:09 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:26:44.613 02:50:09 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:26:44.613 02:50:09 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:26:44.613 02:50:09 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:26:44.613 BYT; 00:26:44.613 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:26:44.613 02:50:09 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:26:44.613 BYT; 00:26:44.613 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:26:44.613 02:50:09 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:26:44.613 02:50:09 -- bdev/blockdev.sh@114 -- # break 00:26:44.613 02:50:09 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:26:44.613 02:50:09 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:26:44.613 02:50:09 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:44.613 02:50:09 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:26:45.547 02:50:10 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:26:45.547 02:50:10 -- scripts/common.sh@410 -- # local spdk_guid 00:26:45.547 02:50:10 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:45.547 02:50:10 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:45.547 02:50:10 -- scripts/common.sh@415 -- # IFS='()' 00:26:45.547 02:50:10 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:26:45.547 02:50:10 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:45.547 02:50:10 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:26:45.547 02:50:10 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:45.547 02:50:10 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:45.547 02:50:10 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:45.547 02:50:10 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:26:45.547 02:50:10 -- scripts/common.sh@422 -- # local spdk_guid 00:26:45.547 02:50:10 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:45.547 02:50:10 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:45.547 02:50:10 -- scripts/common.sh@427 -- # IFS='()' 00:26:45.547 02:50:10 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:26:45.547 02:50:10 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:45.547 02:50:10 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:26:45.547 02:50:10 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:45.547 02:50:10 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:45.547 02:50:10 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:45.547 02:50:10 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:26:46.481 The operation has completed successfully. 00:26:46.481 02:50:11 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:26:47.431 The operation has completed successfully. 
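The two "completed successfully" lines are sgdisk stamping SPDK's partition-type GUIDs, which get_spdk_gpt/get_spdk_gpt_old above scraped out of module/bdev/gpt/gpt.h, onto the freshly created partitions. The commands in isolation, GUIDs exactly as printed in the trace:

    # -t sets the partition type GUID, -u the unique partition GUID
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1   # SPDK_TEST_first
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1   # SPDK_TEST_second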
00:26:47.431 02:50:12 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:47.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:47.947 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:48.881 02:50:13 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 [] 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:26:48.881 02:50:13 -- bdev/blockdev.sh@79 -- # local json 00:26:48.881 02:50:13 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:26:48.881 02:50:13 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:48.881 02:50:13 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@738 -- # cat 00:26:48.881 02:50:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.881 02:50:13 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:26:48.881 02:50:13 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:26:48.881 02:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.881 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:26:48.881 02:50:13 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:26:48.881 02:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:49.139 02:50:13 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:26:49.139 02:50:13 -- bdev/blockdev.sh@747 -- # jq -r .name 00:26:49.139 02:50:13 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:26:49.139 02:50:14 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:26:49.139 02:50:14 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:26:49.139 02:50:14 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:26:49.139 02:50:14 -- bdev/blockdev.sh@752 -- # killprocess 149653 00:26:49.139 02:50:14 -- common/autotest_common.sh@926 -- # '[' -z 149653 ']' 00:26:49.139 02:50:14 -- common/autotest_common.sh@930 -- # kill -0 149653 00:26:49.139 02:50:14 -- common/autotest_common.sh@931 -- # uname 00:26:49.139 02:50:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:49.139 02:50:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149653 00:26:49.139 02:50:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:49.139 02:50:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:49.139 02:50:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149653' 00:26:49.139 killing process with pid 149653 00:26:49.139 02:50:14 -- common/autotest_common.sh@945 -- # kill 149653 00:26:49.139 02:50:14 -- common/autotest_common.sh@950 -- # wait 149653 00:26:49.398 02:50:14 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:49.398 02:50:14 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:26:49.398 02:50:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:49.398 02:50:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.398 02:50:14 -- common/autotest_common.sh@10 -- # set +x 00:26:49.656 ************************************ 00:26:49.656 START TEST bdev_hello_world 00:26:49.656 ************************************ 00:26:49.656 02:50:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:26:49.657 [2024-07-11 02:50:14.548457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:49.657 [2024-07-11 02:50:14.548734] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150193 ] 00:26:49.657 [2024-07-11 02:50:14.695428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.915 [2024-07-11 02:50:14.772901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.915 [2024-07-11 02:50:14.982050] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:49.915 [2024-07-11 02:50:14.982188] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:26:49.915 [2024-07-11 02:50:14.982263] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:49.915 [2024-07-11 02:50:14.984764] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:49.915 [2024-07-11 02:50:14.985414] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:49.915 [2024-07-11 02:50:14.985463] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:49.915 [2024-07-11 02:50:14.985707] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:26:49.915 00:26:49.915 [2024-07-11 02:50:14.985772] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:50.178 ************************************ 00:26:50.178 END TEST bdev_hello_world 00:26:50.178 ************************************ 00:26:50.178 00:26:50.178 real 0m0.737s 00:26:50.178 user 0m0.453s 00:26:50.178 sys 0m0.185s 00:26:50.178 02:50:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.178 02:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:50.445 02:50:15 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:26:50.445 02:50:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:50.445 02:50:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:50.445 02:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:50.445 ************************************ 00:26:50.445 START TEST bdev_bounds 00:26:50.445 ************************************ 00:26:50.445 Process bdevio pid: 150232 00:26:50.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.445 02:50:15 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:26:50.445 02:50:15 -- bdev/blockdev.sh@288 -- # bdevio_pid=150232 00:26:50.445 02:50:15 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:50.445 02:50:15 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:50.445 02:50:15 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150232' 00:26:50.445 02:50:15 -- bdev/blockdev.sh@291 -- # waitforlisten 150232 00:26:50.445 02:50:15 -- common/autotest_common.sh@819 -- # '[' -z 150232 ']' 00:26:50.445 02:50:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.445 02:50:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:50.445 02:50:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
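For reference, the bdev_hello_world test that just finished above is a single hello_bdev invocation against the first GPT partition; the NOTICE lines mark its open/write/read stages:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1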
00:26:50.445 02:50:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:50.445 02:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:50.445 [2024-07-11 02:50:15.329659] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:50.445 [2024-07-11 02:50:15.329892] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150232 ] 00:26:50.445 [2024-07-11 02:50:15.479843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.702 [2024-07-11 02:50:15.566907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.702 [2024-07-11 02:50:15.567060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.702 [2024-07-11 02:50:15.567056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.636 02:50:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.636 02:50:16 -- common/autotest_common.sh@852 -- # return 0 00:26:51.636 02:50:16 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:51.636 I/O targets: 00:26:51.636 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:26:51.636 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:26:51.636 00:26:51.636 00:26:51.636 CUnit - A unit testing framework for C - Version 2.1-3 00:26:51.636 http://cunit.sourceforge.net/ 00:26:51.636 00:26:51.636 00:26:51.636 Suite: bdevio tests on: Nvme0n1p2 00:26:51.636 Test: blockdev write read block ...passed 00:26:51.636 Test: blockdev write zeroes read block ...passed 00:26:51.636 Test: blockdev write zeroes read no split ...passed 00:26:51.636 Test: blockdev write zeroes read split ...passed 00:26:51.636 Test: blockdev write zeroes read split partial ...passed 00:26:51.636 Test: blockdev reset ...[2024-07-11 02:50:16.476672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:51.636 [2024-07-11 02:50:16.478803] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:51.636 passed 00:26:51.636 Test: blockdev write read 8 blocks ...passed 00:26:51.636 Test: blockdev write read size > 128k ...passed 00:26:51.636 Test: blockdev write read invalid size ...passed 00:26:51.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:51.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:51.636 Test: blockdev write read max offset ...passed 00:26:51.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:51.636 Test: blockdev writev readv 8 blocks ...passed 00:26:51.636 Test: blockdev writev readv 30 x 1block ...passed 00:26:51.636 Test: blockdev writev readv block ...passed 00:26:51.636 Test: blockdev writev readv size > 128k ...passed 00:26:51.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:51.636 Test: blockdev comparev and writev ...[2024-07-11 02:50:16.484245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x8b80b000 len:0x1000 00:26:51.636 [2024-07-11 02:50:16.484352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:51.636 passed 00:26:51.636 Test: blockdev nvme passthru rw ...passed 00:26:51.636 Test: blockdev nvme passthru vendor specific ...passed 00:26:51.636 Test: blockdev nvme admin passthru ...passed 00:26:51.636 Test: blockdev copy ...passed 00:26:51.636 Suite: bdevio tests on: Nvme0n1p1 00:26:51.636 Test: blockdev write read block ...passed 00:26:51.636 Test: blockdev write zeroes read block ...passed 00:26:51.636 Test: blockdev write zeroes read no split ...passed 00:26:51.636 Test: blockdev write zeroes read split ...passed 00:26:51.636 Test: blockdev write zeroes read split partial ...passed 00:26:51.636 Test: blockdev reset ...[2024-07-11 02:50:16.496769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:51.636 [2024-07-11 02:50:16.498581] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:51.636 passed 00:26:51.636 Test: blockdev write read 8 blocks ...passed 00:26:51.636 Test: blockdev write read size > 128k ...passed 00:26:51.636 Test: blockdev write read invalid size ...passed 00:26:51.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:51.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:51.636 Test: blockdev write read max offset ...passed 00:26:51.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:51.636 Test: blockdev writev readv 8 blocks ...passed 00:26:51.636 Test: blockdev writev readv 30 x 1block ...passed 00:26:51.636 Test: blockdev writev readv block ...passed 00:26:51.636 Test: blockdev writev readv size > 128k ...passed 00:26:51.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:51.636 Test: blockdev comparev and writev ...[2024-07-11 02:50:16.503644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x8b80d000 len:0x1000 00:26:51.636 [2024-07-11 02:50:16.503734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:51.636 passed 00:26:51.636 Test: blockdev nvme passthru rw ...passed 00:26:51.636 Test: blockdev nvme passthru vendor specific ...passed 00:26:51.636 Test: blockdev nvme admin passthru ...passed 00:26:51.636 Test: blockdev copy ...passed 00:26:51.636 00:26:51.636 Run Summary: Type Total Ran Passed Failed Inactive 00:26:51.636 suites 2 2 n/a 0 0 00:26:51.636 tests 46 46 46 0 0 00:26:51.636 asserts 284 284 284 0 n/a 00:26:51.636 00:26:51.636 Elapsed time = 0.095 seconds 00:26:51.636 0 00:26:51.636 02:50:16 -- bdev/blockdev.sh@293 -- # killprocess 150232 00:26:51.636 02:50:16 -- common/autotest_common.sh@926 -- # '[' -z 150232 ']' 00:26:51.636 02:50:16 -- common/autotest_common.sh@930 -- # kill -0 150232 00:26:51.636 02:50:16 -- common/autotest_common.sh@931 -- # uname 00:26:51.636 02:50:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:51.636 02:50:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150232 00:26:51.636 killing process with pid 150232 00:26:51.636 02:50:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:51.636 02:50:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:51.636 02:50:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150232' 00:26:51.636 02:50:16 -- common/autotest_common.sh@945 -- # kill 150232 00:26:51.636 02:50:16 -- common/autotest_common.sh@950 -- # wait 150232 00:26:51.895 ************************************ 00:26:51.895 END TEST bdev_bounds 00:26:51.895 ************************************ 00:26:51.895 02:50:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:26:51.895 00:26:51.895 real 0m1.468s 00:26:51.895 user 0m3.860s 00:26:51.895 sys 0m0.309s 00:26:51.895 02:50:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.895 02:50:16 -- common/autotest_common.sh@10 -- # set +x 00:26:51.895 02:50:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:26:51.895 02:50:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:26:51.895 02:50:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:51.895 02:50:16 -- common/autotest_common.sh@10 -- # set +x 00:26:51.895 ************************************ 00:26:51.895 START TEST bdev_nbd 
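killprocess, traced above for pid 150232, first checks that the pid is alive and that it is not a sudo wrapper before signalling and reaping it. A condensed sketch of that flow (the real helper in autotest_common.sh handles more cases, including the sudo path, than this bails out of):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                      # process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # don't signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }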
00:26:51.895 ************************************ 00:26:51.895 02:50:16 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:26:51.895 02:50:16 -- bdev/blockdev.sh@298 -- # uname -s 00:26:51.895 02:50:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:26:51.895 02:50:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.895 02:50:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:51.895 02:50:16 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:26:51.895 02:50:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:26:51.895 02:50:16 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:26:51.895 02:50:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:26:51.895 02:50:16 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:26:51.895 02:50:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:26:51.895 02:50:16 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:26:51.895 02:50:16 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:26:51.895 02:50:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:26:51.895 02:50:16 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:26:51.895 02:50:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:26:51.895 02:50:16 -- bdev/blockdev.sh@316 -- # nbd_pid=150282 00:26:51.895 02:50:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:51.895 02:50:16 -- bdev/blockdev.sh@318 -- # waitforlisten 150282 /var/tmp/spdk-nbd.sock 00:26:51.895 02:50:16 -- common/autotest_common.sh@819 -- # '[' -z 150282 ']' 00:26:51.895 02:50:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:51.895 02:50:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:51.895 02:50:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:51.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:51.895 02:50:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:51.895 02:50:16 -- common/autotest_common.sh@10 -- # set +x 00:26:51.895 02:50:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:51.895 [2024-07-11 02:50:16.844372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:51.895 [2024-07-11 02:50:16.844825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.153 [2024-07-11 02:50:16.996518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.153 [2024-07-11 02:50:17.092009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.719 02:50:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:52.720 02:50:17 -- common/autotest_common.sh@852 -- # return 0 00:26:52.720 02:50:17 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@24 -- # local i 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:52.720 02:50:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:53.286 02:50:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:53.286 02:50:18 -- common/autotest_common.sh@857 -- # local i 00:26:53.286 02:50:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:53.286 02:50:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:53.286 02:50:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:53.286 02:50:18 -- common/autotest_common.sh@861 -- # break 00:26:53.286 02:50:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:53.286 02:50:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:53.286 02:50:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:53.286 1+0 records in 00:26:53.286 1+0 records out 00:26:53.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00211918 s, 1.9 MB/s 00:26:53.286 02:50:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.286 02:50:18 -- common/autotest_common.sh@874 -- # size=4096 00:26:53.286 02:50:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.286 02:50:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:53.286 02:50:18 -- common/autotest_common.sh@877 -- # return 0 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:53.286 02:50:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 
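The nbd_start_disk RPCs in this stretch hand each GPT partition bdev to the kernel NBD driver, and waitfornbd then polls /proc/partitions and issues a single direct read so the device is proven usable before the suite continues. Condensed into a standalone sketch (every command is taken from this trace; only the backgrounding and the scratch-file paths are illustrative):

  # Start a bare SPDK app that owns the bdevs and serves RPC on a private socket.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # Export one bdev as a kernel block device.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
  # Poll until the kernel lists the device, then prove it answers a direct 4 KiB read.
  for i in $(seq 1 20); do grep -q -w nbd0 /proc/partitions && break; sleep 0.1; done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  # Data-path check used later in this test: write random data in, compare it back out.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0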
00:26:53.544 02:50:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:53.544 02:50:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:53.544 02:50:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:53.544 02:50:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:53.544 02:50:18 -- common/autotest_common.sh@857 -- # local i 00:26:53.544 02:50:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:53.544 02:50:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:53.544 02:50:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:53.544 02:50:18 -- common/autotest_common.sh@861 -- # break 00:26:53.544 02:50:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:53.545 02:50:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:53.545 02:50:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:53.545 1+0 records in 00:26:53.545 1+0 records out 00:26:53.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485044 s, 8.4 MB/s 00:26:53.545 02:50:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.545 02:50:18 -- common/autotest_common.sh@874 -- # size=4096 00:26:53.545 02:50:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.545 02:50:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:53.545 02:50:18 -- common/autotest_common.sh@877 -- # return 0 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:53.545 { 00:26:53.545 "nbd_device": "/dev/nbd0", 00:26:53.545 "bdev_name": "Nvme0n1p1" 00:26:53.545 }, 00:26:53.545 { 00:26:53.545 "nbd_device": "/dev/nbd1", 00:26:53.545 "bdev_name": "Nvme0n1p2" 00:26:53.545 } 00:26:53.545 ]' 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:53.545 { 00:26:53.545 "nbd_device": "/dev/nbd0", 00:26:53.545 "bdev_name": "Nvme0n1p1" 00:26:53.545 }, 00:26:53.545 { 00:26:53.545 "nbd_device": "/dev/nbd1", 00:26:53.545 "bdev_name": "Nvme0n1p2" 00:26:53.545 } 00:26:53.545 ]' 00:26:53.545 02:50:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@51 -- # local i 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.803 02:50:18 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@41 -- # break 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.803 02:50:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:54.061 02:50:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@41 -- # break 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@45 -- # return 0 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.319 02:50:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@65 -- # true 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@65 -- # count=0 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@122 -- # count=0 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:54.582 02:50:19 -- bdev/nbd_common.sh@127 -- # return 0 00:26:54.583 02:50:19 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@12 -- # local i 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:26:54.583 02:50:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:26:54.842 /dev/nbd0 00:26:54.842 02:50:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:54.842 02:50:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:54.842 02:50:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:54.842 02:50:19 -- common/autotest_common.sh@857 -- # local i 00:26:54.842 02:50:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:54.842 02:50:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:54.842 02:50:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:54.842 02:50:19 -- common/autotest_common.sh@861 -- # break 00:26:54.842 02:50:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:54.842 02:50:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:54.842 02:50:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.842 1+0 records in 00:26:54.842 1+0 records out 00:26:54.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932061 s, 4.4 MB/s 00:26:54.842 02:50:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.842 02:50:19 -- common/autotest_common.sh@874 -- # size=4096 00:26:54.842 02:50:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.842 02:50:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:54.842 02:50:19 -- common/autotest_common.sh@877 -- # return 0 00:26:54.842 02:50:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.842 02:50:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.842 02:50:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:26:55.100 /dev/nbd1 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:55.100 02:50:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:55.100 02:50:20 -- common/autotest_common.sh@857 -- # local i 00:26:55.100 02:50:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:55.100 02:50:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:55.100 02:50:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:55.100 02:50:20 -- common/autotest_common.sh@861 -- # break 00:26:55.100 02:50:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:55.100 02:50:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:55.100 02:50:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.100 1+0 records in 00:26:55.100 1+0 records out 00:26:55.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674277 s, 6.1 MB/s 00:26:55.100 02:50:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.100 02:50:20 -- common/autotest_common.sh@874 -- # size=4096 00:26:55.100 02:50:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.100 02:50:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:55.100 02:50:20 -- common/autotest_common.sh@877 -- # return 0 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:55.100 02:50:20 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.100 02:50:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:55.359 { 00:26:55.359 "nbd_device": "/dev/nbd0", 00:26:55.359 "bdev_name": "Nvme0n1p1" 00:26:55.359 }, 00:26:55.359 { 00:26:55.359 "nbd_device": "/dev/nbd1", 00:26:55.359 "bdev_name": "Nvme0n1p2" 00:26:55.359 } 00:26:55.359 ]' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:55.359 { 00:26:55.359 "nbd_device": "/dev/nbd0", 00:26:55.359 "bdev_name": "Nvme0n1p1" 00:26:55.359 }, 00:26:55.359 { 00:26:55.359 "nbd_device": "/dev/nbd1", 00:26:55.359 "bdev_name": "Nvme0n1p2" 00:26:55.359 } 00:26:55.359 ]' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:55.359 /dev/nbd1' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:55.359 /dev/nbd1' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@65 -- # count=2 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@95 -- # count=2 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:55.359 256+0 records in 00:26:55.359 256+0 records out 00:26:55.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629752 s, 167 MB/s 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:55.359 02:50:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:55.618 256+0 records in 00:26:55.618 256+0 records out 00:26:55.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077187 s, 13.6 MB/s 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:55.618 256+0 records in 00:26:55.618 256+0 records out 00:26:55.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0935679 s, 11.2 MB/s 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@51 -- # local i 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.618 02:50:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@41 -- # break 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.876 02:50:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:56.134 02:50:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.134 02:50:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.134 02:50:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.134 02:50:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.134 02:50:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@41 -- # break 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.135 02:50:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@65 -- # true 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@65 -- # count=0 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@104 -- # count=0 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@105 -- 
# '[' 0 -ne 0 ']' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@109 -- # return 0 00:26:56.393 02:50:21 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:56.393 02:50:21 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:56.652 malloc_lvol_verify 00:26:56.652 02:50:21 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:56.910 33bb4b8f-fd52-4412-9d19-ffdcf3b9038e 00:26:56.910 02:50:21 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:57.169 de9e8ee7-741c-4161-89c8-d1dd4ca1899e 00:26:57.169 02:50:22 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:57.427 /dev/nbd0 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:57.427 mke2fs 1.45.5 (07-Jan-2020) 00:26:57.427 00:26:57.427 Filesystem too small for a journal 00:26:57.427 Creating filesystem with 1024 4k blocks and 1024 inodes 00:26:57.427 00:26:57.427 Allocating group tables: 0/1 done 00:26:57.427 Writing inode tables: 0/1 done 00:26:57.427 Writing superblocks and filesystem accounting information: 0/1 done 00:26:57.427 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@51 -- # local i 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.427 02:50:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@41 -- # break 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:57.697 02:50:22 -- bdev/nbd_common.sh@147 -- # return 0 00:26:57.697 02:50:22 -- bdev/blockdev.sh@324 -- # killprocess 150282 00:26:57.697 02:50:22 -- common/autotest_common.sh@926 -- # '[' -z 150282 ']' 00:26:57.697 02:50:22 -- common/autotest_common.sh@930 -- # kill -0 150282 00:26:57.697 02:50:22 -- common/autotest_common.sh@931 -- # uname 00:26:57.697 02:50:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.697 02:50:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
150282 00:26:57.697 02:50:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:57.697 02:50:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:57.697 killing process with pid 150282 00:26:57.697 02:50:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150282' 00:26:57.697 02:50:22 -- common/autotest_common.sh@945 -- # kill 150282 00:26:57.697 02:50:22 -- common/autotest_common.sh@950 -- # wait 150282 00:26:57.968 02:50:22 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:26:57.968 00:26:57.968 real 0m6.099s 00:26:57.968 user 0m9.228s 00:26:57.968 sys 0m1.535s 00:26:57.968 ************************************ 00:26:57.968 END TEST bdev_nbd 00:26:57.968 ************************************ 00:26:57.968 02:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.968 02:50:22 -- common/autotest_common.sh@10 -- # set +x 00:26:57.968 02:50:22 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:26:57.968 02:50:22 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:26:57.968 02:50:22 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:26:57.968 skipping fio tests on NVMe due to multi-ns failures. 00:26:57.968 02:50:22 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:26:57.968 02:50:22 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:57.968 02:50:22 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:57.968 02:50:22 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:57.968 02:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:57.968 02:50:22 -- common/autotest_common.sh@10 -- # set +x 00:26:57.968 ************************************ 00:26:57.968 START TEST bdev_verify 00:26:57.968 ************************************ 00:26:57.968 02:50:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:57.968 [2024-07-11 02:50:23.004628] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:57.968 [2024-07-11 02:50:23.005800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150539 ] 00:26:58.227 [2024-07-11 02:50:23.162545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.227 [2024-07-11 02:50:23.253712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.227 [2024-07-11 02:50:23.253723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.485 Running I/O for 5 seconds... 
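bdev_verify switches from the NBD path to driving the partitions directly with the bdevperf example app; the invocation recorded in the xtrace above breaks down as follows (a sketch of the same run, with -C left as recorded rather than annotated):

  # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB per I/O
  # -w verify: write a pattern, read it back, and compare
  # -t 5: run for five seconds; -m 0x3: two reactor cores, which is why each
  #       partition shows up twice (Core Mask 0x1 and 0x2) in the tables below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3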
00:27:03.746 00:27:03.746 Latency(us) 00:27:03.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.746 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:03.746 Verification LBA range: start 0x0 length 0x4ff80 00:27:03.746 Nvme0n1p1 : 5.02 7916.15 30.92 0.00 0.00 16127.75 1645.85 23116.33 00:27:03.746 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:03.746 Verification LBA range: start 0x4ff80 length 0x4ff80 00:27:03.746 Nvme0n1p1 : 5.02 7881.42 30.79 0.00 0.00 16195.90 1817.13 25499.46 00:27:03.746 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:03.746 Verification LBA range: start 0x0 length 0x4ff7f 00:27:03.746 Nvme0n1p2 : 5.02 7913.33 30.91 0.00 0.00 16116.27 2249.08 23235.49 00:27:03.746 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:03.746 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:27:03.746 Nvme0n1p2 : 5.02 7884.84 30.80 0.00 0.00 16177.06 491.52 24903.68 00:27:03.746 =================================================================================================================== 00:27:03.746 Total : 31595.74 123.42 0.00 0.00 16154.16 491.52 25499.46 00:27:07.937 00:27:07.937 real 0m10.077s 00:27:07.937 user 0m19.301s 00:27:07.937 sys 0m0.301s 00:27:07.937 ************************************ 00:27:07.937 02:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.937 02:50:33 -- common/autotest_common.sh@10 -- # set +x 00:27:07.937 END TEST bdev_verify 00:27:07.937 ************************************ 00:27:08.196 02:50:33 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:08.196 02:50:33 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:08.196 02:50:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.196 02:50:33 -- common/autotest_common.sh@10 -- # set +x 00:27:08.196 ************************************ 00:27:08.196 START TEST bdev_verify_big_io 00:27:08.196 ************************************ 00:27:08.196 02:50:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:08.196 [2024-07-11 02:50:33.108258] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:08.196 [2024-07-11 02:50:33.108529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150664 ] 00:27:08.196 [2024-07-11 02:50:33.259254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:08.454 [2024-07-11 02:50:33.356638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.454 [2024-07-11 02:50:33.356650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.711 Running I/O for 5 seconds... 
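A note on reading the Latency(us) tables that bracket these runs: runtime is in seconds, IOPS and MiB/s are per job, and Average/min/max are completion latencies in microseconds. The columns are mutually consistent; for the first verify job above, 7916.15 IOPS x 4096 bytes per I/O is about 32.4 MB/s, i.e. 30.92 MiB/s, which is exactly the MiB/s column.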
00:27:13.969 00:27:13.969 Latency(us) 00:27:13.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.969 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:13.969 Verification LBA range: start 0x0 length 0x4ff8 00:27:13.969 Nvme0n1p1 : 5.10 861.33 53.83 0.00 0.00 146879.91 1966.08 190650.18 00:27:13.969 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:13.969 Verification LBA range: start 0x4ff8 length 0x4ff8 00:27:13.969 Nvme0n1p1 : 5.10 852.04 53.25 0.00 0.00 147547.55 21328.99 285975.27 00:27:13.969 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:13.969 Verification LBA range: start 0x0 length 0x4ff7 00:27:13.969 Nvme0n1p2 : 5.11 869.48 54.34 0.00 0.00 144195.14 714.94 177304.67 00:27:13.969 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:13.969 Verification LBA range: start 0x4ff7 length 0x4ff7 00:27:13.969 Nvme0n1p2 : 5.13 881.32 55.08 0.00 0.00 140484.01 968.15 201135.94 00:27:13.969 =================================================================================================================== 00:27:13.969 Total : 3464.17 216.51 0.00 0.00 144737.90 714.94 285975.27 00:27:14.227 00:27:14.227 real 0m6.181s 00:27:14.227 user 0m11.649s 00:27:14.227 sys 0m0.206s 00:27:14.227 02:50:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.227 02:50:39 -- common/autotest_common.sh@10 -- # set +x 00:27:14.227 ************************************ 00:27:14.227 END TEST bdev_verify_big_io 00:27:14.227 ************************************ 00:27:14.227 02:50:39 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:14.227 02:50:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:14.227 02:50:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.227 02:50:39 -- common/autotest_common.sh@10 -- # set +x 00:27:14.227 ************************************ 00:27:14.227 START TEST bdev_write_zeroes 00:27:14.227 ************************************ 00:27:14.227 02:50:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:14.485 [2024-07-11 02:50:39.352490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:14.485 [2024-07-11 02:50:39.352898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150760 ] 00:27:14.485 [2024-07-11 02:50:39.498928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.485 [2024-07-11 02:50:39.570330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.744 Running I/O for 1 seconds... 
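The big-I/O and write-zeroes passes reuse the bdev_verify harness with different knobs; stripped to the flags that change between the three run_test lines in this trace:

  # bdev_verify:        -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
  # bdev_verify_big_io: -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
  #   (64 KiB I/Os: total IOPS drops about 9x versus the 4 KiB run while MiB/s
  #    comes out roughly 1.75x higher, per the two Total rows)
  # bdev_write_zeroes:  -q 128 -o 4096  -w write_zeroes -t 1
  #   (single reactor, per the "Total cores available: 1" EAL line; exercises the
  #    bdev write_zeroes path instead of a data compare)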
00:27:16.114 00:27:16.114 Latency(us) 00:27:16.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.114 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:16.114 Nvme0n1p1 : 1.01 26791.37 104.65 0.00 0.00 4767.33 2502.28 15371.17 00:27:16.114 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:16.115 Nvme0n1p2 : 1.01 26756.46 104.52 0.00 0.00 4765.98 2710.81 11379.43 00:27:16.115 =================================================================================================================== 00:27:16.115 Total : 53547.83 209.17 0.00 0.00 4766.66 2502.28 15371.17 00:27:16.115 00:27:16.115 real 0m1.731s 00:27:16.115 user 0m1.468s 00:27:16.115 sys 0m0.161s 00:27:16.115 ************************************ 00:27:16.115 END TEST bdev_write_zeroes 00:27:16.115 02:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.115 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.115 ************************************ 00:27:16.115 02:50:41 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:16.115 02:50:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:16.115 02:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.115 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.115 ************************************ 00:27:16.115 START TEST bdev_json_nonenclosed 00:27:16.115 ************************************ 00:27:16.115 02:50:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:16.115 [2024-07-11 02:50:41.130662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:16.115 [2024-07-11 02:50:41.130915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150806 ] 00:27:16.372 [2024-07-11 02:50:41.277773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.372 [2024-07-11 02:50:41.352269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.372 [2024-07-11 02:50:41.352514] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:16.372 [2024-07-11 02:50:41.352566] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:16.372 00:27:16.372 real 0m0.381s 00:27:16.372 user 0m0.176s 00:27:16.372 sys 0m0.104s 00:27:16.372 02:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.372 ************************************ 00:27:16.372 END TEST bdev_json_nonenclosed 00:27:16.372 ************************************ 00:27:16.372 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.630 02:50:41 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:16.630 02:50:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:16.630 02:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.630 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.630 ************************************ 00:27:16.630 START TEST bdev_json_nonarray 00:27:16.630 ************************************ 00:27:16.630 02:50:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:16.630 [2024-07-11 02:50:41.555568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:16.630 [2024-07-11 02:50:41.555795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150828 ] 00:27:16.630 [2024-07-11 02:50:41.697399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.887 [2024-07-11 02:50:41.762677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.887 [2024-07-11 02:50:41.762907] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
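Both JSON negative tests feed bdevperf a deliberately malformed config and expect spdk_app_stop to exit non-zero, which is exactly the ERROR/WARNING pairing shown here. The rejected shapes next to the accepted one, reconstructed from the two error messages rather than from the fixture files themselves:

  "subsystems": []            rejected in the nonenclosed test: top level not enclosed in {}
  { "subsystems": {} }        rejected in the nonarray test: 'subsystems' should be an array
  { "subsystems": [ ... ] }   the accepted top-level shape for an SPDK JSON config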
00:27:16.887 [2024-07-11 02:50:41.762949] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:16.887 00:27:16.887 real 0m0.369s 00:27:16.887 user 0m0.164s 00:27:16.887 sys 0m0.105s 00:27:16.887 02:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.887 ************************************ 00:27:16.887 END TEST bdev_json_nonarray 00:27:16.887 ************************************ 00:27:16.887 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.887 02:50:41 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:27:16.887 02:50:41 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:27:16.887 02:50:41 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:27:16.887 02:50:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:16.887 02:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.887 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:16.887 ************************************ 00:27:16.887 START TEST bdev_gpt_uuid 00:27:16.887 ************************************ 00:27:16.887 02:50:41 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:27:16.887 02:50:41 -- bdev/blockdev.sh@612 -- # local bdev 00:27:16.887 02:50:41 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:27:16.887 02:50:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150858 00:27:16.887 02:50:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:16.887 02:50:41 -- bdev/blockdev.sh@47 -- # waitforlisten 150858 00:27:16.887 02:50:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:16.887 02:50:41 -- common/autotest_common.sh@819 -- # '[' -z 150858 ']' 00:27:16.887 02:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.887 02:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:16.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.887 02:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.887 02:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:16.887 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:27:17.145 [2024-07-11 02:50:41.992171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:17.145 [2024-07-11 02:50:41.993082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150858 ] 00:27:17.145 [2024-07-11 02:50:42.139196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.145 [2024-07-11 02:50:42.206510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:17.145 [2024-07-11 02:50:42.206786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.079 02:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:18.079 02:50:42 -- common/autotest_common.sh@852 -- # return 0 00:27:18.079 02:50:42 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:18.079 02:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.079 02:50:42 -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 Some configs were skipped because the RPC state that can call them passed over. 
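The checks that follow are the heart of bdev_gpt_uuid: each partition bdev is fetched by the unique partition GUID the GPT image assigned to it, and jq asserts that the bdev's alias and driver-specific GUID round-trip. Reduced to the underlying RPCs (rpc_cmd talks to the spdk_tgt started above on its default socket):

  # First partition: alias and unique GUID must both equal the SPDK_TEST_first GUID.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'
  # Second partition: same lookup against the SPDK_TEST_second GUID.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df \
      | jq -r '.[0].driver_specific.gpt.partition_name'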
00:27:18.079 02:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.079 02:50:43 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:27:18.079 02:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.079 02:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 02:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.079 02:50:43 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:27:18.079 02:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.079 02:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 02:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.079 02:50:43 -- bdev/blockdev.sh@619 -- # bdev='[ 00:27:18.079 { 00:27:18.079 "name": "Nvme0n1p1", 00:27:18.079 "aliases": [ 00:27:18.079 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:27:18.079 ], 00:27:18.079 "product_name": "GPT Disk", 00:27:18.079 "block_size": 4096, 00:27:18.079 "num_blocks": 655104, 00:27:18.079 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:18.079 "assigned_rate_limits": { 00:27:18.079 "rw_ios_per_sec": 0, 00:27:18.079 "rw_mbytes_per_sec": 0, 00:27:18.079 "r_mbytes_per_sec": 0, 00:27:18.079 "w_mbytes_per_sec": 0 00:27:18.079 }, 00:27:18.079 "claimed": false, 00:27:18.079 "zoned": false, 00:27:18.079 "supported_io_types": { 00:27:18.079 "read": true, 00:27:18.079 "write": true, 00:27:18.079 "unmap": true, 00:27:18.079 "write_zeroes": true, 00:27:18.079 "flush": true, 00:27:18.079 "reset": true, 00:27:18.079 "compare": true, 00:27:18.079 "compare_and_write": false, 00:27:18.079 "abort": true, 00:27:18.079 "nvme_admin": false, 00:27:18.079 "nvme_io": false 00:27:18.079 }, 00:27:18.079 "driver_specific": { 00:27:18.079 "gpt": { 00:27:18.079 "base_bdev": "Nvme0n1", 00:27:18.079 "offset_blocks": 256, 00:27:18.079 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:27:18.079 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:18.079 "partition_name": "SPDK_TEST_first" 00:27:18.079 } 00:27:18.079 } 00:27:18.079 } 00:27:18.079 ]' 00:27:18.079 02:50:43 -- bdev/blockdev.sh@620 -- # jq -r length 00:27:18.079 02:50:43 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:27:18.079 02:50:43 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:27:18.337 02:50:43 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:18.337 02:50:43 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:18.337 02:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.337 02:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 02:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@624 -- # bdev='[ 00:27:18.337 { 00:27:18.337 "name": "Nvme0n1p2", 00:27:18.337 "aliases": [ 00:27:18.337 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:27:18.337 ], 00:27:18.337 "product_name": "GPT Disk", 00:27:18.337 "block_size": 4096, 00:27:18.337 "num_blocks": 655103, 00:27:18.337 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:18.337 "assigned_rate_limits": { 00:27:18.337 "rw_ios_per_sec": 0, 00:27:18.337 
"rw_mbytes_per_sec": 0, 00:27:18.337 "r_mbytes_per_sec": 0, 00:27:18.337 "w_mbytes_per_sec": 0 00:27:18.337 }, 00:27:18.337 "claimed": false, 00:27:18.337 "zoned": false, 00:27:18.337 "supported_io_types": { 00:27:18.337 "read": true, 00:27:18.337 "write": true, 00:27:18.337 "unmap": true, 00:27:18.337 "write_zeroes": true, 00:27:18.337 "flush": true, 00:27:18.337 "reset": true, 00:27:18.337 "compare": true, 00:27:18.337 "compare_and_write": false, 00:27:18.337 "abort": true, 00:27:18.337 "nvme_admin": false, 00:27:18.337 "nvme_io": false 00:27:18.337 }, 00:27:18.337 "driver_specific": { 00:27:18.337 "gpt": { 00:27:18.337 "base_bdev": "Nvme0n1", 00:27:18.337 "offset_blocks": 655360, 00:27:18.337 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:27:18.337 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:18.337 "partition_name": "SPDK_TEST_second" 00:27:18.337 } 00:27:18.337 } 00:27:18.337 } 00:27:18.337 ]' 00:27:18.337 02:50:43 -- bdev/blockdev.sh@625 -- # jq -r length 00:27:18.337 02:50:43 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:27:18.337 02:50:43 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:18.337 02:50:43 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:18.337 02:50:43 -- bdev/blockdev.sh@629 -- # killprocess 150858 00:27:18.337 02:50:43 -- common/autotest_common.sh@926 -- # '[' -z 150858 ']' 00:27:18.337 02:50:43 -- common/autotest_common.sh@930 -- # kill -0 150858 00:27:18.337 02:50:43 -- common/autotest_common.sh@931 -- # uname 00:27:18.595 02:50:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:18.595 02:50:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150858 00:27:18.595 02:50:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:18.595 killing process with pid 150858 00:27:18.595 02:50:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:18.595 02:50:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150858' 00:27:18.595 02:50:43 -- common/autotest_common.sh@945 -- # kill 150858 00:27:18.595 02:50:43 -- common/autotest_common.sh@950 -- # wait 150858 00:27:18.854 00:27:18.854 real 0m1.918s 00:27:18.854 user 0m2.305s 00:27:18.854 sys 0m0.348s 00:27:18.854 02:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.854 ************************************ 00:27:18.854 END TEST bdev_gpt_uuid 00:27:18.854 ************************************ 00:27:18.854 02:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:18.854 02:50:43 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:27:18.854 02:50:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:18.854 02:50:43 -- bdev/blockdev.sh@809 -- # cleanup 00:27:18.854 02:50:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:18.854 02:50:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:18.854 02:50:43 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:27:18.854 02:50:43 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:27:18.854 02:50:43 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:27:18.854 02:50:43 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:19.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:19.112 Waiting for block devices as requested 00:27:19.369 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.369 02:50:44 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:27:19.369 02:50:44 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:27:19.369 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:27:19.369 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:27:19.369 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:27:19.369 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:27:19.369 02:50:44 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:27:19.369 ************************************ 00:27:19.369 END TEST blockdev_nvme_gpt 00:27:19.369 ************************************ 00:27:19.369 00:27:19.369 real 0m36.344s 00:27:19.370 user 0m56.089s 00:27:19.370 sys 0m5.319s 00:27:19.370 02:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.370 02:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:19.370 02:50:44 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:19.370 02:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:19.370 02:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.370 02:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:19.626 ************************************ 00:27:19.627 START TEST nvme 00:27:19.627 ************************************ 00:27:19.627 02:50:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:19.627 * Looking for test storage... 00:27:19.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:19.627 02:50:44 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:19.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:20.141 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:21.072 02:50:46 -- nvme/nvme.sh@79 -- # uname 00:27:21.072 02:50:46 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:27:21.072 02:50:46 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:27:21.072 02:50:46 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:27:21.072 02:50:46 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:27:21.072 02:50:46 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:27:21.072 02:50:46 -- common/autotest_common.sh@1045 -- # echo 0 00:27:21.072 02:50:46 -- common/autotest_common.sh@1047 -- # stubpid=151295 00:27:21.072 Waiting for stub to ready for secondary processes... 00:27:21.072 02:50:46 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:27:21.072 02:50:46 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:21.072 02:50:46 -- common/autotest_common.sh@1051 -- # [[ -e /proc/151295 ]] 00:27:21.072 02:50:46 -- common/autotest_common.sh@1052 -- # sleep 1s 00:27:21.072 02:50:46 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:27:21.072 [2024-07-11 02:50:46.090692] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:21.072 [2024-07-11 02:50:46.091179] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.045 02:50:47 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:22.045 02:50:47 -- common/autotest_common.sh@1051 -- # [[ -e /proc/151295 ]] 00:27:22.045 02:50:47 -- common/autotest_common.sh@1052 -- # sleep 1s 00:27:22.304 [2024-07-11 02:50:47.359605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.562 [2024-07-11 02:50:47.429540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.562 [2024-07-11 02:50:47.429691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.562 [2024-07-11 02:50:47.429695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.562 [2024-07-11 02:50:47.439191] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:22.562 [2024-07-11 02:50:47.447383] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:27:22.562 [2024-07-11 02:50:47.448539] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:27:23.128 02:50:48 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:23.128 done. 00:27:23.128 02:50:48 -- common/autotest_common.sh@1054 -- # echo done. 00:27:23.128 02:50:48 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:23.128 02:50:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:27:23.128 02:50:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:23.128 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:27:23.128 ************************************ 00:27:23.128 START TEST nvme_reset 00:27:23.128 ************************************ 00:27:23.128 02:50:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:23.386 Initializing NVMe Controllers 00:27:23.386 Skipping QEMU NVMe SSD at 0000:00:06.0 00:27:23.386 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:27:23.386 00:27:23.386 real 0m0.304s 00:27:23.386 user 0m0.095s 00:27:23.386 sys 0m0.143s 00:27:23.386 02:50:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.386 ************************************ 00:27:23.386 END TEST nvme_reset 00:27:23.386 ************************************ 00:27:23.386 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 02:50:48 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:27:23.386 02:50:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:23.386 02:50:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:23.386 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 ************************************ 00:27:23.386 START TEST nvme_identify 00:27:23.386 ************************************ 00:27:23.386 02:50:48 -- common/autotest_common.sh@1104 -- # nvme_identify 00:27:23.386 02:50:48 -- nvme/nvme.sh@12 -- # bdfs=() 00:27:23.386 02:50:48 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:27:23.386 02:50:48 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:27:23.386 02:50:48 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:27:23.386 02:50:48 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:27:23.386 02:50:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:23.386 02:50:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:23.386 02:50:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:23.386 02:50:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:23.644 02:50:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:23.644 02:50:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:23.644 02:50:48 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:27:23.644 [2024-07-11 02:50:48.690390] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 151328 terminated unexpected 00:27:23.644 ===================================================== 00:27:23.644 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:23.644 ===================================================== 00:27:23.644 Controller Capabilities/Features 00:27:23.644 ================================ 00:27:23.644 Vendor ID: 1b36 00:27:23.644 Subsystem Vendor ID: 1af4 00:27:23.644 Serial Number: 12340 00:27:23.644 Model Number: QEMU NVMe Ctrl 00:27:23.644 Firmware Version: 8.0.0 00:27:23.645 Recommended Arb Burst: 6 00:27:23.645 IEEE OUI Identifier: 00 54 52 00:27:23.645 Multi-path I/O 00:27:23.645 May have multiple subsystem ports: No 00:27:23.645 May have multiple controllers: No 00:27:23.645 Associated with SR-IOV VF: No 00:27:23.645 Max Data Transfer Size: 524288 00:27:23.645 Max Number of Namespaces: 256 00:27:23.645 Max Number of I/O Queues: 64 00:27:23.645 NVMe Specification Version (VS): 1.4 00:27:23.645 NVMe Specification Version (Identify): 1.4 00:27:23.645 Maximum Queue Entries: 2048 00:27:23.645 Contiguous Queues Required: Yes 00:27:23.645 Arbitration Mechanisms Supported 00:27:23.645 Weighted Round Robin: Not Supported 00:27:23.645 Vendor Specific: Not Supported 00:27:23.645 Reset Timeout: 7500 ms 00:27:23.645 Doorbell Stride: 4 bytes 00:27:23.645 NVM Subsystem Reset: Not Supported 00:27:23.645 Command Sets Supported 00:27:23.645 NVM Command Set: Supported 00:27:23.645 Boot Partition: Not Supported 00:27:23.645 Memory Page Size Minimum: 4096 bytes 00:27:23.645 Memory Page Size Maximum: 65536 bytes 00:27:23.645 Persistent Memory Region: Not Supported 00:27:23.645 Optional Asynchronous Events Supported 00:27:23.645 Namespace Attribute Notices: Supported 00:27:23.645 Firmware Activation Notices: Not Supported 00:27:23.645 ANA Change Notices: Not Supported 00:27:23.645 PLE Aggregate Log Change Notices: Not Supported 00:27:23.645 LBA Status Info Alert Notices: Not Supported 00:27:23.645 EGE Aggregate Log Change Notices: Not Supported 00:27:23.645 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.645 Zone Descriptor Change Notices: Not Supported 00:27:23.645 Discovery Log Change Notices: Not Supported 00:27:23.645 Controller Attributes 00:27:23.645 128-bit Host Identifier: Not Supported 00:27:23.645 Non-Operational Permissive Mode: Not Supported 00:27:23.645 NVM Sets: Not Supported 00:27:23.645 Read Recovery Levels: Not Supported 00:27:23.645 Endurance Groups: Not Supported 00:27:23.645 Predictable Latency Mode: Not Supported 00:27:23.645 Traffic Based Keep ALive: Not Supported 00:27:23.645 Namespace Granularity: Not Supported 00:27:23.645 SQ Associations: Not Supported 00:27:23.645 UUID List: Not Supported 00:27:23.645 Multi-Domain Subsystem: Not Supported 00:27:23.645 
Fixed Capacity Management: Not Supported 00:27:23.645 Variable Capacity Management: Not Supported 00:27:23.645 Delete Endurance Group: Not Supported 00:27:23.645 Delete NVM Set: Not Supported 00:27:23.645 Extended LBA Formats Supported: Supported 00:27:23.645 Flexible Data Placement Supported: Not Supported 00:27:23.645 00:27:23.645 Controller Memory Buffer Support 00:27:23.645 ================================ 00:27:23.645 Supported: No 00:27:23.645 00:27:23.645 Persistent Memory Region Support 00:27:23.645 ================================ 00:27:23.645 Supported: No 00:27:23.645 00:27:23.645 Admin Command Set Attributes 00:27:23.645 ============================ 00:27:23.645 Security Send/Receive: Not Supported 00:27:23.645 Format NVM: Supported 00:27:23.645 Firmware Activate/Download: Not Supported 00:27:23.645 Namespace Management: Supported 00:27:23.645 Device Self-Test: Not Supported 00:27:23.645 Directives: Supported 00:27:23.645 NVMe-MI: Not Supported 00:27:23.645 Virtualization Management: Not Supported 00:27:23.645 Doorbell Buffer Config: Supported 00:27:23.645 Get LBA Status Capability: Not Supported 00:27:23.645 Command & Feature Lockdown Capability: Not Supported 00:27:23.645 Abort Command Limit: 4 00:27:23.645 Async Event Request Limit: 4 00:27:23.645 Number of Firmware Slots: N/A 00:27:23.645 Firmware Slot 1 Read-Only: N/A 00:27:23.645 Firmware Activation Without Reset: N/A 00:27:23.645 Multiple Update Detection Support: N/A 00:27:23.645 Firmware Update Granularity: No Information Provided 00:27:23.645 Per-Namespace SMART Log: Yes 00:27:23.645 Asymmetric Namespace Access Log Page: Not Supported 00:27:23.645 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:23.645 Command Effects Log Page: Supported 00:27:23.645 Get Log Page Extended Data: Supported 00:27:23.645 Telemetry Log Pages: Not Supported 00:27:23.645 Persistent Event Log Pages: Not Supported 00:27:23.645 Supported Log Pages Log Page: May Support 00:27:23.645 Commands Supported & Effects Log Page: Not Supported 00:27:23.645 Feature Identifiers & Effects Log Page:May Support 00:27:23.645 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.645 Data Area 4 for Telemetry Log: Not Supported 00:27:23.645 Error Log Page Entries Supported: 1 00:27:23.645 Keep Alive: Not Supported 00:27:23.645 00:27:23.645 NVM Command Set Attributes 00:27:23.645 ========================== 00:27:23.645 Submission Queue Entry Size 00:27:23.645 Max: 64 00:27:23.645 Min: 64 00:27:23.645 Completion Queue Entry Size 00:27:23.645 Max: 16 00:27:23.645 Min: 16 00:27:23.645 Number of Namespaces: 256 00:27:23.645 Compare Command: Supported 00:27:23.645 Write Uncorrectable Command: Not Supported 00:27:23.645 Dataset Management Command: Supported 00:27:23.645 Write Zeroes Command: Supported 00:27:23.645 Set Features Save Field: Supported 00:27:23.645 Reservations: Not Supported 00:27:23.645 Timestamp: Supported 00:27:23.645 Copy: Supported 00:27:23.645 Volatile Write Cache: Present 00:27:23.645 Atomic Write Unit (Normal): 1 00:27:23.645 Atomic Write Unit (PFail): 1 00:27:23.645 Atomic Compare & Write Unit: 1 00:27:23.645 Fused Compare & Write: Not Supported 00:27:23.645 Scatter-Gather List 00:27:23.645 SGL Command Set: Supported 00:27:23.645 SGL Keyed: Not Supported 00:27:23.645 SGL Bit Bucket Descriptor: Not Supported 00:27:23.645 SGL Metadata Pointer: Not Supported 00:27:23.645 Oversized SGL: Not Supported 00:27:23.645 SGL Metadata Address: Not Supported 00:27:23.645 SGL Offset: Not Supported 00:27:23.645 Transport SGL Data Block: Not Supported 
00:27:23.645 Replay Protected Memory Block: Not Supported 00:27:23.645 00:27:23.645 Firmware Slot Information 00:27:23.645 ========================= 00:27:23.645 Active slot: 1 00:27:23.645 Slot 1 Firmware Revision: 1.0 00:27:23.645 00:27:23.645 00:27:23.645 Commands Supported and Effects 00:27:23.645 ============================== 00:27:23.645 Admin Commands 00:27:23.645 -------------- 00:27:23.645 Delete I/O Submission Queue (00h): Supported 00:27:23.645 Create I/O Submission Queue (01h): Supported 00:27:23.645 Get Log Page (02h): Supported 00:27:23.645 Delete I/O Completion Queue (04h): Supported 00:27:23.645 Create I/O Completion Queue (05h): Supported 00:27:23.645 Identify (06h): Supported 00:27:23.645 Abort (08h): Supported 00:27:23.645 Set Features (09h): Supported 00:27:23.645 Get Features (0Ah): Supported 00:27:23.645 Asynchronous Event Request (0Ch): Supported 00:27:23.645 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:23.645 Directive Send (19h): Supported 00:27:23.645 Directive Receive (1Ah): Supported 00:27:23.645 Virtualization Management (1Ch): Supported 00:27:23.645 Doorbell Buffer Config (7Ch): Supported 00:27:23.645 Format NVM (80h): Supported LBA-Change 00:27:23.645 I/O Commands 00:27:23.645 ------------ 00:27:23.645 Flush (00h): Supported LBA-Change 00:27:23.645 Write (01h): Supported LBA-Change 00:27:23.645 Read (02h): Supported 00:27:23.645 Compare (05h): Supported 00:27:23.645 Write Zeroes (08h): Supported LBA-Change 00:27:23.645 Dataset Management (09h): Supported LBA-Change 00:27:23.645 Unknown (0Ch): Supported 00:27:23.645 Unknown (12h): Supported 00:27:23.645 Copy (19h): Supported LBA-Change 00:27:23.645 Unknown (1Dh): Supported LBA-Change 00:27:23.645 00:27:23.645 Error Log 00:27:23.645 ========= 00:27:23.645 00:27:23.645 Arbitration 00:27:23.645 =========== 00:27:23.645 Arbitration Burst: no limit 00:27:23.645 00:27:23.645 Power Management 00:27:23.645 ================ 00:27:23.645 Number of Power States: 1 00:27:23.645 Current Power State: Power State #0 00:27:23.645 Power State #0: 00:27:23.645 Max Power: 25.00 W 00:27:23.645 Non-Operational State: Operational 00:27:23.645 Entry Latency: 16 microseconds 00:27:23.645 Exit Latency: 4 microseconds 00:27:23.645 Relative Read Throughput: 0 00:27:23.645 Relative Read Latency: 0 00:27:23.645 Relative Write Throughput: 0 00:27:23.645 Relative Write Latency: 0 00:27:23.904 Idle Power: Not Reported 00:27:23.904 Active Power: Not Reported 00:27:23.904 Non-Operational Permissive Mode: Not Supported 00:27:23.904 00:27:23.904 Health Information 00:27:23.904 ================== 00:27:23.904 Critical Warnings: 00:27:23.904 Available Spare Space: OK 00:27:23.904 Temperature: OK 00:27:23.904 Device Reliability: OK 00:27:23.904 Read Only: No 00:27:23.904 Volatile Memory Backup: OK 00:27:23.904 Current Temperature: 323 Kelvin (50 Celsius) 00:27:23.904 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:23.904 Available Spare: 0% 00:27:23.904 Available Spare Threshold: 0% 00:27:23.904 Life Percentage Used: 0% 00:27:23.904 Data Units Read: 7570 00:27:23.904 Data Units Written: 3687 00:27:23.904 Host Read Commands: 381636 00:27:23.904 Host Write Commands: 205957 00:27:23.904 Controller Busy Time: 0 minutes 00:27:23.904 Power Cycles: 0 00:27:23.904 Power On Hours: 0 hours 00:27:23.904 Unsafe Shutdowns: 0 00:27:23.904 Unrecoverable Media Errors: 0 00:27:23.904 Lifetime Error Log Entries: 0 00:27:23.904 Warning Temperature Time: 0 minutes 00:27:23.904 Critical Temperature Time: 0 minutes 00:27:23.904 00:27:23.904 
Number of Queues 00:27:23.904 ================ 00:27:23.904 Number of I/O Submission Queues: 64 00:27:23.904 Number of I/O Completion Queues: 64 00:27:23.904 00:27:23.904 ZNS Specific Controller Data 00:27:23.904 ============================ 00:27:23.904 Zone Append Size Limit: 0 00:27:23.904 00:27:23.904 00:27:23.904 Active Namespaces 00:27:23.904 ================= 00:27:23.904 Namespace ID:1 00:27:23.904 Error Recovery Timeout: Unlimited 00:27:23.904 Command Set Identifier: NVM (00h) 00:27:23.904 Deallocate: Supported 00:27:23.904 Deallocated/Unwritten Error: Supported 00:27:23.904 Deallocated Read Value: All 0x00 00:27:23.904 Deallocate in Write Zeroes: Not Supported 00:27:23.904 Deallocated Guard Field: 0xFFFF 00:27:23.904 Flush: Supported 00:27:23.904 Reservation: Not Supported 00:27:23.904 Namespace Sharing Capabilities: Private 00:27:23.904 Size (in LBAs): 1310720 (5GiB) 00:27:23.904 Capacity (in LBAs): 1310720 (5GiB) 00:27:23.904 Utilization (in LBAs): 1310720 (5GiB) 00:27:23.904 Thin Provisioning: Not Supported 00:27:23.904 Per-NS Atomic Units: No 00:27:23.904 Maximum Single Source Range Length: 128 00:27:23.904 Maximum Copy Length: 128 00:27:23.904 Maximum Source Range Count: 128 00:27:23.904 NGUID/EUI64 Never Reused: No 00:27:23.904 Namespace Write Protected: No 00:27:23.904 Number of LBA Formats: 8 00:27:23.904 Current LBA Format: LBA Format #04 00:27:23.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:23.904 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:23.904 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:23.904 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:23.904 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:23.904 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:23.904 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:23.904 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:23.904 00:27:23.904 02:50:48 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:23.904 02:50:48 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:27:23.904 ===================================================== 00:27:23.904 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:23.904 ===================================================== 00:27:23.904 Controller Capabilities/Features 00:27:23.904 ================================ 00:27:23.904 Vendor ID: 1b36 00:27:23.904 Subsystem Vendor ID: 1af4 00:27:23.904 Serial Number: 12340 00:27:23.904 Model Number: QEMU NVMe Ctrl 00:27:23.904 Firmware Version: 8.0.0 00:27:23.904 Recommended Arb Burst: 6 00:27:23.904 IEEE OUI Identifier: 00 54 52 00:27:23.904 Multi-path I/O 00:27:23.904 May have multiple subsystem ports: No 00:27:23.904 May have multiple controllers: No 00:27:23.904 Associated with SR-IOV VF: No 00:27:23.904 Max Data Transfer Size: 524288 00:27:23.904 Max Number of Namespaces: 256 00:27:23.904 Max Number of I/O Queues: 64 00:27:23.904 NVMe Specification Version (VS): 1.4 00:27:23.904 NVMe Specification Version (Identify): 1.4 00:27:23.904 Maximum Queue Entries: 2048 00:27:23.904 Contiguous Queues Required: Yes 00:27:23.904 Arbitration Mechanisms Supported 00:27:23.904 Weighted Round Robin: Not Supported 00:27:23.904 Vendor Specific: Not Supported 00:27:23.905 Reset Timeout: 7500 ms 00:27:23.905 Doorbell Stride: 4 bytes 00:27:23.905 NVM Subsystem Reset: Not Supported 00:27:23.905 Command Sets Supported 00:27:23.905 NVM Command Set: Supported 00:27:23.905 Boot Partition: Not Supported 00:27:23.905 Memory Page Size 
Minimum: 4096 bytes 00:27:23.905 Memory Page Size Maximum: 65536 bytes 00:27:23.905 Persistent Memory Region: Not Supported 00:27:23.905 Optional Asynchronous Events Supported 00:27:23.905 Namespace Attribute Notices: Supported 00:27:23.905 Firmware Activation Notices: Not Supported 00:27:23.905 ANA Change Notices: Not Supported 00:27:23.905 PLE Aggregate Log Change Notices: Not Supported 00:27:23.905 LBA Status Info Alert Notices: Not Supported 00:27:23.905 EGE Aggregate Log Change Notices: Not Supported 00:27:23.905 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.905 Zone Descriptor Change Notices: Not Supported 00:27:23.905 Discovery Log Change Notices: Not Supported 00:27:23.905 Controller Attributes 00:27:23.905 128-bit Host Identifier: Not Supported 00:27:23.905 Non-Operational Permissive Mode: Not Supported 00:27:23.905 NVM Sets: Not Supported 00:27:23.905 Read Recovery Levels: Not Supported 00:27:23.905 Endurance Groups: Not Supported 00:27:23.905 Predictable Latency Mode: Not Supported 00:27:23.905 Traffic Based Keep ALive: Not Supported 00:27:23.905 Namespace Granularity: Not Supported 00:27:23.905 SQ Associations: Not Supported 00:27:23.905 UUID List: Not Supported 00:27:23.905 Multi-Domain Subsystem: Not Supported 00:27:23.905 Fixed Capacity Management: Not Supported 00:27:23.905 Variable Capacity Management: Not Supported 00:27:23.905 Delete Endurance Group: Not Supported 00:27:23.905 Delete NVM Set: Not Supported 00:27:23.905 Extended LBA Formats Supported: Supported 00:27:23.905 Flexible Data Placement Supported: Not Supported 00:27:23.905 00:27:23.905 Controller Memory Buffer Support 00:27:23.905 ================================ 00:27:23.905 Supported: No 00:27:23.905 00:27:23.905 Persistent Memory Region Support 00:27:23.905 ================================ 00:27:23.905 Supported: No 00:27:23.905 00:27:23.905 Admin Command Set Attributes 00:27:23.905 ============================ 00:27:23.905 Security Send/Receive: Not Supported 00:27:23.905 Format NVM: Supported 00:27:23.905 Firmware Activate/Download: Not Supported 00:27:23.905 Namespace Management: Supported 00:27:23.905 Device Self-Test: Not Supported 00:27:23.905 Directives: Supported 00:27:23.905 NVMe-MI: Not Supported 00:27:23.905 Virtualization Management: Not Supported 00:27:23.905 Doorbell Buffer Config: Supported 00:27:23.905 Get LBA Status Capability: Not Supported 00:27:23.905 Command & Feature Lockdown Capability: Not Supported 00:27:23.905 Abort Command Limit: 4 00:27:23.905 Async Event Request Limit: 4 00:27:23.905 Number of Firmware Slots: N/A 00:27:23.905 Firmware Slot 1 Read-Only: N/A 00:27:23.905 Firmware Activation Without Reset: N/A 00:27:23.905 Multiple Update Detection Support: N/A 00:27:23.905 Firmware Update Granularity: No Information Provided 00:27:23.905 Per-Namespace SMART Log: Yes 00:27:23.905 Asymmetric Namespace Access Log Page: Not Supported 00:27:23.905 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:23.905 Command Effects Log Page: Supported 00:27:23.905 Get Log Page Extended Data: Supported 00:27:23.905 Telemetry Log Pages: Not Supported 00:27:23.905 Persistent Event Log Pages: Not Supported 00:27:23.905 Supported Log Pages Log Page: May Support 00:27:23.905 Commands Supported & Effects Log Page: Not Supported 00:27:23.905 Feature Identifiers & Effects Log Page:May Support 00:27:23.905 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.905 Data Area 4 for Telemetry Log: Not Supported 00:27:23.905 Error Log Page Entries Supported: 1 00:27:23.905 Keep Alive: Not 
Supported 00:27:23.905 00:27:23.905 NVM Command Set Attributes 00:27:23.905 ========================== 00:27:23.905 Submission Queue Entry Size 00:27:23.905 Max: 64 00:27:23.905 Min: 64 00:27:23.905 Completion Queue Entry Size 00:27:23.905 Max: 16 00:27:23.905 Min: 16 00:27:23.905 Number of Namespaces: 256 00:27:23.905 Compare Command: Supported 00:27:23.905 Write Uncorrectable Command: Not Supported 00:27:23.905 Dataset Management Command: Supported 00:27:23.905 Write Zeroes Command: Supported 00:27:23.905 Set Features Save Field: Supported 00:27:23.905 Reservations: Not Supported 00:27:23.905 Timestamp: Supported 00:27:23.905 Copy: Supported 00:27:23.905 Volatile Write Cache: Present 00:27:23.905 Atomic Write Unit (Normal): 1 00:27:23.905 Atomic Write Unit (PFail): 1 00:27:23.905 Atomic Compare & Write Unit: 1 00:27:23.905 Fused Compare & Write: Not Supported 00:27:23.905 Scatter-Gather List 00:27:23.905 SGL Command Set: Supported 00:27:23.905 SGL Keyed: Not Supported 00:27:23.905 SGL Bit Bucket Descriptor: Not Supported 00:27:23.905 SGL Metadata Pointer: Not Supported 00:27:23.905 Oversized SGL: Not Supported 00:27:23.905 SGL Metadata Address: Not Supported 00:27:23.905 SGL Offset: Not Supported 00:27:23.905 Transport SGL Data Block: Not Supported 00:27:23.905 Replay Protected Memory Block: Not Supported 00:27:23.905 00:27:23.905 Firmware Slot Information 00:27:23.905 ========================= 00:27:23.905 Active slot: 1 00:27:23.905 Slot 1 Firmware Revision: 1.0 00:27:23.905 00:27:23.905 00:27:23.905 Commands Supported and Effects 00:27:23.905 ============================== 00:27:23.905 Admin Commands 00:27:23.905 -------------- 00:27:23.905 Delete I/O Submission Queue (00h): Supported 00:27:23.905 Create I/O Submission Queue (01h): Supported 00:27:23.905 Get Log Page (02h): Supported 00:27:23.905 Delete I/O Completion Queue (04h): Supported 00:27:23.905 Create I/O Completion Queue (05h): Supported 00:27:23.905 Identify (06h): Supported 00:27:23.905 Abort (08h): Supported 00:27:23.905 Set Features (09h): Supported 00:27:23.905 Get Features (0Ah): Supported 00:27:23.905 Asynchronous Event Request (0Ch): Supported 00:27:23.905 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:23.905 Directive Send (19h): Supported 00:27:23.905 Directive Receive (1Ah): Supported 00:27:23.905 Virtualization Management (1Ch): Supported 00:27:23.905 Doorbell Buffer Config (7Ch): Supported 00:27:23.905 Format NVM (80h): Supported LBA-Change 00:27:23.905 I/O Commands 00:27:23.905 ------------ 00:27:23.905 Flush (00h): Supported LBA-Change 00:27:23.905 Write (01h): Supported LBA-Change 00:27:23.905 Read (02h): Supported 00:27:23.905 Compare (05h): Supported 00:27:23.905 Write Zeroes (08h): Supported LBA-Change 00:27:23.905 Dataset Management (09h): Supported LBA-Change 00:27:23.905 Unknown (0Ch): Supported 00:27:23.905 Unknown (12h): Supported 00:27:23.905 Copy (19h): Supported LBA-Change 00:27:23.905 Unknown (1Dh): Supported LBA-Change 00:27:23.905 00:27:23.905 Error Log 00:27:23.905 ========= 00:27:23.905 00:27:23.905 Arbitration 00:27:23.905 =========== 00:27:23.905 Arbitration Burst: no limit 00:27:23.905 00:27:23.905 Power Management 00:27:23.905 ================ 00:27:23.905 Number of Power States: 1 00:27:23.905 Current Power State: Power State #0 00:27:23.905 Power State #0: 00:27:23.905 Max Power: 25.00 W 00:27:23.905 Non-Operational State: Operational 00:27:23.905 Entry Latency: 16 microseconds 00:27:23.905 Exit Latency: 4 microseconds 00:27:23.905 Relative Read Throughput: 0 
00:27:23.905 Relative Read Latency: 0 00:27:23.905 Relative Write Throughput: 0 00:27:23.905 Relative Write Latency: 0 00:27:24.164 Idle Power: Not Reported 00:27:24.164 Active Power: Not Reported 00:27:24.164 Non-Operational Permissive Mode: Not Supported 00:27:24.164 00:27:24.164 Health Information 00:27:24.164 ================== 00:27:24.164 Critical Warnings: 00:27:24.164 Available Spare Space: OK 00:27:24.164 Temperature: OK 00:27:24.164 Device Reliability: OK 00:27:24.164 Read Only: No 00:27:24.164 Volatile Memory Backup: OK 00:27:24.164 Current Temperature: 323 Kelvin (50 Celsius) 00:27:24.164 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:24.164 Available Spare: 0% 00:27:24.164 Available Spare Threshold: 0% 00:27:24.164 Life Percentage Used: 0% 00:27:24.164 Data Units Read: 7570 00:27:24.164 Data Units Written: 3687 00:27:24.164 Host Read Commands: 381636 00:27:24.164 Host Write Commands: 205957 00:27:24.164 Controller Busy Time: 0 minutes 00:27:24.164 Power Cycles: 0 00:27:24.164 Power On Hours: 0 hours 00:27:24.164 Unsafe Shutdowns: 0 00:27:24.164 Unrecoverable Media Errors: 0 00:27:24.164 Lifetime Error Log Entries: 0 00:27:24.164 Warning Temperature Time: 0 minutes 00:27:24.164 Critical Temperature Time: 0 minutes 00:27:24.164 00:27:24.164 Number of Queues 00:27:24.164 ================ 00:27:24.164 Number of I/O Submission Queues: 64 00:27:24.164 Number of I/O Completion Queues: 64 00:27:24.164 00:27:24.164 ZNS Specific Controller Data 00:27:24.164 ============================ 00:27:24.164 Zone Append Size Limit: 0 00:27:24.164 00:27:24.164 00:27:24.164 Active Namespaces 00:27:24.164 ================= 00:27:24.164 Namespace ID:1 00:27:24.164 Error Recovery Timeout: Unlimited 00:27:24.164 Command Set Identifier: NVM (00h) 00:27:24.164 Deallocate: Supported 00:27:24.164 Deallocated/Unwritten Error: Supported 00:27:24.164 Deallocated Read Value: All 0x00 00:27:24.164 Deallocate in Write Zeroes: Not Supported 00:27:24.164 Deallocated Guard Field: 0xFFFF 00:27:24.164 Flush: Supported 00:27:24.164 Reservation: Not Supported 00:27:24.164 Namespace Sharing Capabilities: Private 00:27:24.164 Size (in LBAs): 1310720 (5GiB) 00:27:24.164 Capacity (in LBAs): 1310720 (5GiB) 00:27:24.164 Utilization (in LBAs): 1310720 (5GiB) 00:27:24.164 Thin Provisioning: Not Supported 00:27:24.164 Per-NS Atomic Units: No 00:27:24.164 Maximum Single Source Range Length: 128 00:27:24.164 Maximum Copy Length: 128 00:27:24.164 Maximum Source Range Count: 128 00:27:24.164 NGUID/EUI64 Never Reused: No 00:27:24.164 Namespace Write Protected: No 00:27:24.164 Number of LBA Formats: 8 00:27:24.164 Current LBA Format: LBA Format #04 00:27:24.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:24.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:24.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:24.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:24.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:24.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:24.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:24.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:24.164 00:27:24.164 00:27:24.164 real 0m0.599s 00:27:24.164 user 0m0.287s 00:27:24.164 sys 0m0.209s 00:27:24.164 02:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.164 ************************************ 00:27:24.164 END TEST nvme_identify 00:27:24.164 ************************************ 00:27:24.164 02:50:49 -- common/autotest_common.sh@10 -- # set +x 00:27:24.164 
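[Editor's note] The identify test above follows a pattern used throughout nvme.sh: enumerate controller PCI addresses once, then point each tool at them. A condensed sketch of that enumeration; the jq filter, gen_nvme.sh path, and per-controller transport string appear verbatim in the xtrace output above:

    # Collect NVMe PCI addresses (bdfs) from the generated config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

    # Identify everything once, then each controller individually over PCIe.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0
    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done

The nvme_perf runs that follow exercise the same controller at queue depth 128 with 12 KiB I/O (-q 128 -o 12288) for one second per workload (-t 1), first reads and then writes; the -LL flag is what produces the latency summary and per-bucket histogram printed with each run.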
02:50:49 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:27:24.164 02:50:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:24.164 02:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.165 02:50:49 -- common/autotest_common.sh@10 -- # set +x 00:27:24.165 ************************************ 00:27:24.165 START TEST nvme_perf 00:27:24.165 ************************************ 00:27:24.165 02:50:49 -- common/autotest_common.sh@1104 -- # nvme_perf 00:27:24.165 02:50:49 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:27:25.538 Initializing NVMe Controllers 00:27:25.538 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:25.538 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:25.538 Initialization complete. Launching workers. 00:27:25.538 ======================================================== 00:27:25.538 Latency(us) 00:27:25.538 Device Information : IOPS MiB/s Average min max 00:27:25.538 PCIE (0000:00:06.0) NSID 1 from core 0: 53247.95 624.00 2402.20 1284.92 8844.29 00:27:25.538 ======================================================== 00:27:25.538 Total : 53247.95 624.00 2402.20 1284.92 8844.29 00:27:25.538 00:27:25.538 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:25.538 ================================================================================= 00:27:25.538 1.00000% : 1422.429us 00:27:25.538 10.00000% : 1630.953us 00:27:25.538 25.00000% : 1906.502us 00:27:25.538 50.00000% : 2353.338us 00:27:25.538 75.00000% : 2800.175us 00:27:25.538 90.00000% : 3083.171us 00:27:25.538 95.00000% : 3366.167us 00:27:25.538 98.00000% : 3961.949us 00:27:25.538 99.00000% : 4944.989us 00:27:25.538 99.50000% : 6136.553us 00:27:25.538 99.90000% : 7864.320us 00:27:25.538 99.99000% : 8638.836us 00:27:25.538 99.99900% : 8877.149us 00:27:25.538 99.99990% : 8877.149us 00:27:25.538 99.99999% : 8877.149us 00:27:25.538 00:27:25.538 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:25.538 ============================================================================== 00:27:25.538 Range in us Cumulative IO count 00:27:25.538 1280.931 - 1288.378: 0.0038% ( 2) 00:27:25.538 1288.378 - 1295.825: 0.0056% ( 1) 00:27:25.538 1303.273 - 1310.720: 0.0094% ( 2) 00:27:25.538 1310.720 - 1318.167: 0.0169% ( 4) 00:27:25.538 1318.167 - 1325.615: 0.0225% ( 3) 00:27:25.538 1325.615 - 1333.062: 0.0376% ( 8) 00:27:25.538 1333.062 - 1340.509: 0.0507% ( 7) 00:27:25.538 1340.509 - 1347.956: 0.0770% ( 14) 00:27:25.538 1347.956 - 1355.404: 0.1183% ( 22) 00:27:25.538 1355.404 - 1362.851: 0.1615% ( 23) 00:27:25.538 1362.851 - 1370.298: 0.2197% ( 31) 00:27:25.538 1370.298 - 1377.745: 0.2742% ( 29) 00:27:25.538 1377.745 - 1385.193: 0.3606% ( 46) 00:27:25.538 1385.193 - 1392.640: 0.4488% ( 47) 00:27:25.538 1392.640 - 1400.087: 0.5690% ( 64) 00:27:25.538 1400.087 - 1407.535: 0.7061% ( 73) 00:27:25.538 1407.535 - 1414.982: 0.8658% ( 85) 00:27:25.538 1414.982 - 1422.429: 1.0442% ( 95) 00:27:25.538 1422.429 - 1429.876: 1.2320% ( 100) 00:27:25.538 1429.876 - 1437.324: 1.4367% ( 109) 00:27:25.538 1437.324 - 1444.771: 1.6714% ( 125) 00:27:25.538 1444.771 - 1452.218: 1.9099% ( 127) 00:27:25.538 1452.218 - 1459.665: 2.1653% ( 136) 00:27:25.538 1459.665 - 1467.113: 2.4395% ( 146) 00:27:25.538 1467.113 - 1474.560: 2.7081% ( 143) 00:27:25.538 1474.560 - 1482.007: 3.0048% ( 158) 00:27:25.538 1482.007 - 1489.455: 3.2884% ( 151) 00:27:25.538 1489.455 - 1496.902: 3.6076% ( 170) 00:27:25.538 
1496.902 - 1504.349: 3.9325% ( 173) 00:27:25.538 1504.349 - 1511.796: 4.2668% ( 178) 00:27:25.538 1511.796 - 1519.244: 4.6030% ( 179) 00:27:25.538 1519.244 - 1526.691: 4.9298% ( 174) 00:27:25.538 1526.691 - 1534.138: 5.2903% ( 192) 00:27:25.538 1534.138 - 1541.585: 5.6659% ( 200) 00:27:25.538 1541.585 - 1549.033: 6.0134% ( 185) 00:27:25.538 1549.033 - 1556.480: 6.3871% ( 199) 00:27:25.538 1556.480 - 1563.927: 6.7514% ( 194) 00:27:25.539 1563.927 - 1571.375: 7.1327% ( 203) 00:27:25.539 1571.375 - 1578.822: 7.5026% ( 197) 00:27:25.539 1578.822 - 1586.269: 7.8914% ( 207) 00:27:25.539 1586.269 - 1593.716: 8.3102% ( 223) 00:27:25.539 1593.716 - 1601.164: 8.6970% ( 206) 00:27:25.539 1601.164 - 1608.611: 9.0933% ( 211) 00:27:25.539 1608.611 - 1616.058: 9.4708% ( 201) 00:27:25.539 1616.058 - 1623.505: 9.8689% ( 212) 00:27:25.539 1623.505 - 1630.953: 10.2783% ( 218) 00:27:25.539 1630.953 - 1638.400: 10.6858% ( 217) 00:27:25.539 1638.400 - 1645.847: 11.0953% ( 218) 00:27:25.539 1645.847 - 1653.295: 11.4765% ( 203) 00:27:25.539 1653.295 - 1660.742: 11.8990% ( 225) 00:27:25.539 1660.742 - 1668.189: 12.2934% ( 210) 00:27:25.539 1668.189 - 1675.636: 12.6747% ( 203) 00:27:25.539 1675.636 - 1683.084: 13.0878% ( 220) 00:27:25.539 1683.084 - 1690.531: 13.4503% ( 193) 00:27:25.539 1690.531 - 1697.978: 13.8747% ( 226) 00:27:25.539 1697.978 - 1705.425: 14.2616% ( 206) 00:27:25.539 1705.425 - 1712.873: 14.6466% ( 205) 00:27:25.539 1712.873 - 1720.320: 15.0916% ( 237) 00:27:25.539 1720.320 - 1727.767: 15.4616% ( 197) 00:27:25.539 1727.767 - 1735.215: 15.9011% ( 234) 00:27:25.539 1735.215 - 1742.662: 16.2767% ( 200) 00:27:25.539 1742.662 - 1750.109: 16.6898% ( 220) 00:27:25.539 1750.109 - 1757.556: 17.1011% ( 219) 00:27:25.539 1757.556 - 1765.004: 17.5068% ( 216) 00:27:25.539 1765.004 - 1772.451: 17.9049% ( 212) 00:27:25.539 1772.451 - 1779.898: 18.3274% ( 225) 00:27:25.539 1779.898 - 1787.345: 18.7181% ( 208) 00:27:25.539 1787.345 - 1794.793: 19.1350% ( 222) 00:27:25.539 1794.793 - 1802.240: 19.5256% ( 208) 00:27:25.539 1802.240 - 1809.687: 19.9388% ( 220) 00:27:25.539 1809.687 - 1817.135: 20.3444% ( 216) 00:27:25.539 1817.135 - 1824.582: 20.7482% ( 215) 00:27:25.539 1824.582 - 1832.029: 21.1463% ( 212) 00:27:25.539 1832.029 - 1839.476: 21.5388% ( 209) 00:27:25.539 1839.476 - 1846.924: 21.9445% ( 216) 00:27:25.539 1846.924 - 1854.371: 22.3370% ( 209) 00:27:25.539 1854.371 - 1861.818: 22.7539% ( 222) 00:27:25.539 1861.818 - 1869.265: 23.1483% ( 210) 00:27:25.539 1869.265 - 1876.713: 23.5445% ( 211) 00:27:25.539 1876.713 - 1884.160: 23.9446% ( 213) 00:27:25.539 1884.160 - 1891.607: 24.3709% ( 227) 00:27:25.539 1891.607 - 1899.055: 24.7709% ( 213) 00:27:25.539 1899.055 - 1906.502: 25.2066% ( 232) 00:27:25.539 1906.502 - 1921.396: 26.0066% ( 426) 00:27:25.539 1921.396 - 1936.291: 26.8329% ( 440) 00:27:25.539 1936.291 - 1951.185: 27.6461% ( 433) 00:27:25.539 1951.185 - 1966.080: 28.4837% ( 446) 00:27:25.539 1966.080 - 1980.975: 29.2875% ( 428) 00:27:25.539 1980.975 - 1995.869: 30.1082% ( 437) 00:27:25.539 1995.869 - 2010.764: 30.9326% ( 439) 00:27:25.539 2010.764 - 2025.658: 31.7514% ( 436) 00:27:25.539 2025.658 - 2040.553: 32.5796% ( 441) 00:27:25.539 2040.553 - 2055.447: 33.4473% ( 462) 00:27:25.539 2055.447 - 2070.342: 34.2661% ( 436) 00:27:25.539 2070.342 - 2085.236: 35.1093% ( 449) 00:27:25.539 2085.236 - 2100.131: 35.9394% ( 442) 00:27:25.539 2100.131 - 2115.025: 36.7788% ( 447) 00:27:25.539 2115.025 - 2129.920: 37.6164% ( 446) 00:27:25.539 2129.920 - 2144.815: 38.4615% ( 450) 00:27:25.539 2144.815 - 2159.709: 
39.3066% ( 450) 00:27:25.539 2159.709 - 2174.604: 40.1461% ( 447) 00:27:25.539 2174.604 - 2189.498: 40.9856% ( 447) 00:27:25.539 2189.498 - 2204.393: 41.8232% ( 446) 00:27:25.539 2204.393 - 2219.287: 42.6889% ( 461) 00:27:25.539 2219.287 - 2234.182: 43.4965% ( 430) 00:27:25.539 2234.182 - 2249.076: 44.3491% ( 454) 00:27:25.539 2249.076 - 2263.971: 45.1792% ( 442) 00:27:25.539 2263.971 - 2278.865: 46.0149% ( 445) 00:27:25.539 2278.865 - 2293.760: 46.8675% ( 454) 00:27:25.539 2293.760 - 2308.655: 47.7126% ( 450) 00:27:25.539 2308.655 - 2323.549: 48.5596% ( 451) 00:27:25.539 2323.549 - 2338.444: 49.3878% ( 441) 00:27:25.539 2338.444 - 2353.338: 50.2441% ( 456) 00:27:25.539 2353.338 - 2368.233: 51.0892% ( 450) 00:27:25.539 2368.233 - 2383.127: 51.9531% ( 460) 00:27:25.539 2383.127 - 2398.022: 52.7794% ( 440) 00:27:25.539 2398.022 - 2412.916: 53.6245% ( 450) 00:27:25.539 2412.916 - 2427.811: 54.4715% ( 451) 00:27:25.539 2427.811 - 2442.705: 55.3148% ( 449) 00:27:25.539 2442.705 - 2457.600: 56.1580% ( 449) 00:27:25.539 2457.600 - 2472.495: 56.9918% ( 444) 00:27:25.539 2472.495 - 2487.389: 57.8294% ( 446) 00:27:25.539 2487.389 - 2502.284: 58.6801% ( 453) 00:27:25.539 2502.284 - 2517.178: 59.5102% ( 442) 00:27:25.539 2517.178 - 2532.073: 60.3741% ( 460) 00:27:25.539 2532.073 - 2546.967: 61.2417% ( 462) 00:27:25.539 2546.967 - 2561.862: 62.0681% ( 440) 00:27:25.539 2561.862 - 2576.756: 62.8963% ( 441) 00:27:25.539 2576.756 - 2591.651: 63.7620% ( 461) 00:27:25.539 2591.651 - 2606.545: 64.6109% ( 452) 00:27:25.539 2606.545 - 2621.440: 65.4560% ( 450) 00:27:25.539 2621.440 - 2636.335: 66.3048% ( 452) 00:27:25.539 2636.335 - 2651.229: 67.1199% ( 434) 00:27:25.539 2651.229 - 2666.124: 67.9631% ( 449) 00:27:25.539 2666.124 - 2681.018: 68.7988% ( 445) 00:27:25.539 2681.018 - 2695.913: 69.6590% ( 458) 00:27:25.539 2695.913 - 2710.807: 70.5266% ( 462) 00:27:25.539 2710.807 - 2725.702: 71.3510% ( 439) 00:27:25.539 2725.702 - 2740.596: 72.2168% ( 461) 00:27:25.539 2740.596 - 2755.491: 73.0638% ( 451) 00:27:25.539 2755.491 - 2770.385: 73.9032% ( 447) 00:27:25.539 2770.385 - 2785.280: 74.7239% ( 437) 00:27:25.539 2785.280 - 2800.175: 75.5859% ( 459) 00:27:25.539 2800.175 - 2815.069: 76.4010% ( 434) 00:27:25.539 2815.069 - 2829.964: 77.2705% ( 463) 00:27:25.539 2829.964 - 2844.858: 78.1100% ( 447) 00:27:25.539 2844.858 - 2859.753: 78.9588% ( 452) 00:27:25.539 2859.753 - 2874.647: 79.7908% ( 443) 00:27:25.539 2874.647 - 2889.542: 80.6209% ( 442) 00:27:25.539 2889.542 - 2904.436: 81.4716% ( 453) 00:27:25.539 2904.436 - 2919.331: 82.3205% ( 452) 00:27:25.539 2919.331 - 2934.225: 83.1674% ( 451) 00:27:25.539 2934.225 - 2949.120: 83.9956% ( 441) 00:27:25.539 2949.120 - 2964.015: 84.8483% ( 454) 00:27:25.539 2964.015 - 2978.909: 85.6483% ( 426) 00:27:25.539 2978.909 - 2993.804: 86.4314% ( 417) 00:27:25.539 2993.804 - 3008.698: 87.2108% ( 415) 00:27:25.539 3008.698 - 3023.593: 87.9432% ( 390) 00:27:25.539 3023.593 - 3038.487: 88.5817% ( 340) 00:27:25.539 3038.487 - 3053.382: 89.1695% ( 313) 00:27:25.539 3053.382 - 3068.276: 89.7160% ( 291) 00:27:25.539 3068.276 - 3083.171: 90.1874% ( 251) 00:27:25.539 3083.171 - 3098.065: 90.6100% ( 225) 00:27:25.539 3098.065 - 3112.960: 90.9762% ( 195) 00:27:25.539 3112.960 - 3127.855: 91.3161% ( 181) 00:27:25.539 3127.855 - 3142.749: 91.6335% ( 169) 00:27:25.539 3142.749 - 3157.644: 91.9227% ( 154) 00:27:25.539 3157.644 - 3172.538: 92.1819% ( 138) 00:27:25.539 3172.538 - 3187.433: 92.4485% ( 142) 00:27:25.539 3187.433 - 3202.327: 92.6964% ( 132) 00:27:25.539 3202.327 - 3217.222: 
92.9218% ( 120) 00:27:25.539 3217.222 - 3232.116: 93.1603% ( 127) 00:27:25.539 3232.116 - 3247.011: 93.3875% ( 121) 00:27:25.539 3247.011 - 3261.905: 93.6016% ( 114) 00:27:25.539 3261.905 - 3276.800: 93.8026% ( 107) 00:27:25.539 3276.800 - 3291.695: 94.0204% ( 116) 00:27:25.539 3291.695 - 3306.589: 94.2289% ( 111) 00:27:25.539 3306.589 - 3321.484: 94.4373% ( 111) 00:27:25.539 3321.484 - 3336.378: 94.6439% ( 110) 00:27:25.539 3336.378 - 3351.273: 94.8599% ( 115) 00:27:25.539 3351.273 - 3366.167: 95.0590% ( 106) 00:27:25.539 3366.167 - 3381.062: 95.2562% ( 105) 00:27:25.539 3381.062 - 3395.956: 95.4534% ( 105) 00:27:25.539 3395.956 - 3410.851: 95.6505% ( 105) 00:27:25.539 3410.851 - 3425.745: 95.8440% ( 103) 00:27:25.539 3425.745 - 3440.640: 96.0243% ( 96) 00:27:25.539 3440.640 - 3455.535: 96.1933% ( 90) 00:27:25.539 3455.535 - 3470.429: 96.3529% ( 85) 00:27:25.539 3470.429 - 3485.324: 96.5032% ( 80) 00:27:25.539 3485.324 - 3500.218: 96.6346% ( 70) 00:27:25.539 3500.218 - 3515.113: 96.7680% ( 71) 00:27:25.539 3515.113 - 3530.007: 96.8769% ( 58) 00:27:25.539 3530.007 - 3544.902: 96.9802% ( 55) 00:27:25.539 3544.902 - 3559.796: 97.0703% ( 48) 00:27:25.539 3559.796 - 3574.691: 97.1586% ( 47) 00:27:25.539 3574.691 - 3589.585: 97.2468% ( 47) 00:27:25.539 3589.585 - 3604.480: 97.3163% ( 37) 00:27:25.539 3604.480 - 3619.375: 97.3877% ( 38) 00:27:25.539 3619.375 - 3634.269: 97.4384% ( 27) 00:27:25.539 3634.269 - 3649.164: 97.4891% ( 27) 00:27:25.539 3649.164 - 3664.058: 97.5379% ( 26) 00:27:25.539 3664.058 - 3678.953: 97.5755% ( 20) 00:27:25.539 3678.953 - 3693.847: 97.6206% ( 24) 00:27:25.539 3693.847 - 3708.742: 97.6506% ( 16) 00:27:25.539 3708.742 - 3723.636: 97.6769% ( 14) 00:27:25.539 3723.636 - 3738.531: 97.7201% ( 23) 00:27:25.539 3738.531 - 3753.425: 97.7502% ( 16) 00:27:25.539 3753.425 - 3768.320: 97.7821% ( 17) 00:27:25.539 3768.320 - 3783.215: 97.8140% ( 17) 00:27:25.539 3783.215 - 3798.109: 97.8328% ( 10) 00:27:25.539 3798.109 - 3813.004: 97.8572% ( 13) 00:27:25.539 3813.004 - 3842.793: 97.8985% ( 22) 00:27:25.539 3842.793 - 3872.582: 97.9361% ( 20) 00:27:25.539 3872.582 - 3902.371: 97.9642% ( 15) 00:27:25.539 3902.371 - 3932.160: 97.9962% ( 17) 00:27:25.539 3932.160 - 3961.949: 98.0319% ( 19) 00:27:25.539 3961.949 - 3991.738: 98.0657% ( 18) 00:27:25.539 3991.738 - 4021.527: 98.1013% ( 19) 00:27:25.539 4021.527 - 4051.316: 98.1276% ( 14) 00:27:25.539 4051.316 - 4081.105: 98.1614% ( 18) 00:27:25.539 4081.105 - 4110.895: 98.1934% ( 17) 00:27:25.539 4110.895 - 4140.684: 98.2253% ( 17) 00:27:25.539 4140.684 - 4170.473: 98.2516% ( 14) 00:27:25.539 4170.473 - 4200.262: 98.2910% ( 21) 00:27:25.539 4200.262 - 4230.051: 98.3229% ( 17) 00:27:25.539 4230.051 - 4259.840: 98.3586% ( 19) 00:27:25.539 4259.840 - 4289.629: 98.3905% ( 17) 00:27:25.539 4289.629 - 4319.418: 98.4225% ( 17) 00:27:25.539 4319.418 - 4349.207: 98.4563% ( 18) 00:27:25.539 4349.207 - 4378.996: 98.4920% ( 19) 00:27:25.539 4378.996 - 4408.785: 98.5258% ( 18) 00:27:25.539 4408.785 - 4438.575: 98.5558% ( 16) 00:27:25.539 4438.575 - 4468.364: 98.5859% ( 16) 00:27:25.539 4468.364 - 4498.153: 98.6197% ( 18) 00:27:25.539 4498.153 - 4527.942: 98.6516% ( 17) 00:27:25.539 4527.942 - 4557.731: 98.6760% ( 13) 00:27:25.539 4557.731 - 4587.520: 98.7079% ( 17) 00:27:25.540 4587.520 - 4617.309: 98.7361% ( 15) 00:27:25.540 4617.309 - 4647.098: 98.7662% ( 16) 00:27:25.540 4647.098 - 4676.887: 98.8018% ( 19) 00:27:25.540 4676.887 - 4706.676: 98.8300% ( 15) 00:27:25.540 4706.676 - 4736.465: 98.8657% ( 19) 00:27:25.540 4736.465 - 4766.255: 98.8920% ( 
14) 00:27:25.540 4766.255 - 4796.044: 98.9126% ( 11) 00:27:25.540 4796.044 - 4825.833: 98.9295% ( 9) 00:27:25.540 4825.833 - 4855.622: 98.9521% ( 12) 00:27:25.540 4855.622 - 4885.411: 98.9746% ( 12) 00:27:25.540 4885.411 - 4915.200: 98.9896% ( 8) 00:27:25.540 4915.200 - 4944.989: 99.0103% ( 11) 00:27:25.540 4944.989 - 4974.778: 99.0328% ( 12) 00:27:25.540 4974.778 - 5004.567: 99.0497% ( 9) 00:27:25.540 5004.567 - 5034.356: 99.0685% ( 10) 00:27:25.540 5034.356 - 5064.145: 99.0873% ( 10) 00:27:25.540 5064.145 - 5093.935: 99.1042% ( 9) 00:27:25.540 5093.935 - 5123.724: 99.1192% ( 8) 00:27:25.540 5123.724 - 5153.513: 99.1361% ( 9) 00:27:25.540 5153.513 - 5183.302: 99.1568% ( 11) 00:27:25.540 5183.302 - 5213.091: 99.1680% ( 6) 00:27:25.540 5213.091 - 5242.880: 99.1812% ( 7) 00:27:25.540 5242.880 - 5272.669: 99.1962% ( 8) 00:27:25.540 5272.669 - 5302.458: 99.2112% ( 8) 00:27:25.540 5302.458 - 5332.247: 99.2300% ( 10) 00:27:25.540 5332.247 - 5362.036: 99.2413% ( 6) 00:27:25.540 5362.036 - 5391.825: 99.2563% ( 8) 00:27:25.540 5391.825 - 5421.615: 99.2657% ( 5) 00:27:25.540 5421.615 - 5451.404: 99.2788% ( 7) 00:27:25.540 5451.404 - 5481.193: 99.2882% ( 5) 00:27:25.540 5481.193 - 5510.982: 99.2976% ( 5) 00:27:25.540 5510.982 - 5540.771: 99.3089% ( 6) 00:27:25.540 5540.771 - 5570.560: 99.3183% ( 5) 00:27:25.540 5570.560 - 5600.349: 99.3296% ( 6) 00:27:25.540 5600.349 - 5630.138: 99.3389% ( 5) 00:27:25.540 5630.138 - 5659.927: 99.3502% ( 6) 00:27:25.540 5659.927 - 5689.716: 99.3577% ( 4) 00:27:25.540 5689.716 - 5719.505: 99.3671% ( 5) 00:27:25.540 5719.505 - 5749.295: 99.3803% ( 7) 00:27:25.540 5749.295 - 5779.084: 99.3859% ( 3) 00:27:25.540 5779.084 - 5808.873: 99.3990% ( 7) 00:27:25.540 5808.873 - 5838.662: 99.4084% ( 5) 00:27:25.540 5838.662 - 5868.451: 99.4159% ( 4) 00:27:25.540 5868.451 - 5898.240: 99.4291% ( 7) 00:27:25.540 5898.240 - 5928.029: 99.4385% ( 5) 00:27:25.540 5928.029 - 5957.818: 99.4479% ( 5) 00:27:25.540 5957.818 - 5987.607: 99.4535% ( 3) 00:27:25.540 5987.607 - 6017.396: 99.4666% ( 7) 00:27:25.540 6017.396 - 6047.185: 99.4760% ( 5) 00:27:25.540 6047.185 - 6076.975: 99.4854% ( 5) 00:27:25.540 6076.975 - 6106.764: 99.4967% ( 6) 00:27:25.540 6106.764 - 6136.553: 99.5042% ( 4) 00:27:25.540 6136.553 - 6166.342: 99.5174% ( 7) 00:27:25.540 6166.342 - 6196.131: 99.5230% ( 3) 00:27:25.540 6196.131 - 6225.920: 99.5343% ( 6) 00:27:25.540 6225.920 - 6255.709: 99.5436% ( 5) 00:27:25.540 6255.709 - 6285.498: 99.5549% ( 6) 00:27:25.540 6285.498 - 6315.287: 99.5624% ( 4) 00:27:25.540 6315.287 - 6345.076: 99.5756% ( 7) 00:27:25.540 6345.076 - 6374.865: 99.5850% ( 5) 00:27:25.540 6374.865 - 6404.655: 99.5925% ( 4) 00:27:25.540 6404.655 - 6434.444: 99.6037% ( 6) 00:27:25.540 6434.444 - 6464.233: 99.6131% ( 5) 00:27:25.540 6464.233 - 6494.022: 99.6244% ( 6) 00:27:25.540 6494.022 - 6523.811: 99.6300% ( 3) 00:27:25.540 6523.811 - 6553.600: 99.6394% ( 5) 00:27:25.540 6553.600 - 6583.389: 99.6488% ( 5) 00:27:25.540 6583.389 - 6613.178: 99.6526% ( 2) 00:27:25.540 6613.178 - 6642.967: 99.6601% ( 4) 00:27:25.540 6642.967 - 6672.756: 99.6676% ( 4) 00:27:25.540 6672.756 - 6702.545: 99.6751% ( 4) 00:27:25.540 6702.545 - 6732.335: 99.6826% ( 4) 00:27:25.540 6732.335 - 6762.124: 99.6883% ( 3) 00:27:25.540 6762.124 - 6791.913: 99.6958% ( 4) 00:27:25.540 6791.913 - 6821.702: 99.7033% ( 4) 00:27:25.540 6821.702 - 6851.491: 99.7089% ( 3) 00:27:25.540 6851.491 - 6881.280: 99.7164% ( 4) 00:27:25.540 6881.280 - 6911.069: 99.7239% ( 4) 00:27:25.540 6911.069 - 6940.858: 99.7277% ( 2) 00:27:25.540 6940.858 - 6970.647: 
99.7333% ( 3) 00:27:25.540 6970.647 - 7000.436: 99.7408% ( 4) 00:27:25.540 7000.436 - 7030.225: 99.7427% ( 1) 00:27:25.540 7030.225 - 7060.015: 99.7502% ( 4) 00:27:25.540 7060.015 - 7089.804: 99.7540% ( 2) 00:27:25.540 7089.804 - 7119.593: 99.7615% ( 4) 00:27:25.540 7119.593 - 7149.382: 99.7652% ( 2) 00:27:25.540 7149.382 - 7179.171: 99.7709% ( 3) 00:27:25.540 7179.171 - 7208.960: 99.7765% ( 3) 00:27:25.540 7208.960 - 7238.749: 99.7822% ( 3) 00:27:25.540 7238.749 - 7268.538: 99.7878% ( 3) 00:27:25.540 7268.538 - 7298.327: 99.7934% ( 3) 00:27:25.540 7298.327 - 7328.116: 99.7991% ( 3) 00:27:25.540 7328.116 - 7357.905: 99.8047% ( 3) 00:27:25.540 7357.905 - 7387.695: 99.8122% ( 4) 00:27:25.540 7387.695 - 7417.484: 99.8160% ( 2) 00:27:25.540 7417.484 - 7447.273: 99.8216% ( 3) 00:27:25.540 7447.273 - 7477.062: 99.8272% ( 3) 00:27:25.540 7477.062 - 7506.851: 99.8329% ( 3) 00:27:25.540 7506.851 - 7536.640: 99.8404% ( 4) 00:27:25.540 7536.640 - 7566.429: 99.8422% ( 1) 00:27:25.540 7566.429 - 7596.218: 99.8498% ( 4) 00:27:25.540 7596.218 - 7626.007: 99.8554% ( 3) 00:27:25.540 7626.007 - 7685.585: 99.8667% ( 6) 00:27:25.540 7685.585 - 7745.164: 99.8798% ( 7) 00:27:25.540 7745.164 - 7804.742: 99.8892% ( 5) 00:27:25.540 7804.742 - 7864.320: 99.9005% ( 6) 00:27:25.540 7864.320 - 7923.898: 99.9099% ( 5) 00:27:25.540 7923.898 - 7983.476: 99.9211% ( 6) 00:27:25.540 7983.476 - 8043.055: 99.9286% ( 4) 00:27:25.540 8043.055 - 8102.633: 99.9418% ( 7) 00:27:25.540 8102.633 - 8162.211: 99.9512% ( 5) 00:27:25.540 8162.211 - 8221.789: 99.9606% ( 5) 00:27:25.540 8221.789 - 8281.367: 99.9662% ( 3) 00:27:25.540 8281.367 - 8340.945: 99.9718% ( 3) 00:27:25.540 8340.945 - 8400.524: 99.9756% ( 2) 00:27:25.540 8400.524 - 8460.102: 99.9793% ( 2) 00:27:25.540 8460.102 - 8519.680: 99.9831% ( 2) 00:27:25.540 8519.680 - 8579.258: 99.9887% ( 3) 00:27:25.540 8579.258 - 8638.836: 99.9925% ( 2) 00:27:25.540 8638.836 - 8698.415: 99.9944% ( 1) 00:27:25.540 8698.415 - 8757.993: 99.9962% ( 1) 00:27:25.540 8757.993 - 8817.571: 99.9981% ( 1) 00:27:25.540 8817.571 - 8877.149: 100.0000% ( 1) 00:27:25.540 00:27:25.540 02:50:50 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:27:26.916 Initializing NVMe Controllers 00:27:26.916 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:26.916 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:26.916 Initialization complete. Launching workers. 
00:27:26.916 ======================================================== 00:27:26.916 Latency(us) 00:27:26.916 Device Information : IOPS MiB/s Average min max 00:27:26.916 PCIE (0000:00:06.0) NSID 1 from core 0: 50271.66 589.12 2548.68 1381.89 5388.86 00:27:26.916 ======================================================== 00:27:26.916 Total : 50271.66 589.12 2548.68 1381.89 5388.86 00:27:26.916 00:27:26.916 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:26.916 ================================================================================= 00:27:26.916 1.00000% : 1824.582us 00:27:26.916 10.00000% : 2055.447us 00:27:26.916 25.00000% : 2219.287us 00:27:26.916 50.00000% : 2427.811us 00:27:26.916 75.00000% : 2800.175us 00:27:26.916 90.00000% : 3217.222us 00:27:26.916 95.00000% : 3530.007us 00:27:26.916 98.00000% : 3932.160us 00:27:26.916 99.00000% : 4170.473us 00:27:26.916 99.50000% : 4349.207us 00:27:26.916 99.90000% : 4766.255us 00:27:26.916 99.99000% : 5272.669us 00:27:26.916 99.99900% : 5391.825us 00:27:26.916 99.99990% : 5391.825us 00:27:26.916 99.99999% : 5391.825us 00:27:26.916 00:27:26.916 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:26.916 ============================================================================== 00:27:26.916 Range in us Cumulative IO count 00:27:26.916 1377.745 - 1385.193: 0.0020% ( 1) 00:27:26.916 1414.982 - 1422.429: 0.0060% ( 2) 00:27:26.916 1437.324 - 1444.771: 0.0080% ( 1) 00:27:26.916 1444.771 - 1452.218: 0.0119% ( 2) 00:27:26.916 1452.218 - 1459.665: 0.0139% ( 1) 00:27:26.917 1459.665 - 1467.113: 0.0159% ( 1) 00:27:26.917 1482.007 - 1489.455: 0.0179% ( 1) 00:27:26.917 1496.902 - 1504.349: 0.0239% ( 3) 00:27:26.917 1504.349 - 1511.796: 0.0298% ( 3) 00:27:26.917 1519.244 - 1526.691: 0.0338% ( 2) 00:27:26.917 1526.691 - 1534.138: 0.0378% ( 2) 00:27:26.917 1534.138 - 1541.585: 0.0477% ( 5) 00:27:26.917 1541.585 - 1549.033: 0.0517% ( 2) 00:27:26.917 1549.033 - 1556.480: 0.0557% ( 2) 00:27:26.917 1556.480 - 1563.927: 0.0656% ( 5) 00:27:26.917 1563.927 - 1571.375: 0.0716% ( 3) 00:27:26.917 1571.375 - 1578.822: 0.0776% ( 3) 00:27:26.917 1578.822 - 1586.269: 0.0796% ( 1) 00:27:26.917 1586.269 - 1593.716: 0.0835% ( 2) 00:27:26.917 1593.716 - 1601.164: 0.0855% ( 1) 00:27:26.917 1601.164 - 1608.611: 0.0935% ( 4) 00:27:26.917 1608.611 - 1616.058: 0.1034% ( 5) 00:27:26.917 1616.058 - 1623.505: 0.1074% ( 2) 00:27:26.917 1623.505 - 1630.953: 0.1094% ( 1) 00:27:26.917 1630.953 - 1638.400: 0.1173% ( 4) 00:27:26.917 1638.400 - 1645.847: 0.1233% ( 3) 00:27:26.917 1645.847 - 1653.295: 0.1293% ( 3) 00:27:26.917 1653.295 - 1660.742: 0.1452% ( 8) 00:27:26.917 1660.742 - 1668.189: 0.1531% ( 4) 00:27:26.917 1668.189 - 1675.636: 0.1671% ( 7) 00:27:26.917 1675.636 - 1683.084: 0.1830% ( 8) 00:27:26.917 1683.084 - 1690.531: 0.2049% ( 11) 00:27:26.917 1690.531 - 1697.978: 0.2247% ( 10) 00:27:26.917 1697.978 - 1705.425: 0.2407% ( 8) 00:27:26.917 1705.425 - 1712.873: 0.2625% ( 11) 00:27:26.917 1712.873 - 1720.320: 0.2844% ( 11) 00:27:26.917 1720.320 - 1727.767: 0.3063% ( 11) 00:27:26.917 1727.767 - 1735.215: 0.3361% ( 15) 00:27:26.917 1735.215 - 1742.662: 0.3580% ( 11) 00:27:26.917 1742.662 - 1750.109: 0.3878% ( 15) 00:27:26.917 1750.109 - 1757.556: 0.4236% ( 18) 00:27:26.917 1757.556 - 1765.004: 0.4614% ( 19) 00:27:26.917 1765.004 - 1772.451: 0.5052% ( 22) 00:27:26.917 1772.451 - 1779.898: 0.5748% ( 35) 00:27:26.917 1779.898 - 1787.345: 0.6265% ( 26) 00:27:26.917 1787.345 - 1794.793: 0.6961% ( 35) 00:27:26.917 1794.793 - 1802.240: 0.7717% ( 38) 
00:27:26.917 1802.240 - 1809.687: 0.8333% ( 31) 00:27:26.917 1809.687 - 1817.135: 0.9268% ( 47) 00:27:26.917 1817.135 - 1824.582: 1.0024% ( 38) 00:27:26.917 1824.582 - 1832.029: 1.1138% ( 56) 00:27:26.917 1832.029 - 1839.476: 1.2351% ( 61) 00:27:26.917 1839.476 - 1846.924: 1.3723% ( 69) 00:27:26.917 1846.924 - 1854.371: 1.4996% ( 64) 00:27:26.917 1854.371 - 1861.818: 1.6528% ( 77) 00:27:26.917 1861.818 - 1869.265: 1.8159% ( 82) 00:27:26.917 1869.265 - 1876.713: 1.9630% ( 74) 00:27:26.917 1876.713 - 1884.160: 2.1102% ( 74) 00:27:26.917 1884.160 - 1891.607: 2.3151% ( 103) 00:27:26.917 1891.607 - 1899.055: 2.5219% ( 104) 00:27:26.917 1899.055 - 1906.502: 2.7467% ( 113) 00:27:26.917 1906.502 - 1921.396: 3.2141% ( 235) 00:27:26.917 1921.396 - 1936.291: 3.7511% ( 270) 00:27:26.917 1936.291 - 1951.185: 4.3875% ( 320) 00:27:26.917 1951.185 - 1966.080: 5.0737% ( 345) 00:27:26.917 1966.080 - 1980.975: 5.8354% ( 383) 00:27:26.917 1980.975 - 1995.869: 6.9293% ( 550) 00:27:26.917 1995.869 - 2010.764: 7.9039% ( 490) 00:27:26.917 2010.764 - 2025.658: 8.8884% ( 495) 00:27:26.917 2025.658 - 2040.553: 9.8471% ( 482) 00:27:26.917 2040.553 - 2055.447: 10.8435% ( 501) 00:27:26.917 2055.447 - 2070.342: 12.1403% ( 652) 00:27:26.917 2070.342 - 2085.236: 13.3614% ( 614) 00:27:26.917 2085.236 - 2100.131: 14.5966% ( 621) 00:27:26.917 2100.131 - 2115.025: 15.8277% ( 619) 00:27:26.917 2115.025 - 2129.920: 17.1324% ( 656) 00:27:26.917 2129.920 - 2144.815: 18.4650% ( 670) 00:27:26.917 2144.815 - 2159.709: 19.8870% ( 715) 00:27:26.917 2159.709 - 2174.604: 21.3389% ( 730) 00:27:26.917 2174.604 - 2189.498: 22.9062% ( 788) 00:27:26.917 2189.498 - 2204.393: 24.5769% ( 840) 00:27:26.917 2204.393 - 2219.287: 26.2157% ( 824) 00:27:26.917 2219.287 - 2234.182: 27.8347% ( 814) 00:27:26.917 2234.182 - 2249.076: 29.5392% ( 857) 00:27:26.917 2249.076 - 2263.971: 31.2238% ( 847) 00:27:26.917 2263.971 - 2278.865: 32.7373% ( 761) 00:27:26.917 2278.865 - 2293.760: 34.4160% ( 844) 00:27:26.917 2293.760 - 2308.655: 36.6097% ( 1103) 00:27:26.917 2308.655 - 2323.549: 38.5648% ( 983) 00:27:26.917 2323.549 - 2338.444: 40.4205% ( 933) 00:27:26.917 2338.444 - 2353.338: 41.9977% ( 793) 00:27:26.917 2353.338 - 2368.233: 43.6564% ( 834) 00:27:26.917 2368.233 - 2383.127: 45.3529% ( 853) 00:27:26.917 2383.127 - 2398.022: 47.0017% ( 829) 00:27:26.917 2398.022 - 2412.916: 48.6903% ( 849) 00:27:26.917 2412.916 - 2427.811: 50.1621% ( 740) 00:27:26.917 2427.811 - 2442.705: 51.6796% ( 763) 00:27:26.917 2442.705 - 2457.600: 53.1793% ( 754) 00:27:26.917 2457.600 - 2472.495: 54.5596% ( 694) 00:27:26.917 2472.495 - 2487.389: 55.7668% ( 607) 00:27:26.917 2487.389 - 2502.284: 57.0377% ( 639) 00:27:26.917 2502.284 - 2517.178: 58.1794% ( 574) 00:27:26.917 2517.178 - 2532.073: 59.3270% ( 577) 00:27:26.917 2532.073 - 2546.967: 60.5104% ( 595) 00:27:26.917 2546.967 - 2561.862: 61.5665% ( 531) 00:27:26.917 2561.862 - 2576.756: 62.4416% ( 440) 00:27:26.917 2576.756 - 2591.651: 63.4519% ( 508) 00:27:26.917 2591.651 - 2606.545: 64.3748% ( 464) 00:27:26.917 2606.545 - 2621.440: 65.2658% ( 448) 00:27:26.917 2621.440 - 2636.335: 66.1429% ( 441) 00:27:26.917 2636.335 - 2651.229: 67.0936% ( 478) 00:27:26.917 2651.229 - 2666.124: 68.0125% ( 462) 00:27:26.917 2666.124 - 2681.018: 68.8697% ( 431) 00:27:26.917 2681.018 - 2695.913: 69.7568% ( 446) 00:27:26.917 2695.913 - 2710.807: 70.5981% ( 423) 00:27:26.917 2710.807 - 2725.702: 71.3857% ( 396) 00:27:26.917 2725.702 - 2740.596: 72.1534% ( 386) 00:27:26.917 2740.596 - 2755.491: 72.9549% ( 403) 00:27:26.917 2755.491 - 2770.385: 
73.7405% ( 395) 00:27:26.917 2770.385 - 2785.280: 74.5261% ( 395) 00:27:26.917 2785.280 - 2800.175: 75.2760% ( 377) 00:27:26.917 2800.175 - 2815.069: 75.9800% ( 354) 00:27:26.917 2815.069 - 2829.964: 76.7617% ( 393) 00:27:26.917 2829.964 - 2844.858: 77.4339% ( 338) 00:27:26.917 2844.858 - 2859.753: 78.1320% ( 351) 00:27:26.917 2859.753 - 2874.647: 78.7943% ( 333) 00:27:26.917 2874.647 - 2889.542: 79.4845% ( 347) 00:27:26.917 2889.542 - 2904.436: 80.1070% ( 313) 00:27:26.917 2904.436 - 2919.331: 80.7653% ( 331) 00:27:26.917 2919.331 - 2934.225: 81.3322% ( 285) 00:27:26.917 2934.225 - 2949.120: 81.9706% ( 321) 00:27:26.917 2949.120 - 2964.015: 82.5474% ( 290) 00:27:26.917 2964.015 - 2978.909: 83.1202% ( 288) 00:27:26.917 2978.909 - 2993.804: 83.6731% ( 278) 00:27:26.917 2993.804 - 3008.698: 84.2360% ( 283) 00:27:26.917 3008.698 - 3023.593: 84.7411% ( 254) 00:27:26.917 3023.593 - 3038.487: 85.2702% ( 266) 00:27:26.917 3038.487 - 3053.382: 85.7237% ( 228) 00:27:26.917 3053.382 - 3068.276: 86.2070% ( 243) 00:27:26.917 3068.276 - 3083.171: 86.6584% ( 227) 00:27:26.917 3083.171 - 3098.065: 87.0960% ( 220) 00:27:26.917 3098.065 - 3112.960: 87.5256% ( 216) 00:27:26.917 3112.960 - 3127.855: 87.9254% ( 201) 00:27:26.917 3127.855 - 3142.749: 88.3609% ( 219) 00:27:26.917 3142.749 - 3157.644: 88.7468% ( 194) 00:27:26.917 3157.644 - 3172.538: 89.1446% ( 200) 00:27:26.917 3172.538 - 3187.433: 89.5026% ( 180) 00:27:26.917 3187.433 - 3202.327: 89.8367% ( 168) 00:27:26.917 3202.327 - 3217.222: 90.1708% ( 168) 00:27:26.917 3217.222 - 3232.116: 90.4732% ( 152) 00:27:26.917 3232.116 - 3247.011: 90.8053% ( 167) 00:27:26.918 3247.011 - 3261.905: 91.1255% ( 161) 00:27:26.918 3261.905 - 3276.800: 91.3841% ( 130) 00:27:26.918 3276.800 - 3291.695: 91.6645% ( 141) 00:27:26.918 3291.695 - 3306.589: 91.9211% ( 129) 00:27:26.918 3306.589 - 3321.484: 92.1896% ( 135) 00:27:26.918 3321.484 - 3336.378: 92.4243% ( 118) 00:27:26.918 3336.378 - 3351.273: 92.6550% ( 116) 00:27:26.918 3351.273 - 3366.167: 92.9056% ( 126) 00:27:26.918 3366.167 - 3381.062: 93.1303% ( 113) 00:27:26.918 3381.062 - 3395.956: 93.3690% ( 120) 00:27:26.918 3395.956 - 3410.851: 93.6017% ( 117) 00:27:26.918 3410.851 - 3425.745: 93.7867% ( 93) 00:27:26.918 3425.745 - 3440.640: 94.0253% ( 120) 00:27:26.918 3440.640 - 3455.535: 94.2342% ( 105) 00:27:26.918 3455.535 - 3470.429: 94.4211% ( 94) 00:27:26.918 3470.429 - 3485.324: 94.6260% ( 103) 00:27:26.918 3485.324 - 3500.218: 94.8209% ( 98) 00:27:26.918 3500.218 - 3515.113: 94.9939% ( 87) 00:27:26.918 3515.113 - 3530.007: 95.1411% ( 74) 00:27:26.918 3530.007 - 3544.902: 95.3221% ( 91) 00:27:26.918 3544.902 - 3559.796: 95.4733% ( 76) 00:27:26.918 3559.796 - 3574.691: 95.6244% ( 76) 00:27:26.918 3574.691 - 3589.585: 95.7676% ( 72) 00:27:26.918 3589.585 - 3604.480: 95.9208% ( 77) 00:27:26.918 3604.480 - 3619.375: 96.0341% ( 57) 00:27:26.918 3619.375 - 3634.269: 96.1654% ( 66) 00:27:26.918 3634.269 - 3649.164: 96.2967% ( 66) 00:27:26.918 3649.164 - 3664.058: 96.4140% ( 59) 00:27:26.918 3664.058 - 3678.953: 96.5194% ( 53) 00:27:26.918 3678.953 - 3693.847: 96.6209% ( 51) 00:27:26.918 3693.847 - 3708.742: 96.7302% ( 55) 00:27:26.918 3708.742 - 3723.636: 96.8357% ( 53) 00:27:26.918 3723.636 - 3738.531: 96.9470% ( 56) 00:27:26.918 3738.531 - 3753.425: 97.0465% ( 50) 00:27:26.918 3753.425 - 3768.320: 97.1439% ( 49) 00:27:26.918 3768.320 - 3783.215: 97.2394% ( 48) 00:27:26.918 3783.215 - 3798.109: 97.3329% ( 47) 00:27:26.918 3798.109 - 3813.004: 97.4164% ( 42) 00:27:26.918 3813.004 - 3842.793: 97.5855% ( 85) 00:27:26.918 
3842.793 - 3872.582: 97.7585% ( 87) 00:27:26.918 3872.582 - 3902.371: 97.9176% ( 80) 00:27:26.918 3902.371 - 3932.160: 98.0807% ( 82) 00:27:26.918 3932.160 - 3961.949: 98.2199% ( 70) 00:27:26.918 3961.949 - 3991.738: 98.3572% ( 69) 00:27:26.918 3991.738 - 4021.527: 98.5024% ( 73) 00:27:26.918 4021.527 - 4051.316: 98.6277% ( 63) 00:27:26.918 4051.316 - 4081.105: 98.7510% ( 62) 00:27:26.918 4081.105 - 4110.895: 98.8783% ( 64) 00:27:26.918 4110.895 - 4140.684: 98.9936% ( 58) 00:27:26.918 4140.684 - 4170.473: 99.0851% ( 46) 00:27:26.918 4170.473 - 4200.262: 99.1647% ( 40) 00:27:26.918 4200.262 - 4230.051: 99.2562% ( 46) 00:27:26.918 4230.051 - 4259.840: 99.3397% ( 42) 00:27:26.918 4259.840 - 4289.629: 99.4153% ( 38) 00:27:26.918 4289.629 - 4319.418: 99.4809% ( 33) 00:27:26.918 4319.418 - 4349.207: 99.5326% ( 26) 00:27:26.918 4349.207 - 4378.996: 99.5863% ( 27) 00:27:26.918 4378.996 - 4408.785: 99.6241% ( 19) 00:27:26.918 4408.785 - 4438.575: 99.6619% ( 19) 00:27:26.918 4438.575 - 4468.364: 99.6897% ( 14) 00:27:26.918 4468.364 - 4498.153: 99.7176% ( 14) 00:27:26.918 4498.153 - 4527.942: 99.7454% ( 14) 00:27:26.918 4527.942 - 4557.731: 99.7673% ( 11) 00:27:26.918 4557.731 - 4587.520: 99.7912% ( 12) 00:27:26.918 4587.520 - 4617.309: 99.8111% ( 10) 00:27:26.918 4617.309 - 4647.098: 99.8369% ( 13) 00:27:26.918 4647.098 - 4676.887: 99.8628% ( 13) 00:27:26.918 4676.887 - 4706.676: 99.8787% ( 8) 00:27:26.918 4706.676 - 4736.465: 99.8906% ( 6) 00:27:26.918 4736.465 - 4766.255: 99.9025% ( 6) 00:27:26.918 4766.255 - 4796.044: 99.9085% ( 3) 00:27:26.918 4796.044 - 4825.833: 99.9204% ( 6) 00:27:26.918 4825.833 - 4855.622: 99.9284% ( 4) 00:27:26.918 4855.622 - 4885.411: 99.9344% ( 3) 00:27:26.918 4885.411 - 4915.200: 99.9403% ( 3) 00:27:26.918 4915.200 - 4944.989: 99.9443% ( 2) 00:27:26.918 4944.989 - 4974.778: 99.9523% ( 4) 00:27:26.918 4974.778 - 5004.567: 99.9562% ( 2) 00:27:26.918 5004.567 - 5034.356: 99.9602% ( 2) 00:27:26.918 5034.356 - 5064.145: 99.9662% ( 3) 00:27:26.918 5064.145 - 5093.935: 99.9702% ( 2) 00:27:26.918 5093.935 - 5123.724: 99.9761% ( 3) 00:27:26.918 5123.724 - 5153.513: 99.9801% ( 2) 00:27:26.918 5153.513 - 5183.302: 99.9841% ( 2) 00:27:26.918 5183.302 - 5213.091: 99.9861% ( 1) 00:27:26.918 5213.091 - 5242.880: 99.9881% ( 1) 00:27:26.918 5242.880 - 5272.669: 99.9920% ( 2) 00:27:26.918 5272.669 - 5302.458: 99.9940% ( 1) 00:27:26.918 5302.458 - 5332.247: 99.9980% ( 2) 00:27:26.918 5362.036 - 5391.825: 100.0000% ( 1) 00:27:26.918 00:27:26.918 02:50:51 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:27:26.918 00:27:26.918 real 0m2.584s 00:27:26.918 user 0m2.247s 00:27:26.918 sys 0m0.186s 00:27:26.918 02:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.918 ************************************ 00:27:26.918 END TEST nvme_perf 00:27:26.918 ************************************ 00:27:26.918 02:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:26.918 02:50:51 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:26.918 02:50:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:26.918 02:50:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.918 02:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:26.918 ************************************ 00:27:26.918 START TEST nvme_hello_world 00:27:26.918 ************************************ 00:27:26.918 02:50:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:26.918 Initializing 
NVMe Controllers 00:27:26.918 Attached to 0000:00:06.0 00:27:26.918 Namespace ID: 1 size: 5GB 00:27:26.918 Initialization complete. 00:27:26.918 INFO: using host memory buffer for IO 00:27:26.918 Hello world! 00:27:26.918 00:27:26.918 real 0m0.282s 00:27:26.918 user 0m0.082s 00:27:26.918 sys 0m0.122s 00:27:26.918 02:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.918 02:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:26.918 ************************************ 00:27:26.918 END TEST nvme_hello_world 00:27:26.918 ************************************ 00:27:26.918 02:50:52 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:26.918 02:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:26.918 02:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.918 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.178 ************************************ 00:27:27.178 START TEST nvme_sgl 00:27:27.178 ************************************ 00:27:27.178 02:50:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:27.178 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:27:27.178 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:27:27.178 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:27:27.436 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:27:27.436 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:27:27.436 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:27:27.436 NVMe Readv/Writev Request test 00:27:27.436 Attached to 0000:00:06.0 00:27:27.436 0000:00:06.0: build_io_request_2 test passed 00:27:27.436 0000:00:06.0: build_io_request_4 test passed 00:27:27.436 0000:00:06.0: build_io_request_5 test passed 00:27:27.436 0000:00:06.0: build_io_request_6 test passed 00:27:27.436 0000:00:06.0: build_io_request_7 test passed 00:27:27.436 0000:00:06.0: build_io_request_10 test passed 00:27:27.436 Cleaning up... 00:27:27.436 00:27:27.436 real 0m0.342s 00:27:27.436 user 0m0.146s 00:27:27.436 sys 0m0.098s 00:27:27.436 02:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.436 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.436 ************************************ 00:27:27.436 END TEST nvme_sgl 00:27:27.436 ************************************ 00:27:27.436 02:50:52 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:27.436 02:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:27.436 02:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.436 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.436 ************************************ 00:27:27.436 START TEST nvme_e2edp 00:27:27.436 ************************************ 00:27:27.436 02:50:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:27.694 NVMe Write/Read with End-to-End data protection test 00:27:27.694 Attached to 0000:00:06.0 00:27:27.694 Cleaning up... 
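[Sketch, not from the captured log] The hello_world and SGL checks above are standalone SPDK example binaries driven by run_test. Assuming the usual SPDK workflow (the setup.sh binding step is an assumption, it is not traced in this log), a manual replay would look roughly like:

    # bind the NVMe device to a userspace driver (assumed prerequisite)
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # -i 0 selects DPDK shared-memory id 0, matching the invocation traced above
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0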
00:27:27.694 00:27:27.694 real 0m0.244s 00:27:27.694 user 0m0.085s 00:27:27.694 sys 0m0.085s 00:27:27.694 ************************************ 00:27:27.694 END TEST nvme_e2edp 00:27:27.694 02:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.694 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.694 ************************************ 00:27:27.694 02:50:52 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:27.694 02:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:27.694 02:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.694 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.694 ************************************ 00:27:27.694 START TEST nvme_reserve 00:27:27.694 ************************************ 00:27:27.694 02:50:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:27.953 ===================================================== 00:27:27.953 NVMe Controller at PCI bus 0, device 6, function 0 00:27:27.953 ===================================================== 00:27:27.953 Reservations: Not Supported 00:27:27.953 Reservation test passed 00:27:27.953 00:27:27.953 real 0m0.269s 00:27:27.953 user 0m0.067s 00:27:27.953 sys 0m0.127s 00:27:27.953 02:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.953 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.953 ************************************ 00:27:27.953 END TEST nvme_reserve 00:27:27.953 ************************************ 00:27:27.953 02:50:52 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:27.953 02:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:27.953 02:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.953 02:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:27.953 ************************************ 00:27:27.953 START TEST nvme_err_injection 00:27:27.953 ************************************ 00:27:27.953 02:50:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:28.211 NVMe Error Injection test 00:27:28.211 Attached to 0000:00:06.0 00:27:28.211 0000:00:06.0: get features failed as expected 00:27:28.211 0000:00:06.0: get features successfully as expected 00:27:28.211 0000:00:06.0: read failed as expected 00:27:28.211 0000:00:06.0: read successfully as expected 00:27:28.211 Cleaning up... 
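[Sketch, not from the captured log] nvme_err_injection above injects a one-shot failure into a Get Features admin command (hence the "failed as expected" / "successfully as expected" pairs) and repeats the check for a read. For contrast, a kernel-driver probe of the same feature with nvme-cli would be roughly the following; the /dev/nvme0 node is an assumption, and no such node exists while SPDK holds the controller in userspace:

    # feature id 7 = Number of Queues, the same Get Features the test exercises
    sudo nvme get-feature /dev/nvme0 -f 7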
00:27:28.211 00:27:28.211 real 0m0.271s 00:27:28.211 user 0m0.096s 00:27:28.211 sys 0m0.101s 00:27:28.211 02:50:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.211 ************************************ 00:27:28.211 END TEST nvme_err_injection 00:27:28.211 ************************************ 00:27:28.211 02:50:53 -- common/autotest_common.sh@10 -- # set +x 00:27:28.469 02:50:53 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:28.469 02:50:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:27:28.469 02:50:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.469 02:50:53 -- common/autotest_common.sh@10 -- # set +x 00:27:28.469 ************************************ 00:27:28.469 START TEST nvme_overhead 00:27:28.469 ************************************ 00:27:28.469 02:50:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:29.462 Initializing NVMe Controllers 00:27:29.462 Attached to 0000:00:06.0 00:27:29.462 Initialization complete. Launching workers. 00:27:29.462 submit (in ns) avg, min, max = 14736.3, 12482.7, 89023.6 00:27:29.462 complete (in ns) avg, min, max = 9432.8, 8427.3, 85896.4 00:27:29.462 00:27:29.462 Submit histogram 00:27:29.462 ================ 00:27:29.462 Range in us Cumulative Count 00:27:29.462 12.451 - 12.509: 0.0088% ( 1) 00:27:29.462 12.567 - 12.625: 0.0177% ( 1) 00:27:29.462 13.382 - 13.440: 0.0265% ( 1) 00:27:29.462 13.440 - 13.498: 0.3094% ( 32) 00:27:29.462 13.498 - 13.556: 1.4672% ( 131) 00:27:29.462 13.556 - 13.615: 5.7186% ( 481) 00:27:29.462 13.615 - 13.673: 14.5660% ( 1001) 00:27:29.462 13.673 - 13.731: 27.5765% ( 1472) 00:27:29.462 13.731 - 13.789: 39.6765% ( 1369) 00:27:29.462 13.789 - 13.847: 48.2323% ( 968) 00:27:29.462 13.847 - 13.905: 53.6062% ( 608) 00:27:29.462 13.905 - 13.964: 57.5040% ( 441) 00:27:29.462 13.964 - 14.022: 60.9952% ( 395) 00:27:29.462 14.022 - 14.080: 63.8943% ( 328) 00:27:29.462 14.080 - 14.138: 66.0686% ( 246) 00:27:29.462 14.138 - 14.196: 67.9866% ( 217) 00:27:29.462 14.196 - 14.255: 69.2858% ( 147) 00:27:29.462 14.255 - 14.313: 70.5763% ( 146) 00:27:29.462 14.313 - 14.371: 71.5574% ( 111) 00:27:29.462 14.371 - 14.429: 72.7064% ( 130) 00:27:29.462 14.429 - 14.487: 73.6786% ( 110) 00:27:29.462 14.487 - 14.545: 74.4387% ( 86) 00:27:29.462 14.545 - 14.604: 75.1989% ( 86) 00:27:29.462 14.604 - 14.662: 75.9060% ( 80) 00:27:29.462 14.662 - 14.720: 76.5158% ( 69) 00:27:29.462 14.720 - 14.778: 76.8605% ( 39) 00:27:29.462 14.778 - 14.836: 77.2406% ( 43) 00:27:29.462 14.836 - 14.895: 77.4969% ( 29) 00:27:29.462 14.895 - 15.011: 77.8946% ( 45) 00:27:29.462 15.011 - 15.127: 78.1863% ( 33) 00:27:29.462 15.127 - 15.244: 78.3808% ( 22) 00:27:29.462 15.244 - 15.360: 78.5045% ( 14) 00:27:29.462 15.360 - 15.476: 78.5929% ( 10) 00:27:29.462 15.476 - 15.593: 78.6636% ( 8) 00:27:29.462 15.593 - 15.709: 78.7432% ( 9) 00:27:29.462 15.709 - 15.825: 78.8404% ( 11) 00:27:29.462 15.825 - 15.942: 78.9199% ( 9) 00:27:29.462 15.942 - 16.058: 78.9730% ( 6) 00:27:29.462 16.058 - 16.175: 79.0348% ( 7) 00:27:29.462 16.175 - 16.291: 79.1409% ( 12) 00:27:29.462 16.291 - 16.407: 79.2116% ( 8) 00:27:29.462 16.407 - 16.524: 79.2735% ( 7) 00:27:29.462 16.524 - 16.640: 79.6358% ( 41) 00:27:29.462 16.640 - 16.756: 81.8632% ( 252) 00:27:29.462 16.756 - 16.873: 86.2825% ( 500) 00:27:29.462 16.873 - 16.989: 89.8621% ( 405) 00:27:29.462 16.989 - 17.105: 91.7094% ( 209) 00:27:29.462 17.105 
- 17.222: 92.9380% ( 139) 00:27:29.462 [~120 histogram bucket rows elided here: the submit histogram climbs from 93.6097% at 17.222 - 17.338 us to 100.0000% at 88.902 - 89.367 us, after which a 'Complete histogram' table (Range in us / Cumulative Count) begins at 8.378 - 8.436: 0.0088% and climbs to roughly 95% by 11.7 us] 00:27:29.463 11.753 -
11.811: 95.0062% ( 3) 00:27:29.463 11.811 - 11.869: 95.0327% ( 3) 00:27:29.463 11.869 - 11.927: 95.0946% ( 7) 00:27:29.463 11.927 - 11.985: 95.1741% ( 9) 00:27:29.463 11.985 - 12.044: 95.2360% ( 7) 00:27:29.463 12.044 - 12.102: 95.2979% ( 7) 00:27:29.463 12.102 - 12.160: 95.4923% ( 22) 00:27:29.463 12.160 - 12.218: 95.6868% ( 22) 00:27:29.463 12.218 - 12.276: 95.8105% ( 14) 00:27:29.463 12.276 - 12.335: 96.0491% ( 27) 00:27:29.463 12.335 - 12.393: 96.1640% ( 13) 00:27:29.463 12.393 - 12.451: 96.3231% ( 18) 00:27:29.463 12.451 - 12.509: 96.4115% ( 10) 00:27:29.463 12.509 - 12.567: 96.4822% ( 8) 00:27:29.463 12.567 - 12.625: 96.5795% ( 11) 00:27:29.463 12.625 - 12.684: 96.7474% ( 19) 00:27:29.463 12.684 - 12.742: 96.8711% ( 14) 00:27:29.463 12.742 - 12.800: 97.0126% ( 16) 00:27:29.463 12.800 - 12.858: 97.1893% ( 20) 00:27:29.463 12.858 - 12.916: 97.3042% ( 13) 00:27:29.463 12.916 - 12.975: 97.4103% ( 12) 00:27:29.463 12.975 - 13.033: 97.4810% ( 8) 00:27:29.463 13.033 - 13.091: 97.5694% ( 10) 00:27:29.463 13.091 - 13.149: 97.6047% ( 4) 00:27:29.463 13.149 - 13.207: 97.6136% ( 1) 00:27:29.463 13.265 - 13.324: 97.6931% ( 9) 00:27:29.463 13.324 - 13.382: 97.7462% ( 6) 00:27:29.463 13.382 - 13.440: 97.7903% ( 5) 00:27:29.463 13.440 - 13.498: 97.8434% ( 6) 00:27:29.463 13.498 - 13.556: 97.8699% ( 3) 00:27:29.463 13.556 - 13.615: 97.8876% ( 2) 00:27:29.463 13.615 - 13.673: 97.9406% ( 6) 00:27:29.463 13.673 - 13.731: 98.0202% ( 9) 00:27:29.463 13.731 - 13.789: 98.0643% ( 5) 00:27:29.463 13.789 - 13.847: 98.1262% ( 7) 00:27:29.463 13.847 - 13.905: 98.1969% ( 8) 00:27:29.463 13.905 - 13.964: 98.2411% ( 5) 00:27:29.463 13.964 - 14.022: 98.2588% ( 2) 00:27:29.463 14.022 - 14.080: 98.2853% ( 3) 00:27:29.463 14.080 - 14.138: 98.2941% ( 1) 00:27:29.463 14.138 - 14.196: 98.3383% ( 5) 00:27:29.463 14.196 - 14.255: 98.3649% ( 3) 00:27:29.463 14.255 - 14.313: 98.4267% ( 7) 00:27:29.463 14.313 - 14.371: 98.4621% ( 4) 00:27:29.463 14.371 - 14.429: 98.4886% ( 3) 00:27:29.463 14.429 - 14.487: 98.5240% ( 4) 00:27:29.463 14.487 - 14.545: 98.5416% ( 2) 00:27:29.463 14.545 - 14.604: 98.5505% ( 1) 00:27:29.723 14.604 - 14.662: 98.5681% ( 2) 00:27:29.723 14.662 - 14.720: 98.5947% ( 3) 00:27:29.723 14.720 - 14.778: 98.6300% ( 4) 00:27:29.723 14.778 - 14.836: 98.6389% ( 1) 00:27:29.723 14.836 - 14.895: 98.6477% ( 1) 00:27:29.723 14.895 - 15.011: 98.6742% ( 3) 00:27:29.723 15.011 - 15.127: 98.7272% ( 6) 00:27:29.723 15.127 - 15.244: 98.7626% ( 4) 00:27:29.723 15.244 - 15.360: 98.7979% ( 4) 00:27:29.723 15.360 - 15.476: 98.8421% ( 5) 00:27:29.723 15.476 - 15.593: 98.9129% ( 8) 00:27:29.723 15.593 - 15.709: 98.9305% ( 2) 00:27:29.723 15.709 - 15.825: 98.9659% ( 4) 00:27:29.723 15.825 - 15.942: 99.0012% ( 4) 00:27:29.723 15.942 - 16.058: 99.0366% ( 4) 00:27:29.723 16.058 - 16.175: 99.0543% ( 2) 00:27:29.723 16.175 - 16.291: 99.0719% ( 2) 00:27:29.723 16.291 - 16.407: 99.1073% ( 4) 00:27:29.723 16.407 - 16.524: 99.1338% ( 3) 00:27:29.723 16.524 - 16.640: 99.1603% ( 3) 00:27:29.723 16.640 - 16.756: 99.1957% ( 4) 00:27:29.723 16.756 - 16.873: 99.2045% ( 1) 00:27:29.723 16.873 - 16.989: 99.2222% ( 2) 00:27:29.723 16.989 - 17.105: 99.2487% ( 3) 00:27:29.723 17.105 - 17.222: 99.2664% ( 2) 00:27:29.723 17.222 - 17.338: 99.3018% ( 4) 00:27:29.723 17.338 - 17.455: 99.3371% ( 4) 00:27:29.723 17.455 - 17.571: 99.3813% ( 5) 00:27:29.723 17.571 - 17.687: 99.3901% ( 1) 00:27:29.723 17.687 - 17.804: 99.4255% ( 4) 00:27:29.723 17.804 - 17.920: 99.4432% ( 2) 00:27:29.723 17.920 - 18.036: 99.4697% ( 3) 00:27:29.723 18.036 - 18.153: 99.5050% ( 
4) 00:27:29.723 18.153 - 18.269: 99.5139% ( 1) 00:27:29.723 18.269 - 18.385: 99.5227% ( 1) 00:27:29.723 18.385 - 18.502: 99.5404% ( 2) 00:27:29.723 18.502 - 18.618: 99.5581% ( 2) 00:27:29.723 18.618 - 18.735: 99.5757% ( 2) 00:27:29.723 18.735 - 18.851: 99.5846% ( 1) 00:27:29.723 18.851 - 18.967: 99.6111% ( 3) 00:27:29.723 18.967 - 19.084: 99.6199% ( 1) 00:27:29.723 19.200 - 19.316: 99.6288% ( 1) 00:27:29.723 19.316 - 19.433: 99.6553% ( 3) 00:27:29.723 19.433 - 19.549: 99.6641% ( 1) 00:27:29.723 20.015 - 20.131: 99.6730% ( 1) 00:27:29.723 20.131 - 20.247: 99.6818% ( 1) 00:27:29.723 20.247 - 20.364: 99.6906% ( 1) 00:27:29.723 20.364 - 20.480: 99.6995% ( 1) 00:27:29.723 20.480 - 20.596: 99.7083% ( 1) 00:27:29.723 20.713 - 20.829: 99.7172% ( 1) 00:27:29.723 21.062 - 21.178: 99.7260% ( 1) 00:27:29.723 21.178 - 21.295: 99.7348% ( 1) 00:27:29.723 21.295 - 21.411: 99.7437% ( 1) 00:27:29.723 21.644 - 21.760: 99.7525% ( 1) 00:27:29.723 21.760 - 21.876: 99.7614% ( 1) 00:27:29.723 22.109 - 22.225: 99.7702% ( 1) 00:27:29.723 22.342 - 22.458: 99.7879% ( 2) 00:27:29.723 22.458 - 22.575: 99.7967% ( 1) 00:27:29.723 22.575 - 22.691: 99.8056% ( 1) 00:27:29.723 22.807 - 22.924: 99.8232% ( 2) 00:27:29.723 23.156 - 23.273: 99.8409% ( 2) 00:27:29.723 24.320 - 24.436: 99.8497% ( 1) 00:27:29.723 25.135 - 25.251: 99.8586% ( 1) 00:27:29.723 25.716 - 25.833: 99.8674% ( 1) 00:27:29.723 25.833 - 25.949: 99.8763% ( 1) 00:27:29.723 26.182 - 26.298: 99.8851% ( 1) 00:27:29.723 26.298 - 26.415: 99.8939% ( 1) 00:27:29.723 27.695 - 27.811: 99.9028% ( 1) 00:27:29.723 28.975 - 29.091: 99.9205% ( 2) 00:27:29.723 29.440 - 29.556: 99.9293% ( 1) 00:27:29.723 29.789 - 30.022: 99.9381% ( 1) 00:27:29.723 30.022 - 30.255: 99.9470% ( 1) 00:27:29.723 37.236 - 37.469: 99.9558% ( 1) 00:27:29.723 39.564 - 39.796: 99.9646% ( 1) 00:27:29.723 43.985 - 44.218: 99.9735% ( 1) 00:27:29.723 52.596 - 52.829: 99.9823% ( 1) 00:27:29.723 74.473 - 74.938: 99.9912% ( 1) 00:27:29.723 85.644 - 86.109: 100.0000% ( 1) 00:27:29.723 00:27:29.723 00:27:29.723 real 0m1.260s 00:27:29.723 user 0m1.117s 00:27:29.723 sys 0m0.073s 00:27:29.723 ************************************ 00:27:29.723 END TEST nvme_overhead 00:27:29.723 ************************************ 00:27:29.723 02:50:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.723 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:27:29.723 02:50:54 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:29.723 02:50:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:29.723 02:50:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.723 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:27:29.723 ************************************ 00:27:29.723 START TEST nvme_arbitration 00:27:29.723 ************************************ 00:27:29.723 02:50:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:33.003 Initializing NVMe Controllers 00:27:33.003 Attached to 0000:00:06.0 00:27:33.003 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:27:33.003 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:27:33.003 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:27:33.003 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:27:33.003 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:27:33.003 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 
100000 -i 0 00:27:33.003 Initialization complete. Launching workers. 00:27:33.003 Starting thread on core 1 with urgent priority queue 00:27:33.003 Starting thread on core 2 with urgent priority queue 00:27:33.003 Starting thread on core 3 with urgent priority queue 00:27:33.003 Starting thread on core 0 with urgent priority queue 00:27:33.003 QEMU NVMe Ctrl (12340 ) core 0: 6930.33 IO/s 14.43 secs/100000 ios 00:27:33.003 QEMU NVMe Ctrl (12340 ) core 1: 7010.33 IO/s 14.26 secs/100000 ios 00:27:33.004 QEMU NVMe Ctrl (12340 ) core 2: 4027.67 IO/s 24.83 secs/100000 ios 00:27:33.004 QEMU NVMe Ctrl (12340 ) core 3: 3934.33 IO/s 25.42 secs/100000 ios 00:27:33.004 ======================================================== 00:27:33.004 00:27:33.004 00:27:33.004 real 0m3.333s 00:27:33.004 user 0m9.157s 00:27:33.004 sys 0m0.146s 00:27:33.004 02:50:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.004 02:50:57 -- common/autotest_common.sh@10 -- # set +x 00:27:33.004 ************************************ 00:27:33.004 END TEST nvme_arbitration 00:27:33.004 ************************************ 00:27:33.004 02:50:57 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:27:33.004 02:50:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:33.004 02:50:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.004 02:50:57 -- common/autotest_common.sh@10 -- # set +x 00:27:33.004 ************************************ 00:27:33.004 START TEST nvme_single_aen 00:27:33.004 ************************************ 00:27:33.004 02:50:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:27:33.004 [2024-07-11 02:50:58.008137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:33.004 [2024-07-11 02:50:58.008277] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.262 [2024-07-11 02:50:58.182756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:33.262 Asynchronous Event Request test 00:27:33.262 Attached to 0000:00:06.0 00:27:33.262 Reset controller to setup AER completions for this process 00:27:33.262 Registering asynchronous event callbacks... 00:27:33.262 Getting orig temperature thresholds of all controllers 00:27:33.262 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:33.262 Setting all controllers temperature threshold low to trigger AER 00:27:33.262 Waiting for all controllers temperature threshold to be set lower 00:27:33.262 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:33.262 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:27:33.262 Waiting for all controllers to trigger AER and reset threshold 00:27:33.262 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:33.262 Cleaning up... 
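[Sketch, not from the captured log] The AER test above triggers an asynchronous event by dropping the temperature threshold (NVMe feature 0x4) below the controller's reported 323 Kelvin, then resets it once the event fires. A hand-driven kernel-side equivalent with nvme-cli would be approximately the following; the device node and the exact value are illustrative, since the log shows the test doing this through SPDK's admin queue instead:

    # lower the composite temperature threshold below the current reading
    sudo nvme set-feature /dev/nvme0 -f 4 -v 0x140   # 0x140 = 320 K, below the 323 K reading
    # read it back; the controller should have queued an AER by now
    sudo nvme get-feature /dev/nvme0 -f 4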
00:27:33.262 00:27:33.262 real 0m0.246s 00:27:33.262 user 0m0.060s 00:27:33.262 sys 0m0.106s 00:27:33.262 02:50:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.262 02:50:58 -- common/autotest_common.sh@10 -- # set +x 00:27:33.262 ************************************ 00:27:33.262 END TEST nvme_single_aen 00:27:33.262 ************************************ 00:27:33.262 02:50:58 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:27:33.262 02:50:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.262 02:50:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.262 02:50:58 -- common/autotest_common.sh@10 -- # set +x 00:27:33.262 ************************************ 00:27:33.262 START TEST nvme_doorbell_aers 00:27:33.262 ************************************ 00:27:33.262 02:50:58 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:27:33.262 02:50:58 -- nvme/nvme.sh@70 -- # bdfs=() 00:27:33.262 02:50:58 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:27:33.262 02:50:58 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:27:33.262 02:50:58 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:27:33.262 02:50:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:33.262 02:50:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:33.262 02:50:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:33.262 02:50:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:33.262 02:50:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:33.262 02:50:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:33.262 02:50:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:33.262 02:50:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:27:33.262 02:50:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:27:33.520 [2024-07-11 02:50:58.562690] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151746) is not found. Dropping the request. 00:27:43.484 Executing: test_write_invalid_db 00:27:43.484 Waiting for AER completion... 00:27:43.484 Failure: test_write_invalid_db 00:27:43.484 00:27:43.484 Executing: test_invalid_db_write_overflow_sq 00:27:43.484 Waiting for AER completion... 00:27:43.484 Failure: test_invalid_db_write_overflow_sq 00:27:43.484 00:27:43.484 Executing: test_invalid_db_write_overflow_cq 00:27:43.484 Waiting for AER completion... 
00:27:43.484 Failure: test_invalid_db_write_overflow_cq 00:27:43.484 00:27:43.484 00:27:43.484 real 0m10.105s 00:27:43.484 user 0m8.538s 00:27:43.484 sys 0m1.505s 00:27:43.484 02:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.484 ************************************ 00:27:43.484 END TEST nvme_doorbell_aers 00:27:43.484 ************************************ 00:27:43.484 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:43.484 02:51:08 -- nvme/nvme.sh@97 -- # uname 00:27:43.484 02:51:08 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:27:43.484 02:51:08 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:27:43.484 02:51:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:27:43.484 02:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.484 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:43.484 ************************************ 00:27:43.484 START TEST nvme_multi_aen 00:27:43.484 ************************************ 00:27:43.484 02:51:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:27:43.484 [2024-07-11 02:51:08.461908] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:43.484 [2024-07-11 02:51:08.462212] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.743 [2024-07-11 02:51:08.648462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:43.743 [2024-07-11 02:51:08.648543] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151746) is not found. Dropping the request. 00:27:43.743 [2024-07-11 02:51:08.649115] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151746) is not found. Dropping the request. 00:27:43.743 [2024-07-11 02:51:08.649268] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151746) is not found. Dropping the request. 00:27:43.743 [2024-07-11 02:51:08.656628] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:43.743 Child process pid: 151946 00:27:43.743 [2024-07-11 02:51:08.656891] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.002 [Child] Asynchronous Event Request test 00:27:44.002 [Child] Attached to 0000:00:06.0 00:27:44.002 [Child] Registering asynchronous event callbacks... 00:27:44.002 [Child] Getting orig temperature thresholds of all controllers 00:27:44.002 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:44.002 [Child] Waiting for all controllers to trigger AER and reset threshold 00:27:44.002 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:44.002 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:44.002 [Child] Cleaning up... 00:27:44.002 Asynchronous Event Request test 00:27:44.002 Attached to 0000:00:06.0 00:27:44.002 Reset controller to setup AER completions for this process 00:27:44.002 Registering asynchronous event callbacks... 
00:27:44.002 Getting orig temperature thresholds of all controllers 00:27:44.002 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:44.002 Setting all controllers temperature threshold low to trigger AER 00:27:44.002 Waiting for all controllers temperature threshold to be set lower 00:27:44.002 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:44.002 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:27:44.002 Waiting for all controllers to trigger AER and reset threshold 00:27:44.002 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:44.002 Cleaning up... 00:27:44.002 00:27:44.002 real 0m0.548s 00:27:44.002 user 0m0.150s 00:27:44.002 sys 0m0.219s 00:27:44.002 02:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.002 ************************************ 00:27:44.002 END TEST nvme_multi_aen 00:27:44.002 ************************************ 00:27:44.002 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:44.002 02:51:09 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:27:44.002 02:51:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:44.002 02:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.002 02:51:09 -- common/autotest_common.sh@10 -- # set +x 00:27:44.002 ************************************ 00:27:44.002 START TEST nvme_startup 00:27:44.002 ************************************ 00:27:44.002 02:51:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:27:44.261 Initializing NVMe Controllers 00:27:44.261 Attached to 0000:00:06.0 00:27:44.261 Initialization complete. 00:27:44.261 Time used:204620.594 (us). 00:27:44.261 00:27:44.261 real 0m0.284s 00:27:44.261 user 0m0.110s 00:27:44.261 sys 0m0.085s 00:27:44.261 02:51:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.261 02:51:09 -- common/autotest_common.sh@10 -- # set +x 00:27:44.261 ************************************ 00:27:44.261 END TEST nvme_startup 00:27:44.261 ************************************ 00:27:44.261 02:51:09 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:27:44.261 02:51:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:44.261 02:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.261 02:51:09 -- common/autotest_common.sh@10 -- # set +x 00:27:44.520 ************************************ 00:27:44.520 START TEST nvme_multi_secondary 00:27:44.520 ************************************ 00:27:44.520 02:51:09 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:27:44.520 02:51:09 -- nvme/nvme.sh@52 -- # pid0=152013 00:27:44.520 02:51:09 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:27:44.520 02:51:09 -- nvme/nvme.sh@54 -- # pid1=152014 00:27:44.520 02:51:09 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:27:44.520 02:51:09 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:27:47.817 Initializing NVMe Controllers 00:27:47.817 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:47.817 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:27:47.817 Initialization complete. Launching workers. 
00:27:47.817 ======================================================== 00:27:47.817 Latency(us) 00:27:47.817 Device Information : IOPS MiB/s Average min max 00:27:47.817 PCIE (0000:00:06.0) NSID 1 from core 2: 14640.00 57.19 1092.66 150.38 17167.28 00:27:47.817 ======================================================== 00:27:47.817 Total : 14640.00 57.19 1092.66 150.38 17167.28 00:27:47.817 00:27:47.817 02:51:12 -- nvme/nvme.sh@56 -- # wait 152013 00:27:48.075 Initializing NVMe Controllers 00:27:48.075 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:48.075 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:27:48.075 Initialization complete. Launching workers. 00:27:48.075 ======================================================== 00:27:48.075 Latency(us) 00:27:48.075 Device Information : IOPS MiB/s Average min max 00:27:48.076 PCIE (0000:00:06.0) NSID 1 from core 1: 33918.07 132.49 471.43 137.30 2638.67 00:27:48.076 ======================================================== 00:27:48.076 Total : 33918.07 132.49 471.43 137.30 2638.67 00:27:48.076 00:27:49.976 Initializing NVMe Controllers 00:27:49.976 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:49.976 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:49.976 Initialization complete. Launching workers. 00:27:49.976 ======================================================== 00:27:49.976 Latency(us) 00:27:49.976 Device Information : IOPS MiB/s Average min max 00:27:49.976 PCIE (0000:00:06.0) NSID 1 from core 0: 41584.59 162.44 384.43 89.44 1456.44 00:27:49.976 ======================================================== 00:27:49.976 Total : 41584.59 162.44 384.43 89.44 1456.44 00:27:49.976 00:27:49.976 02:51:14 -- nvme/nvme.sh@57 -- # wait 152014 00:27:49.976 02:51:14 -- nvme/nvme.sh@61 -- # pid0=152106 00:27:49.976 02:51:14 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:27:49.976 02:51:14 -- nvme/nvme.sh@63 -- # pid1=152107 00:27:49.976 02:51:14 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:27:49.976 02:51:14 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:27:53.260 Initializing NVMe Controllers 00:27:53.260 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:53.260 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:27:53.260 Initialization complete. Launching workers. 00:27:53.260 ======================================================== 00:27:53.260 Latency(us) 00:27:53.260 Device Information : IOPS MiB/s Average min max 00:27:53.260 PCIE (0000:00:06.0) NSID 1 from core 1: 33740.32 131.80 473.91 122.51 16692.98 00:27:53.260 ======================================================== 00:27:53.260 Total : 33740.32 131.80 473.91 122.51 16692.98 00:27:53.260 00:27:53.260 Initializing NVMe Controllers 00:27:53.260 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:53.260 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:53.260 Initialization complete. Launching workers. 
00:27:53.260 ======================================================== 00:27:53.260 Latency(us) 00:27:53.260 Device Information : IOPS MiB/s Average min max 00:27:53.260 PCIE (0000:00:06.0) NSID 1 from core 0: 33918.97 132.50 471.41 114.40 1802.94 00:27:53.260 ======================================================== 00:27:53.260 Total : 33918.97 132.50 471.41 114.40 1802.94 00:27:53.260 00:27:55.161 Initializing NVMe Controllers 00:27:55.161 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:55.161 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:27:55.161 Initialization complete. Launching workers. 00:27:55.161 ======================================================== 00:27:55.161 Latency(us) 00:27:55.161 Device Information : IOPS MiB/s Average min max 00:27:55.161 PCIE (0000:00:06.0) NSID 1 from core 2: 17584.37 68.69 909.34 126.83 28836.17 00:27:55.161 ======================================================== 00:27:55.161 Total : 17584.37 68.69 909.34 126.83 28836.17 00:27:55.161 00:27:55.161 02:51:20 -- nvme/nvme.sh@65 -- # wait 152106 00:27:55.161 02:51:20 -- nvme/nvme.sh@66 -- # wait 152107 00:27:55.161 00:27:55.161 real 0m10.848s 00:27:55.161 user 0m18.517s 00:27:55.161 sys 0m0.642s 00:27:55.161 02:51:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.161 02:51:20 -- common/autotest_common.sh@10 -- # set +x 00:27:55.161 ************************************ 00:27:55.161 END TEST nvme_multi_secondary 00:27:55.161 ************************************ 00:27:55.161 02:51:20 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:27:55.161 02:51:20 -- nvme/nvme.sh@102 -- # kill_stub 00:27:55.161 02:51:20 -- common/autotest_common.sh@1065 -- # [[ -e /proc/151295 ]] 00:27:55.161 02:51:20 -- common/autotest_common.sh@1066 -- # kill 151295 00:27:55.161 02:51:20 -- common/autotest_common.sh@1067 -- # wait 151295 00:27:55.420 [2024-07-11 02:51:20.495994] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151945) is not found. Dropping the request. 00:27:55.420 [2024-07-11 02:51:20.496198] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151945) is not found. Dropping the request. 00:27:55.420 [2024-07-11 02:51:20.496275] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151945) is not found. Dropping the request. 00:27:55.420 [2024-07-11 02:51:20.496332] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151945) is not found. Dropping the request. 00:27:55.678 02:51:20 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:27:55.678 02:51:20 -- common/autotest_common.sh@1073 -- # echo 2 00:27:55.678 02:51:20 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:55.678 02:51:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.678 02:51:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.678 02:51:20 -- common/autotest_common.sh@10 -- # set +x 00:27:55.678 ************************************ 00:27:55.678 START TEST bdev_nvme_reset_stuck_adm_cmd 00:27:55.678 ************************************ 00:27:55.678 02:51:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:55.679 * Looking for test storage... 
00:27:55.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:27:55.679 02:51:20 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:55.679 02:51:20 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:55.679 02:51:20 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:55.679 02:51:20 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:55.679 02:51:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:55.679 02:51:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:55.679 02:51:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:55.679 02:51:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:55.679 02:51:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:55.679 02:51:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:55.679 02:51:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:55.679 02:51:20 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=152261 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 152261 00:27:55.679 02:51:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:27:55.679 02:51:20 -- common/autotest_common.sh@819 -- # '[' -z 152261 ']' 00:27:55.679 02:51:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.679 02:51:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:55.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.679 02:51:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.679 02:51:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:55.679 02:51:20 -- common/autotest_common.sh@10 -- # set +x 00:27:55.937 [2024-07-11 02:51:20.788289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:55.937 [2024-07-11 02:51:20.788779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152261 ] 00:27:55.937 [2024-07-11 02:51:20.964663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.195 [2024-07-11 02:51:21.060881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.195 [2024-07-11 02:51:21.061327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.195 [2024-07-11 02:51:21.061435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.195 [2024-07-11 02:51:21.061559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.195 [2024-07-11 02:51:21.061554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.763 02:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:56.763 02:51:21 -- common/autotest_common.sh@852 -- # return 0 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:27:56.763 02:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.763 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:27:56.763 nvme0n1 00:27:56.763 02:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_yi4qh.txt 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:27:56.763 02:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.763 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:27:56.763 true 00:27:56.763 02:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720666281 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=152280 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:27:56.763 02:51:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:27:58.666 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:58.666 02:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.666 02:51:23 -- common/autotest_common.sh@10 -- # set +x 00:27:58.666 [2024-07-11 02:51:23.742213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:58.666 [2024-07-11 02:51:23.742669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:58.666 [2024-07-11 02:51:23.742773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:58.666 [2024-07-11 02:51:23.742828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-11 02:51:23.744527] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:58.666 02:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.666 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 152280 00:27:58.666 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 152280 00:27:58.666 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 152280 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.923 02:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.923 02:51:23 -- common/autotest_common.sh@10 -- # set +x 00:27:58.923 02:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_yi4qh.txt 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_yi4qh.txt 00:27:58.923 02:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 152261 00:27:58.923 02:51:23 -- common/autotest_common.sh@926 -- # '[' -z 152261 ']' 00:27:58.923 02:51:23 -- common/autotest_common.sh@930 -- # kill -0 152261 00:27:58.923 02:51:23 -- common/autotest_common.sh@931 -- # uname 00:27:58.923 
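[Sketch, not from the captured log] The base64_decode_bits helper traced above unpacks the base64-encoded completion (cpl) returned by bdev_nvme_send_cmd so the script can compare the status code (sc) and status code type (sct) against the injected values. Rebuilt from the traced fragments, its core is roughly this; the bit-slicing over arguments $2/$3 is not visible in the trace and is therefore left elided:

    base64_decode_bits() {
        local bin_array status
        # decode the base64 cpl blob into one "0xNN" token per byte
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # ...bit-range extraction over "$2"/"$3" happens here (not shown in the trace)...
        printf '0x%x\n' "$status"
    }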
02:51:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:58.923 02:51:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152261 00:27:58.923 02:51:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:58.924 killing process with pid 152261 00:27:58.924 02:51:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:58.924 02:51:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152261' 00:27:58.924 02:51:23 -- common/autotest_common.sh@945 -- # kill 152261 00:27:58.924 02:51:23 -- common/autotest_common.sh@950 -- # wait 152261 00:27:59.489 02:51:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:27:59.489 02:51:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:27:59.489 00:27:59.489 real 0m3.697s 00:27:59.489 user 0m13.246s 00:27:59.489 sys 0m0.506s 00:27:59.489 ************************************ 00:27:59.489 02:51:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.489 02:51:24 -- common/autotest_common.sh@10 -- # set +x 00:27:59.489 END TEST bdev_nvme_reset_stuck_adm_cmd 00:27:59.489 ************************************ 00:27:59.489 02:51:24 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:27:59.489 02:51:24 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:27:59.489 02:51:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:59.489 02:51:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.489 02:51:24 -- common/autotest_common.sh@10 -- # set +x 00:27:59.489 ************************************ 00:27:59.489 START TEST nvme_fio 00:27:59.489 ************************************ 00:27:59.489 02:51:24 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:27:59.490 02:51:24 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:27:59.490 02:51:24 -- nvme/nvme.sh@32 -- # ran_fio=false 00:27:59.490 02:51:24 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:27:59.490 02:51:24 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:27:59.490 02:51:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:59.490 02:51:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:59.490 02:51:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:59.490 02:51:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:59.490 02:51:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:59.490 02:51:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:59.490 02:51:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:59.490 02:51:24 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:27:59.490 02:51:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:27:59.490 02:51:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:27:59.490 02:51:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:27:59.748 02:51:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:27:59.748 02:51:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:27:59.748 02:51:24 -- nvme/nvme.sh@41 -- # bs=4096 00:27:59.748 02:51:24 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:27:59.748 
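[Sketch, not from the captured log] The fio_plugin steps that follow boil down to running stock fio with SPDK's external ioengine preloaded. Stripped of the ASan library handling traced below, the effective invocation is roughly:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

Note the dots in traddr: a fio filename cannot carry ':' characters, so the PCI address 0000:00:06.0 is written with '.' separators, exactly as the trace shows.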
02:51:24 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:27:59.748 02:51:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:59.748 02:51:24 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:27:59.748 02:51:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:59.748 02:51:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:59.748 02:51:24 -- common/autotest_common.sh@1320 -- # shift 00:27:59.748 02:51:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:59.748 02:51:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:59.748 02:51:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:59.748 02:51:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:59.748 02:51:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:59.748 02:51:24 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:27:59.748 02:51:24 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:27:59.748 02:51:24 -- common/autotest_common.sh@1326 -- # break 00:27:59.748 02:51:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:59.748 02:51:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:28:00.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:00.007 fio-3.35 00:28:00.007 Starting 1 thread 00:28:03.377 00:28:03.377 test: (groupid=0, jobs=1): err= 0: pid=152423: Thu Jul 11 02:51:28 2024 00:28:03.377 read: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec) 00:28:03.377 slat (usec): min=3, max=112, avg= 5.56, stdev= 2.94 00:28:03.377 clat (usec): min=302, max=10403, avg=3817.52, stdev=453.48 00:28:03.377 lat (usec): min=307, max=10515, avg=3823.08, stdev=453.81 00:28:03.377 clat percentiles (usec): 00:28:03.377 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3458], 00:28:03.377 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3916], 00:28:03.377 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4555], 00:28:03.377 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6390], 99.95th=[ 8455], 00:28:03.377 | 99.99th=[10159] 00:28:03.377 bw ( KiB/s): min=63024, max=72247, per=100.00%, avg=68090.33, stdev=4678.31, samples=3 00:28:03.377 iops : min=15756, max=18061, avg=17022.33, stdev=1169.24, samples=3 00:28:03.377 write: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(131MiB/2001msec); 0 zone resets 00:28:03.377 slat (nsec): min=3977, max=50296, avg=5823.76, stdev=3040.55 00:28:03.377 clat (usec): min=330, max=10208, avg=3832.92, stdev=456.11 00:28:03.377 lat (usec): min=336, max=10226, avg=3838.74, stdev=456.39 00:28:03.377 clat percentiles (usec): 00:28:03.377 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3458], 00:28:03.377 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:28:03.377 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:28:03.377 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 6849], 99.95th=[ 8848], 
00:28:03.377 | 99.99th=[ 9896] 00:28:03.377 bw ( KiB/s): min=62280, max=72263, per=100.00%, avg=67911.67, stdev=5113.17, samples=3 00:28:03.377 iops : min=15570, max=18065, avg=16977.67, stdev=1277.97, samples=3 00:28:03.377 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:03.377 lat (msec) : 2=0.06%, 4=67.04%, 10=32.86%, 20=0.01% 00:28:03.377 cpu : usr=99.95%, sys=0.00%, ctx=10, majf=0, minf=38 00:28:03.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:03.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:03.378 issued rwts: total=33350,33416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:03.378 00:28:03.378 Run status group 0 (all jobs): 00:28:03.378 READ: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:28:03.378 WRITE: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=131MiB (137MB), run=2001-2001msec 00:28:03.378 ----------------------------------------------------- 00:28:03.378 Suppressions used: 00:28:03.378 count bytes template 00:28:03.378 1 32 /usr/src/fio/parse.c 00:28:03.378 ----------------------------------------------------- 00:28:03.378 00:28:03.378 02:51:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:03.378 02:51:28 -- nvme/nvme.sh@46 -- # true 00:28:03.378 00:28:03.378 real 0m4.090s 00:28:03.378 user 0m3.447s 00:28:03.378 sys 0m0.301s 00:28:03.378 02:51:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.378 02:51:28 -- common/autotest_common.sh@10 -- # set +x 00:28:03.378 ************************************ 00:28:03.378 END TEST nvme_fio 00:28:03.378 ************************************ 00:28:03.637 00:28:03.637 real 0m44.010s 00:28:03.637 user 1m56.726s 00:28:03.637 sys 0m7.304s 00:28:03.637 02:51:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.637 02:51:28 -- common/autotest_common.sh@10 -- # set +x 00:28:03.637 ************************************ 00:28:03.637 END TEST nvme 00:28:03.637 ************************************ 00:28:03.637 02:51:28 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:28:03.637 02:51:28 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:03.637 02:51:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:03.637 02:51:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.637 02:51:28 -- common/autotest_common.sh@10 -- # set +x 00:28:03.637 ************************************ 00:28:03.637 START TEST nvme_scc 00:28:03.637 ************************************ 00:28:03.637 02:51:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:03.637 * Looking for test storage... 
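The fio summary above is internally consistent: at a 4 KiB block size, the reported ~16.7k read IOPS and 65.1 MiB/s (68.3 MB/s) bandwidth describe the same rate, since bandwidth = IOPS x block size. A quick check of that arithmetic, with the numbers copied from the run:

    # 16.7k IOPS at 4096-byte blocks ~= 68.4 MB/s (65.2 MiB/s), matching the report.
    awk 'BEGIN { iops = 16700; bs = 4096; bw = iops * bs;
                 printf "%.1f MB/s = %.1f MiB/s\n", bw / 1e6, bw / 1048576 }'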
00:28:03.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:03.637 02:51:28 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:03.637 02:51:28 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:03.637 02:51:28 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:28:03.637 02:51:28 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:03.637 02:51:28 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:03.637 02:51:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.637 02:51:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.637 02:51:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.637 02:51:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:03.637 02:51:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:03.637 02:51:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:03.637 02:51:28 -- paths/export.sh@5 -- # export PATH 00:28:03.637 02:51:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:03.637 02:51:28 -- nvme/functions.sh@10 -- # ctrls=() 00:28:03.637 02:51:28 -- nvme/functions.sh@10 -- # declare -A ctrls 00:28:03.637 02:51:28 -- nvme/functions.sh@11 -- # nvmes=() 00:28:03.637 02:51:28 -- nvme/functions.sh@11 -- # declare -A nvmes 00:28:03.637 02:51:28 -- nvme/functions.sh@12 -- # bdfs=() 00:28:03.637 02:51:28 -- nvme/functions.sh@12 -- # declare -A bdfs 00:28:03.637 02:51:28 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:28:03.637 02:51:28 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:28:03.637 02:51:28 -- nvme/functions.sh@14 -- # nvme_name= 00:28:03.637 02:51:28 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.637 02:51:28 -- nvme/nvme_scc.sh@12 -- # uname 00:28:03.637 02:51:28 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:28:03.637 02:51:28 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
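Before the SCC test proper, nvme_scc.sh rebinds the device via setup.sh and calls scan_nvme_ctrls, which produces the long dump that follows: for each controller under /sys/class/nvme it runs nvme-cli's id-ctrl (and id-ns per namespace) and evals every "field : value" pair into an associative array, giving keys like nvme0[oncs] and nvme0n1[lbaf0]. A simplified sketch of that parse loop, condensed from the functions.sh trace below (error handling omitted):

    # Rough shape of the controller scan traced below.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # field name, e.g. "oncs" or "lbaf0"
        val=${val# }                # field value, e.g. "0x15d"
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "oncs=${nvme0[oncs]}"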
00:28:03.637 02:51:28 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:03.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:03.896 Waiting for block devices as requested 00:28:03.896 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:04.156 02:51:29 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:28:04.156 02:51:29 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:28:04.156 02:51:29 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:28:04.156 02:51:29 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:28:04.156 02:51:29 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:28:04.156 02:51:29 -- scripts/common.sh@15 -- # local i 00:28:04.156 02:51:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:28:04.156 02:51:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:04.156 02:51:29 -- scripts/common.sh@24 -- # return 0 00:28:04.156 02:51:29 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:28:04.156 02:51:29 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:28:04.156 02:51:29 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@18 -- # shift 00:28:04.156 02:51:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.156 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.156 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.156 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 
00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:04.157 02:51:29 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.157 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.157 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.157 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- 
# read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:28:04.158 
02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:28:04.158 
02:51:29 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.158 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.158 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.158 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 
02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:28:04.159 02:51:29 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:28:04.159 02:51:29 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:28:04.159 02:51:29 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:28:04.159 02:51:29 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@18 -- # shift 00:28:04.159 02:51:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 
00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:28:04.159 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.159 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.159 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 
02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:04.160 02:51:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # IFS=: 00:28:04.160 02:51:29 -- nvme/functions.sh@21 -- # read -r reg val 00:28:04.160 02:51:29 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:28:04.160 02:51:29 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:28:04.160 02:51:29 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:28:04.160 02:51:29 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:28:04.160 02:51:29 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:28:04.160 02:51:29 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:28:04.160 02:51:29 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:28:04.160 02:51:29 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:28:04.160 02:51:29 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:28:04.160 02:51:29 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:28:04.160 02:51:29 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:28:04.160 02:51:29 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:28:04.160 02:51:29 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:28:04.160 02:51:29 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:28:04.160 02:51:29 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:28:04.160 02:51:29 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:28:04.160 02:51:29 -- nvme/functions.sh@76 -- # echo 0x15d 00:28:04.160 02:51:29 -- nvme/functions.sh@184 -- # oncs=0x15d 00:28:04.160 02:51:29 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:28:04.160 02:51:29 -- nvme/functions.sh@197 -- # echo nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:28:04.160 02:51:29 -- nvme/functions.sh@206 -- # echo nvme0 00:28:04.160 02:51:29 -- nvme/functions.sh@207 -- # return 0 00:28:04.160 02:51:29 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:28:04.160 02:51:29 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:28:04.160 02:51:29 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:04.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:04.678 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:05.612 02:51:30 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:28:05.612 02:51:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:05.612 02:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:05.612 02:51:30 -- common/autotest_common.sh@10 -- # set +x 00:28:05.612 ************************************ 00:28:05.612 START TEST nvme_simple_copy 00:28:05.612 ************************************ 00:28:05.612 02:51:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:28:05.869 Initializing NVMe Controllers 00:28:05.869 Attaching to 0000:00:06.0 00:28:05.869 Controller supports SCC. Attached to 0000:00:06.0 00:28:05.869 Namespace ID: 1 size: 5GB 00:28:05.869 Initialization complete. 00:28:05.869 00:28:05.869 Controller QEMU NVMe Ctrl (12340 ) 00:28:05.869 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:28:05.869 Namespace Block Size:4096 00:28:05.869 Writing LBAs 0 to 63 with Random Data 00:28:05.869 Copied LBAs from 0 - 63 to the Destination LBA 256 00:28:05.869 LBAs matching Written Data: 64 00:28:05.869 00:28:05.869 real 0m0.248s 00:28:05.869 user 0m0.082s 00:28:05.869 sys 0m0.067s 00:28:05.869 02:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.869 ************************************ 00:28:05.869 END TEST nvme_simple_copy 00:28:05.869 ************************************ 00:28:05.869 02:51:30 -- common/autotest_common.sh@10 -- # set +x 00:28:05.869 00:28:05.869 real 0m2.417s 00:28:05.869 user 0m0.715s 00:28:05.869 sys 0m1.549s 00:28:05.869 02:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.869 02:51:30 -- common/autotest_common.sh@10 -- # set +x 00:28:05.869 ************************************ 00:28:05.869 END TEST nvme_scc 00:28:05.869 ************************************ 00:28:06.134 02:51:30 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:28:06.134 02:51:30 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:28:06.134 02:51:30 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:28:06.134 02:51:30 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:28:06.134 02:51:30 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:28:06.134 02:51:30 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:06.134 02:51:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:06.134 02:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:06.134 02:51:30 -- common/autotest_common.sh@10 -- # set +x 00:28:06.134 ************************************ 00:28:06.134 START TEST nvme_rpc 00:28:06.134 ************************************ 00:28:06.134 02:51:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:06.134 * Looking for test storage... 
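The controller qualified for the simple-copy test above because of the ONCS probe in the scan: ctrl_has_scc reads nvme0[oncs]=0x15d and tests bit 8, the Copy command bit of the NVMe Optional NVM Command Support field. 0x15d is binary 1 0101 1101, so bit 8 (0x100) is set, nvme0 is echoed back as copy-capable, and the simple_copy app then confirms "Controller supports SCC." The same bit test on the shell, using the value from the scan:

    oncs=0x15d
    if (( oncs & 1 << 8 )); then
        echo "controller supports the Copy command"   # true for nvme0 above
    fi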
00:28:06.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:28:06.134 02:51:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:06.134 02:51:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:28:06.134 02:51:31 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:28:06.134 02:51:31 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:28:06.134 02:51:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:06.134 02:51:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:06.134 02:51:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:06.134 02:51:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:06.134 02:51:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:06.134 02:51:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:06.134 02:51:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:06.134 02:51:31 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=152919 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:28:06.134 02:51:31 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 152919 00:28:06.134 02:51:31 -- common/autotest_common.sh@819 -- # '[' -z 152919 ']' 00:28:06.134 02:51:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.134 02:51:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:06.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.134 02:51:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.134 02:51:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:06.134 02:51:31 -- common/autotest_common.sh@10 -- # set +x 00:28:06.134 [2024-07-11 02:51:31.186000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
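[annotation] The get_first_nvme_bdf helper being traced here reduces to: ask scripts/gen_nvme.sh for the generated NVMe bdev config, pull every traddr out with jq, and echo the first one. A minimal standalone sketch (paths as in this run):

    rootdir=/home/vagrant/spdk_repo/spdk

    get_first_nvme_bdf() {
        local bdfs
        # gen_nvme.sh emits one attach_controller entry per bound NVMe device
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # nothing bound to a userspace driver
        echo "${bdfs[0]}"                   # 0000:00:06.0 in this run
    }

    bdf=$(get_first_nvme_bdf)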
00:28:06.134 [2024-07-11 02:51:31.186250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152919 ] 00:28:06.392 [2024-07-11 02:51:31.338154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:06.392 [2024-07-11 02:51:31.428022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:06.392 [2024-07-11 02:51:31.428395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.392 [2024-07-11 02:51:31.428405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.322 02:51:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.322 02:51:32 -- common/autotest_common.sh@852 -- # return 0 00:28:07.322 02:51:32 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:28:07.322 Nvme0n1 00:28:07.322 02:51:32 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:28:07.322 02:51:32 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:28:07.580 request: 00:28:07.580 { 00:28:07.580 "filename": "non_existing_file", 00:28:07.580 "bdev_name": "Nvme0n1", 00:28:07.580 "method": "bdev_nvme_apply_firmware", 00:28:07.580 "req_id": 1 00:28:07.580 } 00:28:07.580 Got JSON-RPC error response 00:28:07.580 response: 00:28:07.580 { 00:28:07.580 "code": -32603, 00:28:07.580 "message": "open file failed." 00:28:07.580 } 00:28:07.580 02:51:32 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:28:07.580 02:51:32 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:28:07.580 02:51:32 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:07.838 02:51:32 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:28:07.838 02:51:32 -- nvme/nvme_rpc.sh@40 -- # killprocess 152919 00:28:07.838 02:51:32 -- common/autotest_common.sh@926 -- # '[' -z 152919 ']' 00:28:07.838 02:51:32 -- common/autotest_common.sh@930 -- # kill -0 152919 00:28:07.838 02:51:32 -- common/autotest_common.sh@931 -- # uname 00:28:07.838 02:51:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:07.838 02:51:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152919 00:28:07.838 02:51:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:07.838 02:51:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:07.838 killing process with pid 152919 00:28:07.838 02:51:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152919' 00:28:07.838 02:51:32 -- common/autotest_common.sh@945 -- # kill 152919 00:28:07.838 02:51:32 -- common/autotest_common.sh@950 -- # wait 152919 00:28:08.406 00:28:08.406 real 0m2.273s 00:28:08.406 user 0m4.558s 00:28:08.406 sys 0m0.509s 00:28:08.406 02:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.406 02:51:33 -- common/autotest_common.sh@10 -- # set +x 00:28:08.406 ************************************ 00:28:08.406 END TEST nvme_rpc 00:28:08.406 ************************************ 00:28:08.406 02:51:33 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:08.406 02:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:08.406 02:51:33 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:28:08.406 02:51:33 -- common/autotest_common.sh@10 -- # set +x 00:28:08.406 ************************************ 00:28:08.406 START TEST nvme_rpc_timeouts 00:28:08.406 ************************************ 00:28:08.406 02:51:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:08.406 * Looking for test storage... 00:28:08.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_152974 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_152974 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=152998 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:28:08.406 02:51:33 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 152998 00:28:08.406 02:51:33 -- common/autotest_common.sh@819 -- # '[' -z 152998 ']' 00:28:08.406 02:51:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.406 02:51:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:08.406 02:51:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.407 02:51:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:08.407 02:51:33 -- common/autotest_common.sh@10 -- # set +x 00:28:08.407 [2024-07-11 02:51:33.439662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:08.407 [2024-07-11 02:51:33.439831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152998 ] 00:28:08.667 [2024-07-11 02:51:33.581804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.667 [2024-07-11 02:51:33.653377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:08.667 [2024-07-11 02:51:33.653747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.668 [2024-07-11 02:51:33.653744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.604 02:51:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:09.604 Checking default timeout settings: 00:28:09.604 02:51:34 -- common/autotest_common.sh@852 -- # return 0 00:28:09.604 02:51:34 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:28:09.604 02:51:34 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:09.862 Making settings changes with rpc: 00:28:09.862 02:51:34 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:28:09.862 02:51:34 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:28:10.121 Check default vs. 
modified settings: 00:28:10.121 02:51:34 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:28:10.121 02:51:34 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:28:10.380 Setting action_on_timeout is changed as expected. 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:28:10.380 Setting timeout_us is changed as expected. 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:28:10.380 Setting timeout_admin_us is changed as expected. 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
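[annotation] The comparison loop above condenses to a small pattern: dump save_config before and after the bdev_nvme_set_options call, then for each setting grep both dumps, take the value column, strip punctuation, and demand that the two differ. A sketch using the same tmp files as this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    default=/tmp/settings_default_152974
    modified=/tmp/settings_modified_152974

    check_setting() {
        local setting=$1 before after
        # grab the value column and strip everything but [a-zA-Z0-9]
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep  "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && return 1   # change expected but not observed
        echo "Setting $setting is changed as expected."
    }

    for s in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$s"
    done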
00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_152974 /tmp/settings_modified_152974 00:28:10.380 02:51:35 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 152998 00:28:10.380 02:51:35 -- common/autotest_common.sh@926 -- # '[' -z 152998 ']' 00:28:10.380 02:51:35 -- common/autotest_common.sh@930 -- # kill -0 152998 00:28:10.380 02:51:35 -- common/autotest_common.sh@931 -- # uname 00:28:10.380 02:51:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:10.380 02:51:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152998 00:28:10.380 02:51:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:10.380 killing process with pid 152998 00:28:10.380 02:51:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:10.380 02:51:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152998' 00:28:10.380 02:51:35 -- common/autotest_common.sh@945 -- # kill 152998 00:28:10.380 02:51:35 -- common/autotest_common.sh@950 -- # wait 152998 00:28:10.948 RPC TIMEOUT SETTING TEST PASSED. 00:28:10.948 02:51:35 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:28:10.948 00:28:10.948 real 0m2.465s 00:28:10.948 user 0m5.122s 00:28:10.948 sys 0m0.532s 00:28:10.948 02:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:10.948 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:28:10.948 ************************************ 00:28:10.948 END TEST nvme_rpc_timeouts 00:28:10.948 ************************************ 00:28:10.948 02:51:35 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:28:10.948 02:51:35 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@268 -- # timing_exit lib 00:28:10.948 02:51:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:10.948 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:28:10.948 02:51:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:10.948 02:51:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:10.948 02:51:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:10.948 02:51:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:10.948 02:51:35 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:28:10.948 02:51:35 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:10.948 02:51:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:10.948 02:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 
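[annotation] killprocess, used to tear down every target in these tests, follows one pattern: confirm the pid is alive with kill -0, sanity-check the command name (the SPDK target appears as reactor_0), then kill and wait. A simplified sketch -- the real helper also special-cases targets launched under sudo, which is omitted here:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                       # not running
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for spdk_tgt
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and return status
    }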
00:28:10.948 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:28:10.948 ************************************ 00:28:10.948 START TEST blockdev_raid5f 00:28:10.948 ************************************ 00:28:10.948 02:51:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:10.948 * Looking for test storage... 00:28:10.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:10.948 02:51:35 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:10.948 02:51:35 -- bdev/nbd_common.sh@6 -- # set -e 00:28:10.948 02:51:35 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:10.948 02:51:35 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:10.948 02:51:35 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:10.948 02:51:35 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:10.948 02:51:35 -- bdev/blockdev.sh@18 -- # : 00:28:10.948 02:51:35 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:10.948 02:51:35 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:10.948 02:51:35 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:10.948 02:51:35 -- bdev/blockdev.sh@672 -- # uname -s 00:28:10.948 02:51:35 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:10.948 02:51:35 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:10.948 02:51:35 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:28:10.948 02:51:35 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:10.948 02:51:35 -- bdev/blockdev.sh@682 -- # dek= 00:28:10.949 02:51:35 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:10.949 02:51:35 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:10.949 02:51:35 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:10.949 02:51:35 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:28:10.949 02:51:35 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:28:10.949 02:51:35 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:10.949 02:51:35 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=153138 00:28:10.949 02:51:35 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:10.949 02:51:35 -- bdev/blockdev.sh@47 -- # waitforlisten 153138 00:28:10.949 02:51:35 -- common/autotest_common.sh@819 -- # '[' -z 153138 ']' 00:28:10.949 02:51:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.949 02:51:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:10.949 02:51:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.949 02:51:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:10.949 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:28:10.949 02:51:35 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:10.949 [2024-07-11 02:51:35.996304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
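[annotation] The waitforlisten trace above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) amounts to polling until the target's RPC socket answers. A hedged sketch -- the rpc_get_methods probe is an assumption here; any cheap RPC that succeeds once the listener is up would do:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                          # target died early
            "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1   # never came up
    }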
00:28:10.949 [2024-07-11 02:51:35.997171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153138 ] 00:28:11.208 [2024-07-11 02:51:36.142215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.208 [2024-07-11 02:51:36.212063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:11.208 [2024-07-11 02:51:36.212310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.145 02:51:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:12.145 02:51:36 -- common/autotest_common.sh@852 -- # return 0 00:28:12.145 02:51:36 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:12.145 02:51:36 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:28:12.145 02:51:36 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:28:12.145 02:51:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 Malloc0 00:28:12.145 Malloc1 00:28:12.145 Malloc2 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:12.145 02:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@738 -- # cat 00:28:12.145 02:51:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:12.145 02:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:12.145 02:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:12.145 02:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:12.145 02:51:37 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:12.145 02:51:37 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:12.145 02:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.145 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.145 02:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.145 02:51:37 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:12.145 02:51:37 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "04c45814-6356-47f7-ba76-d9252caf66eb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "04c45814-6356-47f7-ba76-d9252caf66eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "04c45814-6356-47f7-ba76-d9252caf66eb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "36d62d90-e48f-4d76-b99e-cf6d90dc69bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "1b48f615-363b-4a50-b16c-e6c0e4a829cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "797ed93f-c42c-40c2-a41a-a2e5fea876b8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:12.145 02:51:37 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:12.145 02:51:37 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:12.145 02:51:37 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:28:12.145 02:51:37 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:12.145 02:51:37 -- bdev/blockdev.sh@752 -- # killprocess 153138 00:28:12.145 02:51:37 -- common/autotest_common.sh@926 -- # '[' -z 153138 ']' 00:28:12.145 02:51:37 -- common/autotest_common.sh@930 -- # kill -0 153138 00:28:12.145 02:51:37 -- common/autotest_common.sh@931 -- # uname 00:28:12.145 02:51:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.145 02:51:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153138 00:28:12.145 02:51:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:12.145 killing process with pid 153138 00:28:12.145 02:51:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:12.145 02:51:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153138' 00:28:12.145 02:51:37 -- common/autotest_common.sh@945 -- # kill 153138 00:28:12.145 02:51:37 -- common/autotest_common.sh@950 -- # wait 153138 00:28:12.712 02:51:37 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:12.712 02:51:37 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:12.712 02:51:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:12.712 02:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.712 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 ************************************ 00:28:12.712 START TEST bdev_hello_world 00:28:12.712 ************************************ 00:28:12.712 02:51:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:12.713 [2024-07-11 02:51:37.740739] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:12.713 [2024-07-11 02:51:37.741001] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153195 ] 00:28:12.971 [2024-07-11 02:51:37.885998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.971 [2024-07-11 02:51:37.942766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.229 [2024-07-11 02:51:38.156467] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:13.229 [2024-07-11 02:51:38.156571] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:13.229 [2024-07-11 02:51:38.156607] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:13.229 [2024-07-11 02:51:38.156992] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:13.229 [2024-07-11 02:51:38.157144] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:13.229 [2024-07-11 02:51:38.157174] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:13.229 [2024-07-11 02:51:38.157269] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:13.229 00:28:13.229 [2024-07-11 02:51:38.157308] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:13.488 00:28:13.488 real 0m0.754s 00:28:13.488 user 0m0.427s 00:28:13.488 sys 0m0.215s 00:28:13.488 02:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.488 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:28:13.488 ************************************ 00:28:13.488 END TEST bdev_hello_world 00:28:13.488 ************************************ 00:28:13.488 02:51:38 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:13.488 02:51:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:13.488 02:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.488 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:28:13.488 ************************************ 00:28:13.488 START TEST bdev_bounds 00:28:13.488 ************************************ 00:28:13.488 02:51:38 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:28:13.488 02:51:38 -- bdev/blockdev.sh@288 -- # bdevio_pid=153226 00:28:13.488 02:51:38 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:13.488 02:51:38 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:13.488 02:51:38 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 153226' 00:28:13.488 Process bdevio pid: 153226 00:28:13.488 02:51:38 -- bdev/blockdev.sh@291 -- # waitforlisten 153226 00:28:13.488 02:51:38 -- common/autotest_common.sh@819 -- # '[' -z 153226 ']' 00:28:13.488 02:51:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.488 02:51:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.488 02:51:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
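[annotation] bdev_hello_world is a thin wrapper over the hello_bdev example; the invocation traced above boils down to (paths as in this run):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b raid5f
    # Expected notices: open the raid5f bdev, write "Hello World!",
    # read it back, then stop the app -- exactly the sequence logged above.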
00:28:13.488 02:51:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.488 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:28:13.488 [2024-07-11 02:51:38.545080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:13.488 [2024-07-11 02:51:38.545307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153226 ] 00:28:13.747 [2024-07-11 02:51:38.716806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:13.747 [2024-07-11 02:51:38.804819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.747 [2024-07-11 02:51:38.804974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.747 [2024-07-11 02:51:38.804969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.682 02:51:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:14.682 02:51:39 -- common/autotest_common.sh@852 -- # return 0 00:28:14.682 02:51:39 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:14.682 I/O targets: 00:28:14.682 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:14.682 00:28:14.682 00:28:14.682 CUnit - A unit testing framework for C - Version 2.1-3 00:28:14.682 http://cunit.sourceforge.net/ 00:28:14.682 00:28:14.682 00:28:14.682 Suite: bdevio tests on: raid5f 00:28:14.682 Test: blockdev write read block ...passed 00:28:14.682 Test: blockdev write zeroes read block ...passed 00:28:14.682 Test: blockdev write zeroes read no split ...passed 00:28:14.682 Test: blockdev write zeroes read split ...passed 00:28:14.682 Test: blockdev write zeroes read split partial ...passed 00:28:14.682 Test: blockdev reset ...passed 00:28:14.682 Test: blockdev write read 8 blocks ...passed 00:28:14.682 Test: blockdev write read size > 128k ...passed 00:28:14.682 Test: blockdev write read invalid size ...passed 00:28:14.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:14.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:14.682 Test: blockdev write read max offset ...passed 00:28:14.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:14.682 Test: blockdev writev readv 8 blocks ...passed 00:28:14.682 Test: blockdev writev readv 30 x 1block ...passed 00:28:14.682 Test: blockdev writev readv block ...passed 00:28:14.682 Test: blockdev writev readv size > 128k ...passed 00:28:14.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:14.682 Test: blockdev comparev and writev ...passed 00:28:14.682 Test: blockdev nvme passthru rw ...passed 00:28:14.682 Test: blockdev nvme passthru vendor specific ...passed 00:28:14.682 Test: blockdev nvme admin passthru ...passed 00:28:14.682 Test: blockdev copy ...passed 00:28:14.682 00:28:14.682 Run Summary: Type Total Ran Passed Failed Inactive 00:28:14.682 suites 1 1 n/a 0 0 00:28:14.682 tests 23 23 23 0 0 00:28:14.682 asserts 130 130 130 0 n/a 00:28:14.682 00:28:14.682 Elapsed time = 0.368 seconds 00:28:14.682 0 00:28:14.682 02:51:39 -- bdev/blockdev.sh@293 -- # killprocess 153226 00:28:14.682 02:51:39 -- common/autotest_common.sh@926 -- # '[' -z 153226 ']' 00:28:14.682 02:51:39 -- common/autotest_common.sh@930 -- # kill -0 153226 00:28:14.682 02:51:39 -- common/autotest_common.sh@931 -- # uname 00:28:14.682 02:51:39 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:14.682 02:51:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153226 00:28:14.942 02:51:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:14.942 killing process with pid 153226 00:28:14.942 02:51:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:14.942 02:51:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153226' 00:28:14.942 02:51:39 -- common/autotest_common.sh@945 -- # kill 153226 00:28:14.942 02:51:39 -- common/autotest_common.sh@950 -- # wait 153226 00:28:15.200 02:51:40 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:15.200 00:28:15.200 real 0m1.572s 00:28:15.200 user 0m3.781s 00:28:15.200 sys 0m0.345s 00:28:15.200 02:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.200 02:51:40 -- common/autotest_common.sh@10 -- # set +x 00:28:15.200 ************************************ 00:28:15.200 END TEST bdev_bounds 00:28:15.200 ************************************ 00:28:15.200 02:51:40 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:15.200 02:51:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:15.200 02:51:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:15.200 02:51:40 -- common/autotest_common.sh@10 -- # set +x 00:28:15.201 ************************************ 00:28:15.201 START TEST bdev_nbd 00:28:15.201 ************************************ 00:28:15.201 02:51:40 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:15.201 02:51:40 -- bdev/blockdev.sh@298 -- # uname -s 00:28:15.201 02:51:40 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:15.201 02:51:40 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:15.201 02:51:40 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:15.201 02:51:40 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:28:15.201 02:51:40 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:15.201 02:51:40 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:28:15.201 02:51:40 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:15.201 02:51:40 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:28:15.201 02:51:40 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:15.201 02:51:40 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:28:15.201 02:51:40 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:28:15.201 02:51:40 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:15.201 02:51:40 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:28:15.201 02:51:40 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:15.201 02:51:40 -- bdev/blockdev.sh@316 -- # nbd_pid=153288 00:28:15.201 02:51:40 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:15.201 02:51:40 -- bdev/blockdev.sh@318 -- # waitforlisten 153288 /var/tmp/spdk-nbd.sock 00:28:15.201 02:51:40 -- common/autotest_common.sh@819 -- # '[' -z 153288 ']' 00:28:15.201 02:51:40 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:15.201 02:51:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:15.201 02:51:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:15.201 02:51:40 
-- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:15.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:15.201 02:51:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:15.201 02:51:40 -- common/autotest_common.sh@10 -- # set +x 00:28:15.201 [2024-07-11 02:51:40.167246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:15.201 [2024-07-11 02:51:40.167425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.459 [2024-07-11 02:51:40.307354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.459 [2024-07-11 02:51:40.377032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.025 02:51:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:16.025 02:51:41 -- common/autotest_common.sh@852 -- # return 0 00:28:16.025 02:51:41 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@24 -- # local i 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:16.025 02:51:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:16.592 02:51:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:16.592 02:51:41 -- common/autotest_common.sh@857 -- # local i 00:28:16.592 02:51:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:16.592 02:51:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:16.592 02:51:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:16.592 02:51:41 -- common/autotest_common.sh@861 -- # break 00:28:16.592 02:51:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:16.592 02:51:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:16.592 02:51:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:16.592 1+0 records in 00:28:16.592 1+0 records out 00:28:16.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000891885 s, 4.6 MB/s 00:28:16.592 02:51:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.592 02:51:41 -- common/autotest_common.sh@874 -- # size=4096 00:28:16.592 02:51:41 -- common/autotest_common.sh@875 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.592 02:51:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:16.592 02:51:41 -- common/autotest_common.sh@877 -- # return 0 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:16.592 02:51:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:16.850 { 00:28:16.850 "nbd_device": "/dev/nbd0", 00:28:16.850 "bdev_name": "raid5f" 00:28:16.850 } 00:28:16.850 ]' 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:16.850 { 00:28:16.850 "nbd_device": "/dev/nbd0", 00:28:16.850 "bdev_name": "raid5f" 00:28:16.850 } 00:28:16.850 ]' 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@51 -- # local i 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:16.850 02:51:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@41 -- # break 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@45 -- # return 0 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.109 02:51:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:17.109 02:51:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:17.109 02:51:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:17.109 02:51:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@65 -- # true 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@65 -- # count=0 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@122 -- # count=0 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@127 -- # return 0 00:28:17.367 02:51:42 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@91 -- # 
bdev_list=($2) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@12 -- # local i 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.367 02:51:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:28:17.630 /dev/nbd0 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:17.630 02:51:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:17.630 02:51:42 -- common/autotest_common.sh@857 -- # local i 00:28:17.630 02:51:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:17.630 02:51:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:17.630 02:51:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:17.630 02:51:42 -- common/autotest_common.sh@861 -- # break 00:28:17.630 02:51:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:17.630 02:51:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:17.630 02:51:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:17.630 1+0 records in 00:28:17.630 1+0 records out 00:28:17.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322079 s, 12.7 MB/s 00:28:17.630 02:51:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:17.630 02:51:42 -- common/autotest_common.sh@874 -- # size=4096 00:28:17.630 02:51:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:17.630 02:51:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:17.630 02:51:42 -- common/autotest_common.sh@877 -- # return 0 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.630 02:51:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:17.904 { 00:28:17.904 "nbd_device": "/dev/nbd0", 00:28:17.904 "bdev_name": "raid5f" 00:28:17.904 } 00:28:17.904 ]' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:17.904 { 00:28:17.904 "nbd_device": "/dev/nbd0", 00:28:17.904 "bdev_name": "raid5f" 00:28:17.904 } 00:28:17.904 ]' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@65 -- # echo 
/dev/nbd0 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@65 -- # count=1 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@66 -- # echo 1 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@95 -- # count=1 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:17.904 256+0 records in 00:28:17.904 256+0 records out 00:28:17.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00845018 s, 124 MB/s 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:17.904 256+0 records in 00:28:17.904 256+0 records out 00:28:17.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284113 s, 36.9 MB/s 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@51 -- # local i 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:17.904 02:51:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:18.173 02:51:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.173 02:51:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@41 -- # break 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:18.174 02:51:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:18.431 02:51:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@65 -- # true 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@65 -- # count=0 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@104 -- # count=0 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@109 -- # return 0 00:28:18.689 02:51:43 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:18.689 malloc_lvol_verify 00:28:18.689 02:51:43 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:18.947 82342d20-a5f3-4957-b974-5232dccff7b8 00:28:19.205 02:51:44 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:19.205 29c21193-baeb-4183-af62-ffb965dd9188 00:28:19.205 02:51:44 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:19.463 /dev/nbd0 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:19.463 mke2fs 1.45.5 (07-Jan-2020) 00:28:19.463 00:28:19.463 Filesystem too small for a journal 00:28:19.463 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:19.463 00:28:19.463 Allocating group tables: 0/1 done 00:28:19.463 Writing inode tables: 0/1 done 00:28:19.463 Writing superblocks and filesystem accounting information: 0/1 done 00:28:19.463 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@51 -- # local i 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:19.463 02:51:44 -- bdev/nbd_common.sh@54 -- # 
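[annotation] The retry loop just traced (grep against /proc/partitions, sleep 0.1, up to 20 attempts) is the whole of waitfornbd_exit: after nbd_stop_disk, spin until the nbd device drops out of the partition table. As a standalone sketch:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0  # device gone
            sleep 0.1
        done
        return 1   # still present after ~2 seconds
    }

    waitfornbd_exit nbd0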
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@41 -- # break 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@45 -- # return 0 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:19.721 02:51:44 -- bdev/nbd_common.sh@147 -- # return 0 00:28:19.721 02:51:44 -- bdev/blockdev.sh@324 -- # killprocess 153288 00:28:19.721 02:51:44 -- common/autotest_common.sh@926 -- # '[' -z 153288 ']' 00:28:19.721 02:51:44 -- common/autotest_common.sh@930 -- # kill -0 153288 00:28:19.721 02:51:44 -- common/autotest_common.sh@931 -- # uname 00:28:19.721 02:51:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.721 02:51:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153288 00:28:19.721 02:51:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:19.721 killing process with pid 153288 00:28:19.721 02:51:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:19.721 02:51:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153288' 00:28:19.721 02:51:44 -- common/autotest_common.sh@945 -- # kill 153288 00:28:19.721 02:51:44 -- common/autotest_common.sh@950 -- # wait 153288 00:28:19.980 02:51:45 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:19.980 00:28:19.980 real 0m4.892s 00:28:19.980 user 0m7.595s 00:28:19.980 sys 0m0.955s 00:28:19.980 02:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.980 02:51:45 -- common/autotest_common.sh@10 -- # set +x 00:28:19.980 ************************************ 00:28:19.980 END TEST bdev_nbd 00:28:19.980 ************************************ 00:28:19.980 02:51:45 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:19.980 02:51:45 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:28:19.980 02:51:45 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:28:19.980 02:51:45 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:28:19.980 02:51:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:19.980 02:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.980 02:51:45 -- common/autotest_common.sh@10 -- # set +x 00:28:19.980 ************************************ 00:28:19.980 START TEST bdev_fio 00:28:19.980 ************************************ 00:28:19.980 02:51:45 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:28:19.980 02:51:45 -- bdev/blockdev.sh@329 -- # local env_context 00:28:19.980 02:51:45 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:19.980 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:19.980 02:51:45 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:19.980 02:51:45 -- bdev/blockdev.sh@337 -- # echo '' 00:28:19.980 02:51:45 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:28:19.980 02:51:45 -- bdev/blockdev.sh@337 -- # env_context= 00:28:19.980 02:51:45 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 
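[annotation] fio_config_gen stitches the job file together from pieces visible in the trace: a verify-workload global section (its contents are elided by the heredoc, so the one below is an assumption), serialize_overlap=1 appended after the fio-3.35 version check, and one [job_raid5f] stanza naming the bdev. A plausible reconstruction:

    cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
    [global]
    thread=1             ; assumed verify-workload defaults
    direct=1
    rw=randwrite         ; matches the job line fio prints below
    verify=crc32c        ; assumed checksum method
    serialize_overlap=1  ; appended after the fio >= 3 version check above

    [job_raid5f]
    filename=raid5f
    EOF

fio_bdev then runs this file with --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 and the JSON bdev config, as the fio_params line below shows.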
00:28:19.980 02:51:45 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:19.980 02:51:45 -- common/autotest_common.sh@1260 -- # local workload=verify 00:28:19.980 02:51:45 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:28:19.980 02:51:45 -- common/autotest_common.sh@1262 -- # local env_context= 00:28:19.980 02:51:45 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:28:19.980 02:51:45 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:19.980 02:51:45 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:28:19.980 02:51:45 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:28:19.980 02:51:45 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:19.980 02:51:45 -- common/autotest_common.sh@1280 -- # cat 00:28:20.238 02:51:45 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:28:20.238 02:51:45 -- common/autotest_common.sh@1293 -- # cat 00:28:20.238 02:51:45 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:28:20.238 02:51:45 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:28:20.238 02:51:45 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:20.238 02:51:45 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:28:20.238 02:51:45 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:28:20.238 02:51:45 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:28:20.238 02:51:45 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:28:20.238 02:51:45 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:20.238 02:51:45 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:20.238 02:51:45 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:28:20.238 02:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.238 02:51:45 -- common/autotest_common.sh@10 -- # set +x 00:28:20.238 ************************************ 00:28:20.238 START TEST bdev_fio_rw_verify 00:28:20.238 ************************************ 00:28:20.238 02:51:45 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:20.238 02:51:45 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:20.238 02:51:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:20.238 02:51:45 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:28:20.238 02:51:45 -- common/autotest_common.sh@1318 -- # local 
sanitizers 00:28:20.238 02:51:45 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:20.238 02:51:45 -- common/autotest_common.sh@1320 -- # shift 00:28:20.238 02:51:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:20.238 02:51:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.238 02:51:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:20.238 02:51:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:20.238 02:51:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:20.238 02:51:45 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:28:20.238 02:51:45 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:28:20.238 02:51:45 -- common/autotest_common.sh@1326 -- # break 00:28:20.239 02:51:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:20.239 02:51:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:20.239 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:20.239 fio-3.35 00:28:20.239 Starting 1 thread 00:28:32.442 00:28:32.442 job_raid5f: (groupid=0, jobs=1): err= 0: pid=153523: Thu Jul 11 02:51:55 2024 00:28:32.442 read: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(427MiB/10001msec) 00:28:32.442 slat (usec): min=18, max=254, avg=21.90, stdev= 4.73 00:28:32.442 clat (usec): min=11, max=478, avg=144.55, stdev=54.24 00:28:32.442 lat (usec): min=32, max=500, avg=166.45, stdev=55.19 00:28:32.442 clat percentiles (usec): 00:28:32.442 | 50.000th=[ 145], 99.000th=[ 273], 99.900th=[ 314], 99.990th=[ 343], 00:28:32.442 | 99.999th=[ 457] 00:28:32.442 write: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(441MiB/9878msec); 0 zone resets 00:28:32.442 slat (usec): min=9, max=168, avg=19.30, stdev= 5.09 00:28:32.442 clat (usec): min=59, max=847, avg=331.60, stdev=54.21 00:28:32.442 lat (usec): min=76, max=1015, avg=350.90, stdev=55.91 00:28:32.442 clat percentiles (usec): 00:28:32.442 | 50.000th=[ 330], 99.000th=[ 482], 99.900th=[ 562], 99.990th=[ 742], 00:28:32.442 | 99.999th=[ 824] 00:28:32.442 bw ( KiB/s): min=42216, max=49880, per=99.13%, avg=45354.95, stdev=2100.64, samples=19 00:28:32.442 iops : min=10554, max=12470, avg=11338.74, stdev=525.16, samples=19 00:28:32.442 lat (usec) : 20=0.01%, 50=0.01%, 100=11.61%, 250=39.17%, 500=48.93% 00:28:32.442 lat (usec) : 750=0.29%, 1000=0.01% 00:28:32.442 cpu : usr=99.35%, sys=0.63%, ctx=25, majf=0, minf=10830 00:28:32.442 IO depths : 1=7.8%, 2=20.1%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.442 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.442 issued rwts: total=109392,112989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.442 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:32.442 00:28:32.442 Run status group 0 (all jobs): 00:28:32.442 READ: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=427MiB (448MB), run=10001-10001msec 00:28:32.442 WRITE: 
bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=441MiB (463MB), run=9878-9878msec 00:28:32.442 ----------------------------------------------------- 00:28:32.442 Suppressions used: 00:28:32.442 count bytes template 00:28:32.442 1 7 /usr/src/fio/parse.c 00:28:32.442 204 19584 /usr/src/fio/iolog.c 00:28:32.442 2 596 libcrypto.so 00:28:32.442 ----------------------------------------------------- 00:28:32.442 00:28:32.442 00:28:32.442 real 0m11.297s 00:28:32.442 user 0m11.865s 00:28:32.442 sys 0m0.593s 00:28:32.442 02:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.442 ************************************ 00:28:32.442 END TEST bdev_fio_rw_verify 00:28:32.442 ************************************ 00:28:32.442 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:28:32.442 02:51:56 -- bdev/blockdev.sh@348 -- # rm -f 00:28:32.442 02:51:56 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:32.442 02:51:56 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:32.442 02:51:56 -- common/autotest_common.sh@1260 -- # local workload=trim 00:28:32.442 02:51:56 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:28:32.442 02:51:56 -- common/autotest_common.sh@1262 -- # local env_context= 00:28:32.442 02:51:56 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:28:32.442 02:51:56 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:32.442 02:51:56 -- common/autotest_common.sh@1280 -- # cat 00:28:32.442 02:51:56 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:28:32.442 02:51:56 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:32.442 02:51:56 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "04c45814-6356-47f7-ba76-d9252caf66eb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "04c45814-6356-47f7-ba76-d9252caf66eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "04c45814-6356-47f7-ba76-d9252caf66eb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "36d62d90-e48f-4d76-b99e-cf6d90dc69bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "1b48f615-363b-4a50-b16c-e6c0e4a829cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "797ed93f-c42c-40c2-a41a-a2e5fea876b8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:32.442 02:51:56 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:28:32.442 02:51:56 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:32.442 /home/vagrant/spdk_repo/spdk 00:28:32.442 02:51:56 -- bdev/blockdev.sh@360 -- # popd 00:28:32.442 02:51:56 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:28:32.442 02:51:56 -- bdev/blockdev.sh@362 -- # return 0 00:28:32.442 00:28:32.442 real 0m11.469s 00:28:32.442 user 0m11.993s 00:28:32.442 sys 0m0.636s 00:28:32.442 02:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.442 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:28:32.442 ************************************ 00:28:32.442 END TEST bdev_fio 00:28:32.442 ************************************ 00:28:32.442 02:51:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:32.442 02:51:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:32.442 02:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.442 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:28:32.442 ************************************ 00:28:32.442 START TEST bdev_verify 00:28:32.442 ************************************ 00:28:32.442 02:51:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:32.442 [2024-07-11 02:51:56.641540] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:32.442 [2024-07-11 02:51:56.641810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153713 ] 00:28:32.442 [2024-07-11 02:51:56.793252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:32.442 [2024-07-11 02:51:56.848824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.442 [2024-07-11 02:51:56.848831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.442 Running I/O for 5 seconds... 
00:28:37.707 00:28:37.707 Latency(us) 00:28:37.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.707 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.707 Verification LBA range: start 0x0 length 0x2000 00:28:37.707 raid5f : 5.01 11766.94 45.96 0.00 0.00 17232.43 333.27 13464.67 00:28:37.707 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.707 Verification LBA range: start 0x2000 length 0x2000 00:28:37.707 raid5f : 5.01 11765.17 45.96 0.00 0.00 17234.35 209.45 17992.61 00:28:37.707 =================================================================================================================== 00:28:37.707 Total : 23532.11 91.92 0.00 0.00 17233.39 209.45 17992.61 00:28:37.707 00:28:37.707 real 0m5.757s 00:28:37.707 user 0m10.804s 00:28:37.707 sys 0m0.220s 00:28:37.707 02:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.707 02:52:02 -- common/autotest_common.sh@10 -- # set +x 00:28:37.707 ************************************ 00:28:37.707 END TEST bdev_verify 00:28:37.707 ************************************ 00:28:37.707 02:52:02 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:37.707 02:52:02 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:37.707 02:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:37.707 02:52:02 -- common/autotest_common.sh@10 -- # set +x 00:28:37.707 ************************************ 00:28:37.707 START TEST bdev_verify_big_io 00:28:37.707 ************************************ 00:28:37.707 02:52:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:37.707 [2024-07-11 02:52:02.437956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:37.707 [2024-07-11 02:52:02.438772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153803 ] 00:28:37.707 [2024-07-11 02:52:02.581273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:37.707 [2024-07-11 02:52:02.641352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.707 [2024-07-11 02:52:02.641358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.965 Running I/O for 5 seconds... 
00:28:43.227 00:28:43.227 Latency(us) 00:28:43.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.227 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:43.227 Verification LBA range: start 0x0 length 0x200 00:28:43.227 raid5f : 5.13 791.99 49.50 0.00 0.00 4212809.87 318.37 125829.12 00:28:43.227 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:43.227 Verification LBA range: start 0x200 length 0x200 00:28:43.227 raid5f : 5.13 798.63 49.91 0.00 0.00 4178593.67 301.61 127735.62 00:28:43.227 =================================================================================================================== 00:28:43.227 Total : 1590.62 99.41 0.00 0.00 4195630.42 301.61 127735.62 00:28:43.227 00:28:43.227 real 0m5.855s 00:28:43.227 user 0m11.032s 00:28:43.227 sys 0m0.201s 00:28:43.227 02:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.227 02:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:43.227 ************************************ 00:28:43.227 END TEST bdev_verify_big_io 00:28:43.227 ************************************ 00:28:43.227 02:52:08 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:43.227 02:52:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:43.227 02:52:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.227 02:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:43.227 ************************************ 00:28:43.227 START TEST bdev_write_zeroes 00:28:43.227 ************************************ 00:28:43.227 02:52:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:43.485 [2024-07-11 02:52:08.358754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:43.485 [2024-07-11 02:52:08.359003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153907 ] 00:28:43.485 [2024-07-11 02:52:08.503230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.485 [2024-07-11 02:52:08.558797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.743 Running I/O for 1 seconds... 
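The big-I/O throughput figures above can be cross-checked by hand: IOPS times the 64 KiB block size should reproduce the MiB/s column. A one-liner that does the arithmetic, with the numbers copied from the raid5f rows above:

    # 791.99 IOPS * 65536 B per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 791.99 * 65536 / (1024 * 1024) }'
    # -> 49.50 MiB/s, matching the MiB/s column for the first raid5f job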
00:28:45.138 00:28:45.138 Latency(us) 00:28:45.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.138 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:45.138 raid5f : 1.00 27137.62 106.01 0.00 0.00 4701.91 1519.24 6583.39 00:28:45.138 =================================================================================================================== 00:28:45.138 Total : 27137.62 106.01 0.00 0.00 4701.91 1519.24 6583.39 00:28:45.138 00:28:45.138 real 0m1.767s 00:28:45.138 user 0m1.438s 00:28:45.138 sys 0m0.204s 00:28:45.138 02:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.138 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.138 ************************************ 00:28:45.138 END TEST bdev_write_zeroes 00:28:45.138 ************************************ 00:28:45.138 02:52:10 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:45.138 02:52:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:45.138 02:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.138 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.138 ************************************ 00:28:45.138 START TEST bdev_json_nonenclosed 00:28:45.138 ************************************ 00:28:45.138 02:52:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:45.138 [2024-07-11 02:52:10.176667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:45.138 [2024-07-11 02:52:10.177108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153957 ] 00:28:45.397 [2024-07-11 02:52:10.321730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.397 [2024-07-11 02:52:10.387274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.397 [2024-07-11 02:52:10.387762] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:45.397 [2024-07-11 02:52:10.387927] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.655 00:28:45.655 real 0m0.370s 00:28:45.655 user 0m0.189s 00:28:45.655 sys 0m0.081s 00:28:45.655 02:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.655 ************************************ 00:28:45.655 END TEST bdev_json_nonenclosed 00:28:45.655 ************************************ 00:28:45.655 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.655 02:52:10 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:45.655 02:52:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:45.655 02:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.655 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.655 ************************************ 00:28:45.655 START TEST bdev_json_nonarray 00:28:45.655 ************************************ 00:28:45.655 02:52:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:45.655 [2024-07-11 02:52:10.599732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:45.655 [2024-07-11 02:52:10.599995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153979 ] 00:28:45.655 [2024-07-11 02:52:10.746612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.914 [2024-07-11 02:52:10.811341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.914 [2024-07-11 02:52:10.811843] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
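The two negative tests above, bdev_json_nonenclosed and bdev_json_nonarray, deliberately feed bdevperf configs that trip the two errors logged: a top-level document not enclosed in {} and a 'subsystems' key that is not an array. The actual nonenclosed.json and nonarray.json files are not reproduced in this log; plausible minimal reproductions and a valid skeleton would be (assumed contents):

    # top level not enclosed in {} -> "Invalid JSON configuration: not enclosed in {}."
    echo '"subsystems": []' > nonenclosed.json
    # subsystems is an object, not an array -> "'subsystems' should be an array."
    echo '{ "subsystems": {} }' > nonarray.json
    # minimal shape the JSON config loader accepts
    echo '{ "subsystems": [] }' > valid.json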
00:28:45.914 [2024-07-11 02:52:10.812008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.914 00:28:45.914 real 0m0.373s 00:28:45.914 user 0m0.160s 00:28:45.914 sys 0m0.113s 00:28:45.914 02:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.914 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.914 ************************************ 00:28:45.914 END TEST bdev_json_nonarray 00:28:45.914 ************************************ 00:28:45.914 02:52:10 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:45.914 02:52:10 -- bdev/blockdev.sh@809 -- # cleanup 00:28:45.914 02:52:10 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:45.914 02:52:10 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:45.914 02:52:10 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:28:45.914 02:52:10 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:28:45.914 00:28:45.914 real 0m35.119s 00:28:45.914 user 0m49.643s 00:28:45.914 sys 0m3.601s 00:28:45.914 02:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.914 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:45.914 ************************************ 00:28:45.914 END TEST blockdev_raid5f 00:28:45.914 ************************************ 00:28:46.172 02:52:11 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:28:46.172 02:52:11 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:28:46.173 02:52:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:46.173 02:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:46.173 02:52:11 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:28:46.173 02:52:11 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:28:46.173 02:52:11 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:28:46.173 02:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:47.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:47.549 Waiting for block devices as requested 00:28:47.549 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:47.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:47.808 Cleaning 00:28:47.808 Removing: /var/run/dpdk/spdk0/config 00:28:47.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:47.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:47.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:47.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:47.808 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:47.808 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:47.808 Removing: /dev/shm/spdk_tgt_trace.pid116437 00:28:47.808 Removing: /var/run/dpdk/spdk0 00:28:47.808 Removing: /var/run/dpdk/spdk_pid116263 00:28:47.808 Removing: /var/run/dpdk/spdk_pid116437 00:28:47.808 Removing: /var/run/dpdk/spdk_pid116724 00:28:47.808 Removing: /var/run/dpdk/spdk_pid116968 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117145 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117225 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117306 
00:28:47.808 Removing: /var/run/dpdk/spdk_pid117397 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117481 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117526 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117594 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117658 00:28:47.808 Removing: /var/run/dpdk/spdk_pid117769 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118320 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118381 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118429 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118450 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118542 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118563 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118632 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118653 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118703 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118725 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118800 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118823 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118948 00:28:48.067 Removing: /var/run/dpdk/spdk_pid118993 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119034 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119112 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119177 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119223 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119302 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119329 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119371 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119398 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119440 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119469 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119509 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119557 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119597 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119626 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119661 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119695 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119730 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119781 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119815 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119844 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119884 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119913 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119953 00:28:48.067 Removing: /var/run/dpdk/spdk_pid119980 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120035 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120069 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120104 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120138 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120173 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120206 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120242 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120291 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120331 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120360 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120401 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120429 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120464 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120520 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120565 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120595 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120645 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120674 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120714 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120763 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120810 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120886 00:28:48.067 Removing: /var/run/dpdk/spdk_pid120993 00:28:48.067 Removing: /var/run/dpdk/spdk_pid121147 00:28:48.067 
Removing: /var/run/dpdk/spdk_pid121228 00:28:48.067 Removing: /var/run/dpdk/spdk_pid121266 00:28:48.067 Removing: /var/run/dpdk/spdk_pid122549 00:28:48.067 Removing: /var/run/dpdk/spdk_pid122764 00:28:48.067 Removing: /var/run/dpdk/spdk_pid122972 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123071 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123213 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123263 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123285 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123316 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123824 00:28:48.067 Removing: /var/run/dpdk/spdk_pid123906 00:28:48.067 Removing: /var/run/dpdk/spdk_pid124023 00:28:48.067 Removing: /var/run/dpdk/spdk_pid124069 00:28:48.067 Removing: /var/run/dpdk/spdk_pid125280 00:28:48.067 Removing: /var/run/dpdk/spdk_pid126172 00:28:48.067 Removing: /var/run/dpdk/spdk_pid127085 00:28:48.067 Removing: /var/run/dpdk/spdk_pid128203 00:28:48.067 Removing: /var/run/dpdk/spdk_pid129285 00:28:48.067 Removing: /var/run/dpdk/spdk_pid130391 00:28:48.067 Removing: /var/run/dpdk/spdk_pid131924 00:28:48.067 Removing: /var/run/dpdk/spdk_pid133173 00:28:48.067 Removing: /var/run/dpdk/spdk_pid134414 00:28:48.067 Removing: /var/run/dpdk/spdk_pid135119 00:28:48.067 Removing: /var/run/dpdk/spdk_pid135698 00:28:48.067 Removing: /var/run/dpdk/spdk_pid136349 00:28:48.067 Removing: /var/run/dpdk/spdk_pid136841 00:28:48.067 Removing: /var/run/dpdk/spdk_pid137406 00:28:48.067 Removing: /var/run/dpdk/spdk_pid137995 00:28:48.067 Removing: /var/run/dpdk/spdk_pid138666 00:28:48.067 Removing: /var/run/dpdk/spdk_pid139225 00:28:48.067 Removing: /var/run/dpdk/spdk_pid140651 00:28:48.067 Removing: /var/run/dpdk/spdk_pid141277 00:28:48.067 Removing: /var/run/dpdk/spdk_pid141838 00:28:48.067 Removing: /var/run/dpdk/spdk_pid143413 00:28:48.067 Removing: /var/run/dpdk/spdk_pid144098 00:28:48.067 Removing: /var/run/dpdk/spdk_pid144754 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145570 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145613 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145656 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145714 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145828 00:28:48.067 Removing: /var/run/dpdk/spdk_pid145969 00:28:48.067 Removing: /var/run/dpdk/spdk_pid146195 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146468 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146492 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146531 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146553 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146562 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146614 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146623 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146644 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146664 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146679 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146700 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146720 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146739 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146749 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146776 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146788 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146822 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146842 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146862 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146877 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146911 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146934 00:28:48.326 Removing: /var/run/dpdk/spdk_pid146965 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147034 00:28:48.326 Removing: 
/var/run/dpdk/spdk_pid147063 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147081 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147114 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147126 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147138 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147213 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147220 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147256 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147261 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147282 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147294 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147304 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147316 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147333 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147338 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147376 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147418 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147424 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147465 00:28:48.326 Removing: /var/run/dpdk/spdk_pid147493 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147505 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147558 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147576 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147601 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147621 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147627 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147643 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147656 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147665 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147678 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147687 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147765 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147839 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147954 00:28:48.327 Removing: /var/run/dpdk/spdk_pid147970 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148017 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148086 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148112 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148129 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148151 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148188 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148209 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148279 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148332 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148369 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148622 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148751 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148785 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148870 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148937 00:28:48.327 Removing: /var/run/dpdk/spdk_pid148975 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149243 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149396 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149504 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149548 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149570 00:28:48.327 Removing: /var/run/dpdk/spdk_pid149653 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150193 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150232 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150539 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150664 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150760 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150806 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150828 00:28:48.327 Removing: /var/run/dpdk/spdk_pid150858 00:28:48.327 Removing: /var/run/dpdk/spdk_pid152261 00:28:48.327 Removing: /var/run/dpdk/spdk_pid152388 00:28:48.327 Removing: /var/run/dpdk/spdk_pid152392 00:28:48.327 Removing: 
/var/run/dpdk/spdk_pid152419 00:28:48.327 Removing: /var/run/dpdk/spdk_pid152919 00:28:48.327 Removing: /var/run/dpdk/spdk_pid152998 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153138 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153195 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153226 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153509 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153713 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153803 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153907 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153957 00:28:48.327 Removing: /var/run/dpdk/spdk_pid153979 00:28:48.585 Clean 00:28:48.585 killing process with pid 105541 00:28:48.586 killing process with pid 105562 00:28:48.586 02:52:13 -- common/autotest_common.sh@1436 -- # return 0 00:28:48.586 02:52:13 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:28:48.586 02:52:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:48.586 02:52:13 -- common/autotest_common.sh@10 -- # set +x 00:28:48.586 02:52:13 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:28:48.586 02:52:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:48.586 02:52:13 -- common/autotest_common.sh@10 -- # set +x 00:28:48.586 02:52:13 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:48.586 02:52:13 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:48.586 02:52:13 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:48.586 02:52:13 -- spdk/autotest.sh@394 -- # hash lcov 00:28:48.586 02:52:13 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:48.586 02:52:13 -- spdk/autotest.sh@396 -- # hostname 00:28:48.586 02:52:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:48.843 geninfo: WARNING: invalid characters removed from testname! 
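The coverage post-processing that starts here follows the usual lcov flow: capture per-test data, merge the base and test captures, then strip third-party paths with successive lcov -r passes, as seen below. The run leaves the filtered cov_total.info behind; rendering it to a browsable report is typically a separate genhtml step, which is not part of this log and is shown only as an assumed follow-up:

    # assumed follow-up: turn the filtered tracefile into an HTML report
    genhtml --branch-coverage --legend -o coverage_html \
        /home/vagrant/spdk_repo/spdk/../output/cov_total.info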
00:29:35.555 02:52:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:35.555 02:52:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:38.120 02:53:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:40.649 02:53:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:43.935 02:53:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:47.218 02:53:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:50.503 02:53:14 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:50.503 02:53:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:50.503 02:53:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:50.503 02:53:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.503 02:53:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.503 02:53:14 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:50.503 02:53:14 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:50.503 02:53:14 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:50.503 02:53:14 -- paths/export.sh@5 -- $ export PATH 00:29:50.503 02:53:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:50.503 02:53:14 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:50.503 02:53:14 -- common/autobuild_common.sh@435 -- $ date +%s 00:29:50.503 02:53:14 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720666394.XXXXXX 00:29:50.503 02:53:14 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720666394.H3DdGr 00:29:50.503 02:53:14 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:29:50.503 02:53:14 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:29:50.503 02:53:14 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:50.503 02:53:14 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:50.503 02:53:14 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:50.503 02:53:14 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:50.503 02:53:14 -- common/autobuild_common.sh@451 -- $ get_config_params 00:29:50.503 02:53:14 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:29:50.503 02:53:14 -- common/autotest_common.sh@10 -- $ set +x 00:29:50.503 02:53:14 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:29:50.503 02:53:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:50.503 02:53:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:50.503 02:53:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:29:50.503 02:53:14 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:29:50.503 02:53:14 -- common/autotest_common.sh@10 -- $ set +x 00:29:50.503 02:53:14 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:29:50.503 02:53:14 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:29:50.503 02:53:14 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:50.503 02:53:14 -- 
tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:50.503 02:53:14 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:29:50.503 02:53:14 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:29:50.503 02:53:14 -- spdk/autopackage.sh@40 -- $ get_config_params 00:29:50.503 02:53:14 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:29:50.503 02:53:14 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:29:50.503 02:53:14 -- common/autotest_common.sh@10 -- $ set +x 00:29:50.503 02:53:14 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:29:50.503 02:53:14 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:29:50.503 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:29:50.503 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:29:50.503 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:29:50.503 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:50.503 Using 'verbs' RDMA provider 00:30:03.014 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:30:15.232 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:30:15.232 Creating mk/config.mk...done. 00:30:15.232 Creating mk/cc.flags.mk...done. 00:30:15.232 Type 'make' to build. 00:30:15.232 02:53:39 -- spdk/autopackage.sh@43 -- $ make -j10 00:30:15.233 make[1]: Nothing to be done for 'all'. 
00:30:15.233 CC lib/log/log.o 00:30:15.233 CC lib/log/log_flags.o 00:30:15.233 CC lib/log/log_deprecated.o 00:30:15.233 CC lib/ut/ut.o 00:30:15.233 CC lib/ut_mock/mock.o 00:30:15.233 LIB libspdk_ut_mock.a 00:30:15.233 LIB libspdk_log.a 00:30:15.233 LIB libspdk_ut.a 00:30:15.233 CC lib/util/base64.o 00:30:15.233 CC lib/dma/dma.o 00:30:15.233 CC lib/util/bit_array.o 00:30:15.233 CC lib/util/cpuset.o 00:30:15.233 CC lib/util/crc16.o 00:30:15.233 CXX lib/trace_parser/trace.o 00:30:15.233 CC lib/util/crc32.o 00:30:15.233 CC lib/util/crc32c.o 00:30:15.233 CC lib/ioat/ioat.o 00:30:15.233 CC lib/vfio_user/host/vfio_user_pci.o 00:30:15.491 CC lib/util/crc32_ieee.o 00:30:15.491 CC lib/vfio_user/host/vfio_user.o 00:30:15.491 CC lib/util/crc64.o 00:30:15.491 CC lib/util/dif.o 00:30:15.491 LIB libspdk_dma.a 00:30:15.491 CC lib/util/fd.o 00:30:15.491 CC lib/util/file.o 00:30:15.491 CC lib/util/hexlify.o 00:30:15.491 CC lib/util/iov.o 00:30:15.491 CC lib/util/math.o 00:30:15.491 CC lib/util/pipe.o 00:30:15.491 LIB libspdk_ioat.a 00:30:15.491 LIB libspdk_vfio_user.a 00:30:15.491 CC lib/util/strerror_tls.o 00:30:15.491 CC lib/util/string.o 00:30:15.491 CC lib/util/uuid.o 00:30:15.491 CC lib/util/fd_group.o 00:30:15.749 CC lib/util/xor.o 00:30:15.749 CC lib/util/zipf.o 00:30:16.007 LIB libspdk_util.a 00:30:16.007 CC lib/rdma/common.o 00:30:16.007 CC lib/json/json_parse.o 00:30:16.007 CC lib/env_dpdk/env.o 00:30:16.007 CC lib/env_dpdk/memory.o 00:30:16.007 CC lib/rdma/rdma_verbs.o 00:30:16.007 CC lib/conf/conf.o 00:30:16.007 CC lib/json/json_util.o 00:30:16.007 CC lib/vmd/vmd.o 00:30:16.007 CC lib/idxd/idxd.o 00:30:16.278 LIB libspdk_trace_parser.a 00:30:16.278 CC lib/idxd/idxd_user.o 00:30:16.278 LIB libspdk_conf.a 00:30:16.278 CC lib/vmd/led.o 00:30:16.278 CC lib/json/json_write.o 00:30:16.278 CC lib/env_dpdk/pci.o 00:30:16.278 LIB libspdk_rdma.a 00:30:16.278 CC lib/env_dpdk/init.o 00:30:16.278 CC lib/env_dpdk/threads.o 00:30:16.278 CC lib/env_dpdk/pci_ioat.o 00:30:16.278 CC lib/env_dpdk/pci_virtio.o 00:30:16.555 CC lib/env_dpdk/pci_vmd.o 00:30:16.555 CC lib/env_dpdk/pci_idxd.o 00:30:16.555 LIB libspdk_vmd.a 00:30:16.555 CC lib/env_dpdk/pci_event.o 00:30:16.555 LIB libspdk_idxd.a 00:30:16.555 CC lib/env_dpdk/pci_dpdk.o 00:30:16.555 CC lib/env_dpdk/sigbus_handler.o 00:30:16.555 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:16.555 LIB libspdk_json.a 00:30:16.555 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:16.555 CC lib/jsonrpc/jsonrpc_server.o 00:30:16.555 CC lib/jsonrpc/jsonrpc_client.o 00:30:16.555 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:16.555 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:16.813 LIB libspdk_jsonrpc.a 00:30:17.071 CC lib/rpc/rpc.o 00:30:17.071 LIB libspdk_env_dpdk.a 00:30:17.071 LIB libspdk_rpc.a 00:30:17.329 CC lib/notify/notify.o 00:30:17.329 CC lib/notify/notify_rpc.o 00:30:17.329 CC lib/sock/sock_rpc.o 00:30:17.329 CC lib/sock/sock.o 00:30:17.329 CC lib/trace/trace.o 00:30:17.329 CC lib/trace/trace_flags.o 00:30:17.329 CC lib/trace/trace_rpc.o 00:30:17.329 LIB libspdk_notify.a 00:30:17.586 LIB libspdk_sock.a 00:30:17.586 LIB libspdk_trace.a 00:30:17.586 CC lib/thread/thread.o 00:30:17.586 CC lib/thread/iobuf.o 00:30:17.586 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:17.586 CC lib/nvme/nvme_ctrlr.o 00:30:17.586 CC lib/nvme/nvme_fabric.o 00:30:17.586 CC lib/nvme/nvme_ns_cmd.o 00:30:17.586 CC lib/nvme/nvme_ns.o 00:30:17.586 CC lib/nvme/nvme_pcie_common.o 00:30:17.586 CC lib/nvme/nvme_pcie.o 00:30:17.844 CC lib/nvme/nvme_qpair.o 00:30:17.844 CC lib/nvme/nvme.o 00:30:18.101 CC lib/nvme/nvme_quirks.o 00:30:18.359 CC 
lib/nvme/nvme_transport.o 00:30:18.359 CC lib/nvme/nvme_discovery.o 00:30:18.359 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:18.359 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:18.359 CC lib/nvme/nvme_tcp.o 00:30:18.359 LIB libspdk_thread.a 00:30:18.359 CC lib/nvme/nvme_opal.o 00:30:18.617 CC lib/nvme/nvme_io_msg.o 00:30:18.617 CC lib/nvme/nvme_poll_group.o 00:30:18.617 CC lib/accel/accel.o 00:30:18.617 CC lib/nvme/nvme_zns.o 00:30:18.874 CC lib/accel/accel_rpc.o 00:30:18.874 CC lib/nvme/nvme_cuse.o 00:30:18.874 CC lib/accel/accel_sw.o 00:30:18.874 CC lib/nvme/nvme_vfio_user.o 00:30:18.874 CC lib/nvme/nvme_rdma.o 00:30:19.132 CC lib/blob/blobstore.o 00:30:19.132 CC lib/blob/request.o 00:30:19.132 CC lib/blob/zeroes.o 00:30:19.132 CC lib/blob/blob_bs_dev.o 00:30:19.390 CC lib/init/json_config.o 00:30:19.390 CC lib/init/subsystem.o 00:30:19.390 CC lib/init/subsystem_rpc.o 00:30:19.390 CC lib/virtio/virtio.o 00:30:19.390 LIB libspdk_accel.a 00:30:19.390 CC lib/init/rpc.o 00:30:19.390 CC lib/virtio/virtio_vhost_user.o 00:30:19.390 CC lib/virtio/virtio_vfio_user.o 00:30:19.390 CC lib/virtio/virtio_pci.o 00:30:19.390 LIB libspdk_init.a 00:30:19.647 CC lib/bdev/bdev.o 00:30:19.647 CC lib/bdev/bdev_rpc.o 00:30:19.647 CC lib/bdev/bdev_zone.o 00:30:19.647 CC lib/bdev/part.o 00:30:19.647 CC lib/bdev/scsi_nvme.o 00:30:19.647 LIB libspdk_virtio.a 00:30:19.647 CC lib/event/app.o 00:30:19.647 CC lib/event/reactor.o 00:30:19.647 CC lib/event/log_rpc.o 00:30:19.647 CC lib/event/app_rpc.o 00:30:19.647 CC lib/event/scheduler_static.o 00:30:19.648 LIB libspdk_nvme.a 00:30:19.906 LIB libspdk_event.a 00:30:20.841 LIB libspdk_blob.a 00:30:20.841 CC lib/lvol/lvol.o 00:30:20.841 CC lib/blobfs/blobfs.o 00:30:20.841 CC lib/blobfs/tree.o 00:30:21.100 LIB libspdk_bdev.a 00:30:21.100 CC lib/nvmf/ctrlr_discovery.o 00:30:21.100 CC lib/nvmf/ctrlr.o 00:30:21.100 CC lib/scsi/dev.o 00:30:21.100 CC lib/scsi/lun.o 00:30:21.100 CC lib/nbd/nbd.o 00:30:21.100 CC lib/scsi/port.o 00:30:21.100 CC lib/nvmf/ctrlr_bdev.o 00:30:21.100 CC lib/ftl/ftl_core.o 00:30:21.359 CC lib/nvmf/subsystem.o 00:30:21.359 LIB libspdk_blobfs.a 00:30:21.359 LIB libspdk_lvol.a 00:30:21.359 CC lib/nvmf/nvmf.o 00:30:21.359 CC lib/nvmf/nvmf_rpc.o 00:30:21.359 CC lib/nvmf/transport.o 00:30:21.359 CC lib/scsi/scsi.o 00:30:21.617 CC lib/nvmf/tcp.o 00:30:21.617 CC lib/nbd/nbd_rpc.o 00:30:21.617 CC lib/ftl/ftl_init.o 00:30:21.617 CC lib/ftl/ftl_layout.o 00:30:21.617 CC lib/scsi/scsi_bdev.o 00:30:21.617 LIB libspdk_nbd.a 00:30:21.617 CC lib/scsi/scsi_pr.o 00:30:21.617 CC lib/nvmf/rdma.o 00:30:21.617 CC lib/ftl/ftl_debug.o 00:30:21.876 CC lib/ftl/ftl_io.o 00:30:21.876 CC lib/scsi/scsi_rpc.o 00:30:21.876 CC lib/ftl/ftl_sb.o 00:30:21.876 CC lib/scsi/task.o 00:30:21.876 CC lib/ftl/ftl_l2p.o 00:30:21.876 CC lib/ftl/ftl_l2p_flat.o 00:30:21.876 CC lib/ftl/ftl_nv_cache.o 00:30:22.135 CC lib/ftl/ftl_band.o 00:30:22.135 CC lib/ftl/ftl_band_ops.o 00:30:22.135 CC lib/ftl/ftl_writer.o 00:30:22.135 CC lib/ftl/ftl_rq.o 00:30:22.135 LIB libspdk_scsi.a 00:30:22.135 CC lib/ftl/ftl_reloc.o 00:30:22.135 CC lib/ftl/ftl_l2p_cache.o 00:30:22.135 CC lib/ftl/ftl_p2l.o 00:30:22.135 CC lib/ftl/mngt/ftl_mngt.o 00:30:22.135 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:22.394 CC lib/iscsi/conn.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:22.394 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:22.394 CC lib/vhost/vhost.o 00:30:22.652 
CC lib/vhost/vhost_rpc.o 00:30:22.652 CC lib/vhost/vhost_scsi.o 00:30:22.652 CC lib/vhost/vhost_blk.o 00:30:22.652 CC lib/vhost/rte_vhost_user.o 00:30:22.652 CC lib/iscsi/init_grp.o 00:30:22.652 CC lib/iscsi/iscsi.o 00:30:22.652 CC lib/ftl/mngt/ftl_mngt_band.o 00:30:22.652 LIB libspdk_nvmf.a 00:30:22.911 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:22.911 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:30:22.911 CC lib/iscsi/md5.o 00:30:22.911 CC lib/iscsi/param.o 00:30:22.911 CC lib/iscsi/portal_grp.o 00:30:22.911 CC lib/iscsi/tgt_node.o 00:30:22.911 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:22.911 CC lib/iscsi/iscsi_subsystem.o 00:30:22.911 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:23.169 CC lib/ftl/utils/ftl_conf.o 00:30:23.169 CC lib/iscsi/iscsi_rpc.o 00:30:23.169 CC lib/iscsi/task.o 00:30:23.169 CC lib/ftl/utils/ftl_md.o 00:30:23.428 CC lib/ftl/utils/ftl_mempool.o 00:30:23.428 CC lib/ftl/utils/ftl_bitmap.o 00:30:23.428 CC lib/ftl/utils/ftl_property.o 00:30:23.428 LIB libspdk_vhost.a 00:30:23.428 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:23.428 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:23.428 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:23.428 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:23.428 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:23.428 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:30:23.428 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:23.428 LIB libspdk_iscsi.a 00:30:23.428 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:23.686 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:23.686 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:23.686 CC lib/ftl/base/ftl_base_dev.o 00:30:23.686 CC lib/ftl/base/ftl_base_bdev.o 00:30:23.945 LIB libspdk_ftl.a 00:30:23.945 CC module/env_dpdk/env_dpdk_rpc.o 00:30:23.945 CC module/accel/ioat/accel_ioat.o 00:30:23.945 CC module/accel/error/accel_error.o 00:30:23.945 CC module/blob/bdev/blob_bdev.o 00:30:23.945 CC module/sock/posix/posix.o 00:30:23.945 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:23.945 CC module/accel/iaa/accel_iaa.o 00:30:24.203 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:24.203 CC module/accel/dsa/accel_dsa.o 00:30:24.203 CC module/scheduler/gscheduler/gscheduler.o 00:30:24.203 LIB libspdk_env_dpdk_rpc.a 00:30:24.203 CC module/accel/dsa/accel_dsa_rpc.o 00:30:24.203 LIB libspdk_scheduler_dpdk_governor.a 00:30:24.203 CC module/accel/error/accel_error_rpc.o 00:30:24.203 LIB libspdk_scheduler_gscheduler.a 00:30:24.203 CC module/accel/ioat/accel_ioat_rpc.o 00:30:24.203 CC module/accel/iaa/accel_iaa_rpc.o 00:30:24.203 LIB libspdk_scheduler_dynamic.a 00:30:24.203 LIB libspdk_blob_bdev.a 00:30:24.203 LIB libspdk_accel_dsa.a 00:30:24.203 LIB libspdk_accel_error.a 00:30:24.462 LIB libspdk_accel_iaa.a 00:30:24.462 LIB libspdk_accel_ioat.a 00:30:24.462 CC module/bdev/delay/vbdev_delay.o 00:30:24.462 CC module/bdev/malloc/bdev_malloc.o 00:30:24.462 CC module/bdev/lvol/vbdev_lvol.o 00:30:24.462 CC module/blobfs/bdev/blobfs_bdev.o 00:30:24.462 CC module/bdev/error/vbdev_error.o 00:30:24.462 CC module/bdev/gpt/gpt.o 00:30:24.462 CC module/bdev/passthru/vbdev_passthru.o 00:30:24.462 CC module/bdev/null/bdev_null.o 00:30:24.462 CC module/bdev/nvme/bdev_nvme.o 00:30:24.462 LIB libspdk_sock_posix.a 00:30:24.462 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:24.462 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:24.462 CC module/bdev/gpt/vbdev_gpt.o 00:30:24.720 CC module/bdev/error/vbdev_error_rpc.o 00:30:24.720 CC module/bdev/null/bdev_null_rpc.o 00:30:24.720 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:24.720 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:24.720 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:30:24.720 LIB libspdk_blobfs_bdev.a 00:30:24.720 LIB libspdk_bdev_passthru.a 00:30:24.720 LIB libspdk_bdev_error.a 00:30:24.720 LIB libspdk_bdev_gpt.a 00:30:24.720 LIB libspdk_bdev_null.a 00:30:24.979 LIB libspdk_bdev_malloc.a 00:30:24.979 CC module/bdev/raid/bdev_raid.o 00:30:24.979 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:24.979 CC module/bdev/split/vbdev_split.o 00:30:24.979 LIB libspdk_bdev_delay.a 00:30:24.979 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:24.979 CC module/bdev/aio/bdev_aio.o 00:30:24.979 CC module/bdev/aio/bdev_aio_rpc.o 00:30:24.979 CC module/bdev/ftl/bdev_ftl.o 00:30:24.979 CC module/bdev/iscsi/bdev_iscsi.o 00:30:24.979 LIB libspdk_bdev_lvol.a 00:30:24.979 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:24.979 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:30:24.979 CC module/bdev/split/vbdev_split_rpc.o 00:30:25.237 LIB libspdk_bdev_aio.a 00:30:25.237 LIB libspdk_bdev_zone_block.a 00:30:25.237 CC module/bdev/nvme/bdev_nvme_rpc.o 00:30:25.237 CC module/bdev/nvme/nvme_rpc.o 00:30:25.237 CC module/bdev/nvme/bdev_mdns_client.o 00:30:25.237 CC module/bdev/nvme/vbdev_opal.o 00:30:25.237 LIB libspdk_bdev_ftl.a 00:30:25.237 LIB libspdk_bdev_split.a 00:30:25.237 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:25.237 LIB libspdk_bdev_iscsi.a 00:30:25.237 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:25.237 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:25.237 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:25.237 CC module/bdev/raid/bdev_raid_rpc.o 00:30:25.495 CC module/bdev/raid/bdev_raid_sb.o 00:30:25.495 CC module/bdev/raid/raid0.o 00:30:25.495 CC module/bdev/raid/raid1.o 00:30:25.495 CC module/bdev/raid/concat.o 00:30:25.495 CC module/bdev/raid/raid5f.o 00:30:25.495 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:25.753 LIB libspdk_bdev_virtio.a 00:30:25.753 LIB libspdk_bdev_raid.a 00:30:25.753 LIB libspdk_bdev_nvme.a 00:30:26.012 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:26.012 CC module/event/subsystems/iobuf/iobuf.o 00:30:26.012 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:26.012 CC module/event/subsystems/sock/sock.o 00:30:26.012 CC module/event/subsystems/scheduler/scheduler.o 00:30:26.012 CC module/event/subsystems/vmd/vmd.o 00:30:26.012 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:26.269 LIB libspdk_event_vhost_blk.a 00:30:26.269 LIB libspdk_event_sock.a 00:30:26.269 LIB libspdk_event_scheduler.a 00:30:26.269 LIB libspdk_event_iobuf.a 00:30:26.270 LIB libspdk_event_vmd.a 00:30:26.270 CC module/event/subsystems/accel/accel.o 00:30:26.528 LIB libspdk_event_accel.a 00:30:26.528 CC module/event/subsystems/bdev/bdev.o 00:30:26.787 LIB libspdk_event_bdev.a 00:30:26.787 CC module/event/subsystems/nbd/nbd.o 00:30:26.787 CC module/event/subsystems/scsi/scsi.o 00:30:26.787 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:26.787 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:27.046 LIB libspdk_event_scsi.a 00:30:27.046 LIB libspdk_event_nbd.a 00:30:27.046 LIB libspdk_event_nvmf.a 00:30:27.046 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:30:27.046 CC module/event/subsystems/iscsi/iscsi.o 00:30:27.305 LIB libspdk_event_vhost_scsi.a 00:30:27.305 LIB libspdk_event_iscsi.a 00:30:27.564 CXX app/trace/trace.o 00:30:27.564 CC examples/ioat/perf/perf.o 00:30:27.564 CC examples/nvme/hello_world/hello_world.o 00:30:27.564 CC test/blobfs/mkfs/mkfs.o 00:30:27.564 CC examples/blob/hello_world/hello_blob.o 00:30:27.564 CC examples/accel/perf/accel_perf.o 00:30:27.564 CC examples/bdev/hello_world/hello_bdev.o 00:30:27.564 CC 
test/app/bdev_svc/bdev_svc.o 00:30:27.564 CC test/accel/dif/dif.o 00:30:27.564 CC test/bdev/bdevio/bdevio.o 00:30:27.823 LINK ioat_perf 00:30:27.823 LINK mkfs 00:30:27.823 LINK hello_world 00:30:27.823 LINK bdev_svc 00:30:27.823 LINK hello_blob 00:30:27.823 LINK hello_bdev 00:30:27.823 LINK accel_perf 00:30:27.823 LINK dif 00:30:28.081 LINK bdevio 00:30:28.081 LINK spdk_trace 00:30:36.201 CC examples/ioat/verify/verify.o 00:30:37.573 LINK verify 00:30:38.140 CC app/trace_record/trace_record.o 00:30:40.035 LINK spdk_trace_record 00:30:52.263 CC app/nvmf_tgt/nvmf_main.o 00:30:53.641 LINK nvmf_tgt 00:30:55.540 CC app/iscsi_tgt/iscsi_tgt.o 00:30:56.476 LINK iscsi_tgt 00:30:59.007 CC examples/bdev/bdevperf/bdevperf.o 00:30:59.007 CC examples/nvme/reconnect/reconnect.o 00:31:00.405 LINK reconnect 00:31:01.777 LINK bdevperf 00:31:57.998 CC examples/nvme/nvme_manage/nvme_manage.o 00:31:57.998 LINK nvme_manage 00:32:54.250 CC examples/nvme/arbitration/arbitration.o 00:32:54.250 LINK arbitration 00:32:56.794 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:32:57.053 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:32:57.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:32:57.879 CC examples/blob/cli/blobcli.o 00:32:58.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:32:58.445 LINK nvme_fuzz 00:32:59.011 CC examples/nvme/hotplug/hotplug.o 00:32:59.576 LINK blobcli 00:33:00.142 LINK vhost_fuzz 00:33:00.142 LINK hotplug 00:33:01.079 TEST_HEADER include/spdk/config.h 00:33:01.079 CXX test/cpp_headers/accel_module.o 00:33:02.015 LINK iscsi_fuzz 00:33:02.015 CXX test/cpp_headers/bit_pool.o 00:33:03.012 CXX test/cpp_headers/ioat.o 00:33:04.388 CXX test/cpp_headers/blobfs.o 00:33:05.763 CXX test/cpp_headers/notify.o 00:33:07.135 CXX test/cpp_headers/pipe.o 00:33:08.511 CXX test/cpp_headers/accel.o 00:33:09.884 CXX test/cpp_headers/file.o 00:33:11.259 CXX test/cpp_headers/version.o 00:33:11.940 CXX test/cpp_headers/trace_parser.o 00:33:13.316 CXX test/cpp_headers/opal_spec.o 00:33:14.692 CXX test/cpp_headers/uuid.o 00:33:16.067 CXX test/cpp_headers/likely.o 00:33:17.441 CXX test/cpp_headers/dif.o 00:33:19.342 CXX test/cpp_headers/memory.o 00:33:20.713 CXX test/cpp_headers/vfio_user_pci.o 00:33:22.083 CXX test/cpp_headers/dma.o 00:33:23.459 CXX test/cpp_headers/nbd.o 00:33:23.459 CXX test/cpp_headers/conf.o 00:33:24.835 CXX test/cpp_headers/env_dpdk.o 00:33:26.210 CXX test/cpp_headers/nvmf_spec.o 00:33:27.587 CXX test/cpp_headers/iscsi_spec.o 00:33:28.964 CXX test/cpp_headers/mmio.o 00:33:29.531 CC test/dma/test_dma/test_dma.o 00:33:30.118 CXX test/cpp_headers/json.o 00:33:31.514 CC test/env/mem_callbacks/mem_callbacks.o 00:33:31.514 CXX test/cpp_headers/opal.o 00:33:32.081 LINK test_dma 00:33:32.648 LINK mem_callbacks 00:33:32.907 CXX test/cpp_headers/bdev.o 00:33:34.283 CXX test/cpp_headers/base64.o 00:33:35.218 CXX test/cpp_headers/blobfs_bdev.o 00:33:36.153 CXX test/cpp_headers/nvme_ocssd.o 00:33:37.088 CXX test/cpp_headers/fd.o 00:33:37.088 CC test/env/vtophys/vtophys.o 00:33:37.654 LINK vtophys 00:33:37.913 CXX test/cpp_headers/barrier.o 00:33:38.479 CXX test/cpp_headers/scsi_spec.o 00:33:38.736 CC examples/nvme/cmb_copy/cmb_copy.o 00:33:39.300 CXX test/cpp_headers/zipf.o 00:33:39.558 LINK cmb_copy 00:33:39.815 CXX test/cpp_headers/nvmf.o 00:33:40.073 CC test/app/histogram_perf/histogram_perf.o 00:33:40.073 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:33:40.330 CXX test/cpp_headers/queue.o 00:33:40.588 LINK env_dpdk_post_init 00:33:40.588 LINK histogram_perf 00:33:40.588 CXX test/cpp_headers/xor.o 
00:33:41.154 CXX test/cpp_headers/cpuset.o 00:33:41.720 CXX test/cpp_headers/thread.o 00:33:42.653 CXX test/cpp_headers/bdev_zone.o 00:33:42.653 CC examples/sock/hello_world/hello_sock.o 00:33:43.587 CXX test/cpp_headers/fd_group.o 00:33:43.845 LINK hello_sock 00:33:44.779 CXX test/cpp_headers/tree.o 00:33:44.779 CXX test/cpp_headers/blob_bdev.o 00:33:45.715 CC examples/vmd/lsvmd/lsvmd.o 00:33:45.974 CXX test/cpp_headers/crc64.o 00:33:46.540 LINK lsvmd 00:33:46.798 CXX test/cpp_headers/assert.o 00:33:47.364 CC examples/nvmf/nvmf/nvmf.o 00:33:47.938 CXX test/cpp_headers/nvme_spec.o 00:33:48.197 CC app/spdk_tgt/spdk_tgt.o 00:33:48.762 LINK nvmf 00:33:49.019 CXX test/cpp_headers/endian.o 00:33:49.585 LINK spdk_tgt 00:33:50.151 CXX test/cpp_headers/pci_ids.o 00:33:50.719 CXX test/cpp_headers/log.o 00:33:51.655 CXX test/cpp_headers/nvme_ocssd_spec.o 00:33:51.914 CC test/app/jsoncat/jsoncat.o 00:33:52.849 LINK jsoncat 00:33:53.107 CXX test/cpp_headers/ftl.o 00:33:54.484 CXX test/cpp_headers/config.o 00:33:54.743 CXX test/cpp_headers/vhost.o 00:33:55.678 CXX test/cpp_headers/bdev_module.o 00:33:57.055 CXX test/cpp_headers/nvme_intel.o 00:33:58.008 CXX test/cpp_headers/idxd_spec.o 00:33:58.950 CXX test/cpp_headers/crc16.o 00:33:59.885 CXX test/cpp_headers/nvme.o 00:34:00.820 CXX test/cpp_headers/stdinc.o 00:34:01.386 CXX test/cpp_headers/scsi.o 00:34:02.321 CC test/event/event_perf/event_perf.o 00:34:02.321 CXX test/cpp_headers/nvmf_fc_spec.o 00:34:02.887 LINK event_perf 00:34:03.144 CXX test/cpp_headers/idxd.o 00:34:03.711 CC test/env/memory/memory_ut.o 00:34:03.968 CXX test/cpp_headers/hexlify.o 00:34:04.225 CC examples/nvme/abort/abort.o 00:34:04.791 CXX test/cpp_headers/reduce.o 00:34:05.100 LINK memory_ut 00:34:05.665 CXX test/cpp_headers/crc32.o 00:34:05.665 LINK abort 00:34:06.597 CC test/app/stub/stub.o 00:34:06.855 CXX test/cpp_headers/init.o 00:34:07.789 LINK stub 00:34:07.789 CXX test/cpp_headers/nvmf_transport.o 00:34:09.189 CXX test/cpp_headers/nvme_zns.o 00:34:10.565 CC examples/vmd/led/led.o 00:34:10.565 CXX test/cpp_headers/vfio_user_spec.o 00:34:11.501 LINK led 00:34:11.759 CXX test/cpp_headers/util.o 00:34:12.695 CXX test/cpp_headers/jsonrpc.o 00:34:14.077 CXX test/cpp_headers/env.o 00:34:15.455 CXX test/cpp_headers/nvmf_cmd.o 00:34:16.022 CC test/env/pci/pci_ut.o 00:34:16.959 CXX test/cpp_headers/lvol.o 00:34:17.897 LINK pci_ut 00:34:18.157 CXX test/cpp_headers/histogram_data.o 00:34:18.747 CC test/event/reactor/reactor.o 00:34:19.680 LINK reactor 00:34:19.680 CXX test/cpp_headers/event.o 00:34:21.057 CXX test/cpp_headers/trace.o 00:34:22.433 CXX test/cpp_headers/ioat_spec.o 00:34:23.810 CXX test/cpp_headers/string.o 00:34:25.186 CXX test/cpp_headers/ublk.o 00:34:26.562 CXX test/cpp_headers/bit_array.o 00:34:27.938 CXX test/cpp_headers/scheduler.o 00:34:29.838 CXX test/cpp_headers/blob.o 00:34:31.215 CXX test/cpp_headers/gpt_spec.o 00:34:32.591 CC test/lvol/esnap/esnap.o 00:34:32.592 CXX test/cpp_headers/sock.o 00:34:33.968 CXX test/cpp_headers/vmd.o 00:34:35.344 CXX test/cpp_headers/rpc.o 00:34:37.247 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:34:38.625 LINK pmr_persistence 00:34:42.814 CC test/event/reactor_perf/reactor_perf.o 00:34:43.072 LINK reactor_perf 00:34:46.359 CC examples/util/zipf/zipf.o 00:34:46.359 CC examples/thread/thread/thread_ex.o 00:34:46.926 LINK zipf 00:34:47.862 LINK thread 00:34:50.391 CC test/nvme/aer/aer.o 00:34:50.391 LINK esnap 00:34:51.368 LINK aer 00:34:52.300 CC examples/idxd/perf/perf.o 00:34:52.865 CC 
test/event/app_repeat/app_repeat.o 00:34:53.796 LINK app_repeat 00:34:54.054 LINK idxd_perf 00:35:00.607 CC test/rpc_client/rpc_client_test.o 00:35:01.173 LINK rpc_client_test 00:35:13.437 CC test/thread/poller_perf/poller_perf.o 00:35:13.437 CC test/thread/lock/spdk_lock.o 00:35:14.004 LINK poller_perf 00:35:20.568 LINK spdk_lock 00:35:23.850 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:24.785 LINK interrupt_tgt 00:35:28.972 CC test/nvme/reset/reset.o 00:35:28.972 CC test/nvme/sgl/sgl.o 00:35:29.907 LINK sgl 00:35:29.907 LINK reset 00:35:30.862 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:35:31.444 LINK histogram_ut 00:35:33.346 CC test/unit/lib/accel/accel.c/accel_ut.o 00:35:34.281 CC test/event/scheduler/scheduler.o 00:35:35.654 LINK scheduler 00:35:36.219 CC test/nvme/e2edp/nvme_dp.o 00:35:37.592 CC test/nvme/overhead/overhead.o 00:35:37.592 LINK nvme_dp 00:35:38.966 LINK overhead 00:35:42.327 LINK accel_ut 00:36:14.394 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:36:14.394 CC test/nvme/err_injection/err_injection.o 00:36:14.394 CC test/nvme/startup/startup.o 00:36:14.394 LINK err_injection 00:36:14.394 LINK startup 00:36:18.582 CC test/nvme/reserve/reserve.o 00:36:19.147 CC test/unit/lib/bdev/part.c/part_ut.o 00:36:19.147 LINK reserve 00:36:22.427 CC test/nvme/simple_copy/simple_copy.o 00:36:23.802 LINK simple_copy 00:36:30.368 LINK bdev_ut 00:36:33.651 LINK part_ut 00:36:45.851 CC app/spdk_lspci/spdk_lspci.o 00:36:46.416 LINK spdk_lspci 00:36:47.791 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:36:49.174 LINK scsi_nvme_ut 00:36:50.107 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:36:50.107 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:36:50.672 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:36:51.607 LINK gpt_ut 00:36:52.539 LINK vbdev_lvol_ut 00:36:53.105 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:36:53.363 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:36:54.297 LINK bdev_zone_ut 00:36:54.297 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:36:55.671 CC test/nvme/connect_stress/connect_stress.o 00:36:55.928 LINK vbdev_zone_block_ut 00:36:55.928 LINK bdev_raid_ut 00:36:56.494 LINK connect_stress 00:36:57.866 LINK bdev_ut 00:36:57.866 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:36:58.124 CC app/spdk_nvme_perf/perf.o 00:37:00.650 CC app/spdk_nvme_identify/identify.o 00:37:01.214 LINK spdk_nvme_perf 00:37:03.743 LINK spdk_nvme_identify 00:37:06.272 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:37:06.272 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:37:07.207 LINK bdev_raid_sb_ut 00:37:07.774 CC app/spdk_nvme_discover/discovery_aer.o 00:37:07.774 LINK concat_ut 00:37:08.709 LINK bdev_nvme_ut 00:37:08.967 LINK spdk_nvme_discover 00:37:11.497 CC app/spdk_top/spdk_top.o 00:37:14.027 CC app/vhost/vhost.o 00:37:14.960 LINK vhost 00:37:15.217 LINK spdk_top 00:37:16.593 CC app/spdk_dd/spdk_dd.o 00:37:18.497 LINK spdk_dd 00:37:21.038 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:37:23.587 LINK raid1_ut 00:37:23.587 CC test/nvme/boot_partition/boot_partition.o 00:37:24.521 LINK boot_partition 00:37:28.708 CC test/nvme/compliance/nvme_compliance.o 00:37:29.642 LINK nvme_compliance 00:37:30.577 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:37:30.835 CC app/fio/nvme/fio_plugin.o 00:37:32.230 LINK blob_bdev_ut 00:37:32.230 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:37:32.489 LINK spdk_nvme 00:37:33.865 CC app/fio/bdev/fio_plugin.o 00:37:35.241 LINK raid5f_ut 00:37:35.808 LINK spdk_bdev 
00:37:36.375 CC test/nvme/fused_ordering/fused_ordering.o 00:37:37.311 LINK fused_ordering 00:37:39.213 CC test/unit/lib/blob/blob.c/blob_ut.o 00:37:43.397 CC test/nvme/doorbell_aers/doorbell_aers.o 00:37:44.771 LINK doorbell_aers 00:37:48.116 CC test/nvme/fdp/fdp.o 00:37:50.038 LINK fdp 00:37:54.226 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:37:54.793 LINK tree_ut 00:38:01.354 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:38:03.888 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:38:04.147 LINK blob_ut 00:38:06.677 LINK blobfs_async_ut 00:38:09.962 LINK blobfs_sync_ut 00:38:22.156 CC test/unit/lib/dma/dma.c/dma_ut.o 00:38:22.156 LINK dma_ut 00:38:26.345 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:38:26.345 CC test/unit/lib/event/app.c/app_ut.o 00:38:28.248 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:38:28.248 LINK app_ut 00:38:28.814 LINK reactor_ut 00:38:29.073 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:38:29.331 LINK ioat_ut 00:38:29.897 CC test/nvme/cuse/cuse.o 00:38:30.156 LINK blobfs_bdev_ut 00:38:31.533 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:38:32.468 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:38:32.468 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:38:32.725 LINK cuse 00:38:32.983 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:38:33.549 LINK init_grp_ut 00:38:34.484 LINK conn_ut 00:38:35.049 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:38:35.980 LINK json_util_ut 00:38:36.913 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:38:37.478 LINK iscsi_ut 00:38:37.736 LINK json_parse_ut 00:38:37.994 LINK jsonrpc_server_ut 00:38:38.569 CC test/unit/lib/iscsi/param.c/param_ut.o 00:38:40.474 LINK param_ut 00:38:40.733 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:38:43.310 LINK portal_grp_ut 00:38:46.693 CC test/unit/lib/log/log.c/log_ut.o 00:38:47.259 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:38:47.826 LINK log_ut 00:38:48.085 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:38:49.468 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:38:49.468 LINK json_write_ut 00:38:50.404 LINK tgt_node_ut 00:38:50.663 CC test/unit/lib/notify/notify.c/notify_ut.o 00:38:52.040 LINK notify_ut 00:38:52.040 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:38:52.608 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:38:53.984 LINK lvol_ut 00:38:55.887 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:38:55.887 LINK nvme_ut 00:38:56.822 LINK dev_ut 00:38:57.081 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:38:57.339 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:38:57.906 LINK scsi_ut 00:38:58.474 LINK lun_ut 00:38:59.406 LINK tcp_ut 00:38:59.973 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:38:59.973 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:39:00.539 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:39:01.105 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:39:02.037 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:39:02.972 LINK subsystem_ut 00:39:02.972 LINK ctrlr_ut 00:39:02.972 LINK ctrlr_discovery_ut 00:39:03.540 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:39:03.799 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:39:03.799 LINK scsi_bdev_ut 00:39:05.176 LINK scsi_pr_ut 00:39:05.743 LINK ctrlr_bdev_ut 00:39:06.311 LINK nvme_ctrlr_ut 00:39:09.733 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:39:09.733 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:39:10.301 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:39:10.301 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:39:10.301 CC 
test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:39:10.301 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:39:10.301 LINK nvmf_ut 00:39:10.868 LINK nvme_ctrlr_ocssd_cmd_ut 00:39:10.868 LINK nvme_ctrlr_cmd_ut 00:39:11.127 LINK nvme_ns_ut 00:39:11.386 CC test/unit/lib/sock/sock.c/sock_ut.o 00:39:11.386 LINK rdma_ut 00:39:11.386 CC test/unit/lib/thread/thread.c/thread_ut.o 00:39:11.386 LINK transport_ut 00:39:11.645 CC test/unit/lib/util/base64.c/base64_ut.o 00:39:11.903 LINK base64_ut 00:39:12.162 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:39:12.421 LINK sock_ut 00:39:12.680 LINK thread_ut 00:39:12.939 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:39:13.198 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:39:13.198 LINK bit_array_ut 00:39:14.135 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:39:14.393 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:39:14.652 LINK cpuset_ut 00:39:14.910 LINK crc16_ut 00:39:16.286 LINK nvme_ns_ocssd_cmd_ut 00:39:16.286 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:39:16.286 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:39:16.544 LINK nvme_ns_cmd_ut 00:39:16.544 LINK crc32_ieee_ut 00:39:16.544 LINK crc32c_ut 00:39:16.544 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:39:16.803 LINK crc64_ut 00:39:17.061 CC test/unit/lib/sock/posix.c/posix_ut.o 00:39:17.320 CC test/unit/lib/util/dif.c/dif_ut.o 00:39:17.320 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:39:17.320 CC test/unit/lib/util/iov.c/iov_ut.o 00:39:17.579 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:39:17.579 LINK posix_ut 00:39:17.837 LINK iobuf_ut 00:39:17.837 LINK iov_ut 00:39:17.837 LINK pci_event_ut 00:39:18.096 CC test/unit/lib/util/math.c/math_ut.o 00:39:18.096 LINK dif_ut 00:39:18.355 LINK math_ut 00:39:18.613 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:39:19.179 LINK subsystem_ut 00:39:19.437 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:39:19.695 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:39:19.695 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:39:19.953 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:39:19.953 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:39:20.212 LINK idxd_user_ut 00:39:20.212 LINK pipe_ut 00:39:20.212 LINK rpc_ut 00:39:20.212 CC test/unit/lib/util/xor.c/xor_ut.o 00:39:20.212 CC test/unit/lib/util/string.c/string_ut.o 00:39:20.470 LINK nvme_pcie_ut 00:39:20.470 LINK string_ut 00:39:20.470 LINK xor_ut 00:39:21.406 LINK vhost_ut 00:39:21.406 CC test/unit/lib/rdma/common.c/common_ut.o 00:39:21.406 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:39:21.406 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:39:21.406 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:39:21.406 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:39:21.406 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:39:21.673 LINK ftl_l2p_ut 00:39:21.673 LINK common_ut 00:39:21.938 LINK idxd_ut 00:39:21.938 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:39:22.196 LINK nvme_poll_group_ut 00:39:22.454 LINK ftl_band_ut 00:39:22.454 LINK nvme_qpair_ut 00:39:22.454 LINK nvme_quirks_ut 00:39:23.020 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:39:23.020 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:39:23.278 LINK ftl_bitmap_ut 00:39:23.278 LINK ftl_io_ut 00:39:23.536 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:39:23.536 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:39:23.536 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:39:23.794 LINK ftl_mempool_ut 00:39:24.360 CC 
test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:39:24.361 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:39:24.619 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:39:24.619 LINK nvme_transport_ut 00:39:24.619 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:39:24.619 LINK ftl_mngt_ut 00:39:24.879 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:39:24.879 LINK nvme_tcp_ut 00:39:25.138 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:39:25.396 LINK ftl_sb_ut 00:39:25.396 LINK nvme_io_msg_ut 00:39:25.396 LINK ftl_layout_upgrade_ut 00:39:25.656 LINK nvme_pcie_common_ut 00:39:25.914 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:39:25.914 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:39:25.914 LINK nvme_fabric_ut 00:39:26.850 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:39:26.850 LINK nvme_opal_ut 00:39:27.109 LINK nvme_rdma_ut 00:39:27.675 LINK nvme_cuse_ut 00:40:35.359 03:03:53 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:40:35.359 make[1]: Nothing to be done for 'clean'. 00:40:35.359 03:03:57 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:40:35.359 03:03:57 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:40:35.359 03:03:57 -- common/autotest_common.sh@10 -- $ set +x 00:40:35.359 03:03:57 -- spdk/autopackage.sh@48 -- $ timing_finish 00:40:35.359 03:03:57 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:35.359 03:03:57 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:40:35.359 03:03:57 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:35.359 + [[ -n 2525 ]] 00:40:35.359 + sudo kill 2525 00:40:35.359 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:40:35.366 [Pipeline] } 00:40:35.383 [Pipeline] // timeout 00:40:35.387 [Pipeline] } 00:40:35.402 [Pipeline] // stage 00:40:35.406 [Pipeline] } 00:40:35.418 [Pipeline] // catchError 00:40:35.424 [Pipeline] stage 00:40:35.426 [Pipeline] { (Stop VM) 00:40:35.435 [Pipeline] sh 00:40:35.709 + vagrant halt 00:40:38.993 ==> default: Halting domain... 00:40:47.119 [Pipeline] sh 00:40:47.399 + vagrant destroy -f 00:40:50.684 ==> default: Removing domain... 00:40:51.653 [Pipeline] sh 00:40:51.996 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 00:40:52.005 [Pipeline] } 00:40:52.024 [Pipeline] // stage 00:40:52.029 [Pipeline] } 00:40:52.047 [Pipeline] // dir 00:40:52.052 [Pipeline] } 00:40:52.069 [Pipeline] // wrap 00:40:52.075 [Pipeline] } 00:40:52.091 [Pipeline] // catchError 00:40:52.100 [Pipeline] stage 00:40:52.103 [Pipeline] { (Epilogue) 00:40:52.118 [Pipeline] sh 00:40:52.397 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:10.490 [Pipeline] catchError 00:41:10.492 [Pipeline] { 00:41:10.508 [Pipeline] sh 00:41:10.789 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:10.789 Artifacts sizes are good 00:41:10.799 [Pipeline] } 00:41:10.817 [Pipeline] // catchError 00:41:10.828 [Pipeline] archiveArtifacts 00:41:10.844 Archiving artifacts 00:41:11.265 [Pipeline] cleanWs 00:41:11.276 [WS-CLEANUP] Deleting project workspace... 00:41:11.276 [WS-CLEANUP] Deferred wipeout is used... 
00:41:11.282 [WS-CLEANUP] done 00:41:11.285 [Pipeline] } 00:41:11.306 [Pipeline] // stage 00:41:11.312 [Pipeline] } 00:41:11.329 [Pipeline] // node 00:41:11.335 [Pipeline] End of Pipeline 00:41:11.374 Finished: SUCCESS
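
The [Pipeline] trace above ends with the job's teardown: the test VM is halted and destroyed, the output directory is moved into the workspace, artifacts are compressed and size-checked, results are archived, and the workspace is wiped. Below is a minimal sketch of that teardown as a standalone script. The commands, paths, and ordering are taken directly from the log; the surrounding scaffolding (shebang, error-handling flags) is an assumption, not the actual shm_lib pipeline implementation.

#!/usr/bin/env bash
# Sketch of the teardown sequence traced in the log above. Commands and
# paths come from the log itself; set flags and structure are assumed.
set -euo pipefail

# "(Stop VM)" stage: shut the vagrant domain down, then remove it outright.
vagrant halt
vagrant destroy -f

# Move test results to the workspace path the archiver reads from.
mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output

# "(Epilogue)" stage: compress artifacts, then verify their total size
# (the log reports "Artifacts sizes are good" when this passes).
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh

# archiveArtifacts and cleanWs are Jenkins pipeline steps with no direct
# shell equivalent; the log shows them running after this point, with the
# workspace wipe deliberately last so the archived copies survive.

Note that the ordering matters: the VM is destroyed before results are moved and archived, and the deferred workspace wipeout shown in the [WS-CLEANUP] lines runs only after archiving succeeds.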