00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1715
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2976
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.118 The recommended git tool is: git
00:00:00.118 using credential 00000000-0000-0000-0000-000000000002
00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.169 Fetching changes from the remote Git repository
00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.239 > git --version # 'git version 2.39.2'
00:00:00.239 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.330 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.340 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.351 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD)
00:00:07.351 > git config core.sparsecheckout # timeout=10
00:00:07.359 > git read-tree -mu HEAD # timeout=10
00:00:07.374 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5
00:00:07.406 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes"
00:00:07.406 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10
00:00:07.488 [Pipeline] Start of Pipeline
00:00:07.500 [Pipeline] library
00:00:07.501 Loading library shm_lib@master
00:00:07.502 Library shm_lib@master is cached. Copying from home.
00:00:07.515 [Pipeline] node
00:00:07.528 Running on VM-host-WFP7 in /var/jenkins/workspace/freebsd-vg-autotest
00:00:07.529 [Pipeline] {
00:00:07.537 [Pipeline] catchError
00:00:07.538 [Pipeline] {
00:00:07.547 [Pipeline] wrap
00:00:07.553 [Pipeline] {
00:00:07.560 [Pipeline] stage
00:00:07.562 [Pipeline] { (Prologue)
00:00:07.580 [Pipeline] echo
00:00:07.582 Node: VM-host-WFP7
00:00:07.588 [Pipeline] cleanWs
00:00:07.597 [WS-CLEANUP] Deleting project workspace...
00:00:07.597 [WS-CLEANUP] Deferred wipeout is used...
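The checkout above pins the jbp repo to a single revision with a depth-1 fetch. The same sequence can be replayed outside Jenkins; a minimal sketch, assuming the Gerrit mirror allows anonymous reads (the CI itself authenticates through GIT_ASKPASS and an HTTP proxy):

REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REV=d55dd09e9e6d4661df5d1073790609767cbcb60c
git init jbp && cd jbp
# Depth-1 fetch of master, mirroring the "git fetch" step logged above.
git fetch --tags --force --progress --depth=1 -- "$REPO" refs/heads/master
# Detached checkout of the pinned revision; this works here because
# $REV is exactly the FETCH_HEAD the shallow fetch brought in.
git checkout -f "$REV"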
00:00:07.603 [WS-CLEANUP] done
00:00:07.757 [Pipeline] setCustomBuildProperty
00:00:07.834 [Pipeline] nodesByLabel
00:00:07.835 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.845 [Pipeline] httpRequest
00:00:07.849 HttpMethod: GET
00:00:07.849 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.850 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.866 Response Code: HTTP/1.1 200 OK
00:00:07.867 Success: Status code 200 is in the accepted range: 200,404
00:00:07.867 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:13.551 [Pipeline] sh
00:00:13.835 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:13.855 [Pipeline] httpRequest
00:00:13.860 HttpMethod: GET
00:00:13.861 URL: http://10.211.164.101/packages/spdk_4b134b4abdb5f2f6eeebb3eae1bf496dfaad149f.tar.gz
00:00:13.861 Sending request to url: http://10.211.164.101/packages/spdk_4b134b4abdb5f2f6eeebb3eae1bf496dfaad149f.tar.gz
00:00:13.881 Response Code: HTTP/1.1 200 OK
00:00:13.881 Success: Status code 200 is in the accepted range: 200,404
00:00:13.882 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_4b134b4abdb5f2f6eeebb3eae1bf496dfaad149f.tar.gz
00:00:52.177 [Pipeline] sh
00:00:52.459 + tar --no-same-owner -xf spdk_4b134b4abdb5f2f6eeebb3eae1bf496dfaad149f.tar.gz
00:00:55.013 [Pipeline] sh
00:00:55.296 + git -C spdk log --oneline -n5
00:00:55.296 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:00:55.296 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:00:55.296 3b33f4333 test/nvme/cuse: Fix typo
00:00:55.296 bf784f7a1 test/nvme: Set SEL only when the field is supported
00:00:55.296 a5153247d autopackage: Slurp spdk-ld-path while building against native DPDK
00:00:55.315 [Pipeline] writeFile
00:00:55.332 [Pipeline] sh
00:00:55.616 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:55.628 [Pipeline] sh
00:00:55.912 + cat autorun-spdk.conf
00:00:55.912 SPDK_TEST_UNITTEST=1
00:00:55.912 SPDK_RUN_VALGRIND=0
00:00:55.912 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.912 SPDK_TEST_NVME=1
00:00:55.912 SPDK_TEST_BLOCKDEV=1
00:00:55.912 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:55.920 RUN_NIGHTLY=1
00:00:55.921 [Pipeline] }
00:00:55.936 [Pipeline] // stage
00:00:55.950 [Pipeline] stage
00:00:55.952 [Pipeline] { (Run VM)
00:00:55.965 [Pipeline] sh
00:00:56.248 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:56.248 + echo 'Start stage prepare_nvme.sh'
00:00:56.248 Start stage prepare_nvme.sh
00:00:56.248 + [[ -n 7 ]]
00:00:56.248 + disk_prefix=ex7
00:00:56.248 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]]
00:00:56.248 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]]
00:00:56.248 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf
00:00:56.248 ++ SPDK_TEST_UNITTEST=1
00:00:56.248 ++ SPDK_RUN_VALGRIND=0
00:00:56.248 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.248 ++ SPDK_TEST_NVME=1
00:00:56.248 ++ SPDK_TEST_BLOCKDEV=1
00:00:56.248 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:56.248 ++ RUN_NIGHTLY=1
00:00:56.248 + cd /var/jenkins/workspace/freebsd-vg-autotest
00:00:56.248 + nvme_files=()
00:00:56.248 + declare -A nvme_files
00:00:56.248 + backend_dir=/var/lib/libvirt/images/backends
00:00:56.248 + nvme_files['nvme.img']=5G
00:00:56.248 + nvme_files['nvme-cmb.img']=5G
00:00:56.248 + nvme_files['nvme-multi0.img']=4G
00:00:56.248 + nvme_files['nvme-multi1.img']=4G
00:00:56.248 + nvme_files['nvme-multi2.img']=4G
00:00:56.248 + nvme_files['nvme-openstack.img']=8G
00:00:56.248 + nvme_files['nvme-zns.img']=5G
00:00:56.248 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:56.248 + (( SPDK_TEST_FTL == 1 ))
00:00:56.248 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:56.248 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:56.248 + for nvme in "${!nvme_files[@]}"
00:00:56.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:56.248 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.248 + for nvme in "${!nvme_files[@]}"
00:00:56.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:56.248 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.248 + for nvme in "${!nvme_files[@]}"
00:00:56.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:56.248 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:56.248 + for nvme in "${!nvme_files[@]}"
00:00:56.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:56.507 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.507 + for nvme in "${!nvme_files[@]}"
00:00:56.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:56.507 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.507 + for nvme in "${!nvme_files[@]}"
00:00:56.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:56.507 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.507 + for nvme in "${!nvme_files[@]}"
00:00:56.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:56.507 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.507 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:56.507 + echo 'End stage prepare_nvme.sh'
00:00:56.508 End stage prepare_nvme.sh
00:00:56.520 [Pipeline] sh
00:00:56.803 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:56.803 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f freebsd13
00:00:56.803
00:00:56.803 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant
00:00:56.803 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk
00:00:56.803 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest
00:00:56.803 HELP=0
00:00:56.803 DRY_RUN=0
00:00:56.803 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,
00:00:56.803 NVME_DISKS_TYPE=nvme,
00:00:56.803 NVME_AUTO_CREATE=0
00:00:56.803 NVME_DISKS_NAMESPACES=,
00:00:56.803 NVME_CMB=,
00:00:56.803 NVME_PMR=,
00:00:56.803 NVME_ZNS=,
00:00:56.803 NVME_MS=,
00:00:56.803 NVME_FDP=,
00:00:56.803 SPDK_VAGRANT_DISTRO=freebsd13
00:00:56.803 SPDK_VAGRANT_VMCPU=10
00:00:56.803 SPDK_VAGRANT_VMRAM=12288
00:00:56.803 SPDK_VAGRANT_PROVIDER=libvirt
00:00:56.803 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:56.803 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:56.803 SPDK_OPENSTACK_NETWORK=0
00:00:56.803 VAGRANT_PACKAGE_BOX=0
00:00:56.803 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:56.803 FORCE_DISTRO=true
00:00:56.803 VAGRANT_BOX_VERSION=
00:00:56.803 EXTRA_VAGRANTFILES=
00:00:56.803 NIC_MODEL=virtio
00:00:56.803
00:00:56.803 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt'
00:00:56.803 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest
00:00:58.708 Bringing machine 'default' up with 'libvirt' provider...
00:00:59.276 ==> default: Creating image (snapshot of base box volume).
00:00:59.276 ==> default: Creating domain with the following settings...
00:00:59.276 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1713299930_bb1e446b8bb8ff97810c
00:00:59.276 ==> default: -- Domain type: kvm
00:00:59.276 ==> default: -- Cpus: 10
00:00:59.276 ==> default: -- Feature: acpi
00:00:59.276 ==> default: -- Feature: apic
00:00:59.276 ==> default: -- Feature: pae
00:00:59.276 ==> default: -- Memory: 12288M
00:00:59.276 ==> default: -- Memory Backing: hugepages:
00:00:59.276 ==> default: -- Management MAC:
00:00:59.276 ==> default: -- Loader:
00:00:59.276 ==> default: -- Nvram:
00:00:59.276 ==> default: -- Base box: spdk/freebsd13
00:00:59.276 ==> default: -- Storage pool: default
00:00:59.276 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1713299930_bb1e446b8bb8ff97810c.img (32G)
00:00:59.276 ==> default: -- Volume Cache: default
00:00:59.276 ==> default: -- Kernel:
00:00:59.276 ==> default: -- Initrd:
00:00:59.276 ==> default: -- Graphics Type: vnc
00:00:59.276 ==> default: -- Graphics Port: -1
00:00:59.276 ==> default: -- Graphics IP: 127.0.0.1
00:00:59.276 ==> default: -- Graphics Password: Not defined
00:00:59.276 ==> default: -- Video Type: cirrus
00:00:59.276 ==> default: -- Video VRAM: 9216
00:00:59.276 ==> default: -- Sound Type:
00:00:59.276 ==> default: -- Keymap: en-us
00:00:59.276 ==> default: -- TPM Path:
00:00:59.276 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:59.276 ==> default: -- Command line args:
00:00:59.276 ==> default: -> value=-device,
00:00:59.276 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:00:59.276 ==> default: -> value=-drive,
00:00:59.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:00:59.276 ==> default: -> value=-device,
00:00:59.276 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:59.535 ==> default: Creating shared folders metadata...
00:00:59.535 ==> default: Starting domain.
00:01:01.444 ==> default: Waiting for domain to get an IP address...
00:01:19.537 ==> default: Waiting for SSH to become available...
00:01:34.426 ==> default: Configuring and enabling network interfaces...
00:01:36.334 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:41.644 ==> default: Mounting SSHFS shared folder...
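The "Command line args" recorded for the domain above are raw QEMU arguments. Flattened into a stand-alone invocation they would look roughly like the sketch below; the emulator path, image, and NVMe properties come from the log, while everything else a bootable guest needs (the box's root disk, memory, CPU flags) is omitted, so this is illustrative rather than the CI's actual command:

# One NVMe controller (serial 12340) backed by the raw ex7-nvme.img file,
# exposing a single namespace with 4096-byte logical/physical blocks.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096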
00:01:42.584 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output
00:01:42.584 ==> default: Checking Mount..
00:01:43.153 ==> default: Folder Successfully Mounted!
00:01:43.413 ==> default: Running provisioner: file...
00:01:43.672 default: ~/.gitconfig => .gitconfig
00:01:43.936
00:01:43.936 SUCCESS!
00:01:43.936
00:01:43.936 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use.
00:01:43.936 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:43.936 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm.
00:01:43.936
00:01:43.957 [Pipeline] }
00:01:43.972 [Pipeline] // stage
00:01:43.978 [Pipeline] dir
00:01:43.978 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt
00:01:43.979 [Pipeline] {
00:01:43.988 [Pipeline] catchError
00:01:43.989 [Pipeline] {
00:01:43.999 [Pipeline] sh
00:01:44.275 + vagrant ssh-config --host vagrant
00:01:44.276 + sed -ne /^Host/,$p
00:01:44.276 + tee ssh_conf
00:01:46.813 Host vagrant
00:01:46.813 HostName 192.168.121.122
00:01:46.813 User vagrant
00:01:46.813 Port 22
00:01:46.813 UserKnownHostsFile /dev/null
00:01:46.813 StrictHostKeyChecking no
00:01:46.813 PasswordAuthentication no
00:01:46.813 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13
00:01:46.813 IdentitiesOnly yes
00:01:46.813 LogLevel FATAL
00:01:46.813 ForwardAgent yes
00:01:46.813 ForwardX11 yes
00:01:46.813
00:01:46.828 [Pipeline] withEnv
00:01:46.830 [Pipeline] {
00:01:46.845 [Pipeline] sh
00:01:47.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:47.127 source /etc/os-release
00:01:47.127 [[ -e /image.version ]] && img=$(< /image.version)
00:01:47.127 # Minimal, systemd-like check.
00:01:47.127 if [[ -e /.dockerenv ]]; then
00:01:47.127 # Clear garbage from the node's name:
00:01:47.127 # agt-er_autotest_547-896 -> autotest_547-896
00:01:47.127 # $HOSTNAME is the actual container id
00:01:47.127 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:47.127 if mountpoint -q /etc/hostname; then
00:01:47.127 # We can assume this is a mount from a host where container is running,
00:01:47.127 # so fetch its hostname to easily identify the target swarm worker.
00:01:47.127 container="$(< /etc/hostname) ($agent)"
00:01:47.127 else
00:01:47.127 # Fallback
00:01:47.127 container=$agent
00:01:47.127 fi
00:01:47.127 fi
00:01:47.127 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:47.127
00:01:47.139 [Pipeline] }
00:01:47.158 [Pipeline] // withEnv
00:01:47.167 [Pipeline] setCustomBuildProperty
00:01:47.181 [Pipeline] stage
00:01:47.183 [Pipeline] { (Tests)
00:01:47.201 [Pipeline] sh
00:01:47.483 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:47.498 [Pipeline] timeout
00:01:47.499 Timeout set to expire in 1 hr 0 min
00:01:47.501 [Pipeline] {
00:01:47.516 [Pipeline] sh
00:01:47.798 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:48.367 HEAD is now at 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:48.381 [Pipeline] sh
00:01:48.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:48.679 [Pipeline] sh
00:01:48.961 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:48.975 [Pipeline] sh
00:01:49.256 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo
00:01:49.256 ++ readlink -f spdk_repo
00:01:49.256 + DIR_ROOT=/usr/home/vagrant/spdk_repo
00:01:49.256 + [[ -n /usr/home/vagrant/spdk_repo ]]
00:01:49.256 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk
00:01:49.256 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output
00:01:49.256 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]]
00:01:49.256 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]]
00:01:49.256 + [[ -d /usr/home/vagrant/spdk_repo/output ]]
00:01:49.256 + cd /usr/home/vagrant/spdk_repo
00:01:49.256 + source /etc/os-release
00:01:49.256 ++ NAME=FreeBSD
00:01:49.256 ++ VERSION=13.2-RELEASE
00:01:49.256 ++ VERSION_ID=13.2
00:01:49.256 ++ ID=freebsd
00:01:49.256 ++ ANSI_COLOR='0;31'
00:01:49.256 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE'
00:01:49.256 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2
00:01:49.256 ++ HOME_URL=https://FreeBSD.org/
00:01:49.256 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/
00:01:49.256 + uname -a
00:01:49.256 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64
00:01:49.256 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:49.515 Contigmem (not present)
00:01:49.515 Buffer Size: not set
00:01:49.516 Num Buffers: not set
00:01:49.516
00:01:49.516
00:01:49.516 Type BDF Vendor Device Driver
00:01:49.516 NVMe 0:0:6:0 0x1b36 0x0010 nvme0
00:01:49.516 + rm -f /tmp/spdk-ld-path
00:01:49.516 + source autorun-spdk.conf
00:01:49.516 ++ SPDK_TEST_UNITTEST=1
00:01:49.516 ++ SPDK_RUN_VALGRIND=0
00:01:49.516 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.516 ++ SPDK_TEST_NVME=1
00:01:49.516 ++ SPDK_TEST_BLOCKDEV=1
00:01:49.516 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.516 ++ RUN_NIGHTLY=1
00:01:49.516 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:49.516 + [[ -n '' ]]
00:01:49.516 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk
00:01:49.516 + for M in /var/spdk/build-*-manifest.txt
00:01:49.516 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:49.516 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/
00:01:49.516 + for M in /var/spdk/build-*-manifest.txt
00:01:49.516 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:49.516 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/
00:01:49.516 ++ uname
00:01:49.516 + [[ FreeBSD == \L\i\n\u\x ]]
00:01:49.516 + dmesg_pid=1257
00:01:49.516 + [[ FreeBSD == FreeBSD ]]
00:01:49.516 + export LC_ALL=C LC_CTYPE=C
00:01:49.516 + LC_ALL=C
00:01:49.516 + tail -F /var/log/messages
00:01:49.516 + LC_CTYPE=C
00:01:49.516 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.516 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.516 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:49.516 + [[ -x /usr/src/fio-static/fio ]]
00:01:49.516 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:49.516 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:49.516 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:49.516 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:49.516 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:49.516 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:49.516 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:49.516 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.516 Test configuration:
00:01:49.516 SPDK_TEST_UNITTEST=1
00:01:49.516 SPDK_RUN_VALGRIND=0
00:01:49.516 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.516 SPDK_TEST_NVME=1
00:01:49.516 SPDK_TEST_BLOCKDEV=1
00:01:49.516 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.781 RUN_NIGHTLY=1
00:01:49.781 20:39:40 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:49.781 20:39:40 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:49.781 20:39:40 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:49.781 20:39:40 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:49.781 20:39:40 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:01:49.781 20:39:40 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:01:49.781 20:39:40 -- paths/export.sh@4 -- $ export PATH
00:01:49.781 20:39:40 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:01:49.781 20:39:40 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output
00:01:49.781 20:39:40 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:49.781 20:39:40 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713299980.XXXXXX
00:01:49.781 20:39:40 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713299980.XXXXXX.C5PhScgH
00:01:49.781 20:39:40 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:49.781 20:39:40 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:49.781 20:39:40 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/'
00:01:49.781 20:39:40 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:49.781 20:39:40 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:49.781 20:39:40 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:49.781 20:39:40 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:49.781 20:39:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.781 20:39:40 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
00:01:49.781 20:39:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.781 20:39:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.781 20:39:40 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk
00:01:49.781 20:39:40 -- spdk/autobuild.sh@16 -- $ date -u
00:01:49.781 Tue Apr 16 20:39:40 UTC 2024
00:01:49.781 20:39:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.781 LTS-22-g4b134b4ab
00:01:49.781 20:39:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:49.781 20:39:40 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:01:49.781 20:39:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:49.781 20:39:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:49.781 20:39:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:49.781 20:39:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:49.781 20:39:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:49.781 20:39:40 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:49.781 20:39:40 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:49.781 20:39:40 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:01:49.781 20:39:40 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:49.781 20:39:40 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:49.781 20:39:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.781 ************************************
00:01:49.781 START TEST unittest_build
00:01:49.781 ************************************
00:01:49.781 20:39:40 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:01:49.781 20:39:40 -- common/autobuild_common.sh@402 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared
00:01:50.728 Notice: Vhost, rte_vhost library, virtio, and fuse
00:01:50.728 are only supported on Linux. Turning off default feature.
00:01:50.728 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:50.728 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build
00:01:51.666 RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:01:51.926 Using 'verbs' RDMA provider
00:02:04.142 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:16.355 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:16.355 Creating mk/config.mk...done.
00:02:16.355 Creating mk/cc.flags.mk...done.
00:02:16.355 Type 'gmake' to build.
00:02:16.614 20:40:07 -- common/autobuild_common.sh@403 -- $ gmake -j10
00:02:16.614 gmake[1]: Nothing to be done for 'all'.
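Condensed, the unittest build that run_test launches above amounts to the following; a sketch of the equivalent manual steps on the FreeBSD guest (paths and flags exactly as logged; gmake is GNU make, which FreeBSD installs under that name):

cd /usr/home/vagrant/spdk_repo/spdk
# Same configure flags the autobuild trace shows; --without-shared is what
# makes this the static "unittest_build" flavor.
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --without-shared
gmake -j10   # -j10 matches the CPUS=10 the VM was created with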
00:02:19.140 ps: stdin: not a terminal
00:02:23.339 The Meson build system
00:02:23.339 Version: 1.3.1
00:02:23.339 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk
00:02:23.339 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:23.339 Build type: native build
00:02:23.339 Program cat found: YES (/bin/cat)
00:02:23.339 Project name: DPDK
00:02:23.339 Project version: 23.11.0
00:02:23.339 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)")
00:02:23.339 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5
00:02:23.339 Host machine cpu family: x86_64
00:02:23.339 Host machine cpu: x86_64
00:02:23.339 Message: ## Building in Developer Mode ##
00:02:23.339 Program pkg-config found: YES (/usr/local/bin/pkg-config)
00:02:23.339 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.339 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.339 Program python3 found: YES (/usr/local/bin/python3.9)
00:02:23.339 Program cat found: YES (/bin/cat)
00:02:23.339 Compiler for C supports arguments -march=native: YES
00:02:23.339 Checking for size of "void *" : 8
00:02:23.339 Checking for size of "void *" : 8 (cached)
00:02:23.339 Library m found: YES
00:02:23.339 Library numa found: NO
00:02:23.339 Library fdt found: NO
00:02:23.339 Library execinfo found: YES
00:02:23.339 Has header "execinfo.h" : YES
00:02:23.339 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3
00:02:23.340 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:23.340 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.340 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.340 Run-time dependency openssl found: YES 3.0.13
00:02:23.340 Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:23.340 Library pcap found: YES
00:02:23.340 Has header "pcap.h" with dependency -lpcap: YES
00:02:23.340 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.340 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.340 Compiler for C supports arguments -Wformat: YES
00:02:23.340 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:23.340 Compiler for C supports arguments -Wformat-security: YES
00:02:23.340 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.340 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.340 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.340 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.340 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.340 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.340 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.340 Compiler for C supports arguments -Wundef: YES
00:02:23.340 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.340 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.340 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:23.340 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.340 Compiler for C supports arguments -mavx512f: YES
00:02:23.340 Checking if "AVX512 checking" compiles: YES
00:02:23.340 Fetching value of define "__SSE4_2__" : 1
00:02:23.340 Fetching value of define "__AES__" : 1
00:02:23.340 Fetching value of define "__AVX__" : 1
00:02:23.340 Fetching value of define "__AVX2__" : 1
00:02:23.340 Fetching value of define "__AVX512BW__" : 1
00:02:23.340 Fetching value of define "__AVX512CD__" : 1
00:02:23.340 Fetching value of define "__AVX512DQ__" : 1
00:02:23.340 Fetching value of define "__AVX512F__" : 1
00:02:23.340 Fetching value of define "__AVX512VL__" : 1
00:02:23.340 Fetching value of define "__PCLMUL__" : 1
00:02:23.340 Fetching value of define "__RDRND__" : 1
00:02:23.340 Fetching value of define "__RDSEED__" : 1
00:02:23.340 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:23.340 Fetching value of define "__znver1__" : (undefined)
00:02:23.340 Fetching value of define "__znver2__" : (undefined)
00:02:23.340 Fetching value of define "__znver3__" : (undefined)
00:02:23.340 Fetching value of define "__znver4__" : (undefined)
00:02:23.340 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:23.340 Message: lib/log: Defining dependency "log"
00:02:23.340 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.340 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.340 Checking if "Detect argument count for CPU_OR" compiles: YES
00:02:23.340 Checking for function "getentropy" : YES
00:02:23.340 Message: lib/eal: Defining dependency "eal"
00:02:23.340 Message: lib/ring: Defining dependency "ring"
00:02:23.340 Message: lib/rcu: Defining dependency "rcu"
00:02:23.340 Message: lib/mempool: Defining dependency "mempool"
00:02:23.340 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.340 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.340 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:23.340 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:23.340 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:23.340 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:23.340 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:23.340 Compiler for C supports arguments -mpclmul: YES
00:02:23.340 Compiler for C supports arguments -maes: YES
00:02:23.340 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.340 Compiler for C supports arguments -mavx512bw: YES
00:02:23.340 Compiler for C supports arguments -mavx512dq: YES
00:02:23.340 Compiler for C supports arguments -mavx512vl: YES
00:02:23.340 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.340 Compiler for C supports arguments -mavx2: YES
00:02:23.340 Compiler for C supports arguments -mavx: YES
00:02:23.340 Message: lib/net: Defining dependency "net"
00:02:23.340 Message: lib/meter: Defining dependency "meter"
00:02:23.340 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.340 Message: lib/pci: Defining dependency "pci"
00:02:23.340 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.340 Message: lib/hash: Defining dependency "hash"
00:02:23.340 Message: lib/timer: Defining dependency "timer"
00:02:23.340 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.340 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.340 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.340 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.340 Message: lib/reorder: Defining dependency "reorder"
00:02:23.340 Message: lib/security: Defining dependency "security"
00:02:23.340 Has header "linux/userfaultfd.h" : NO
00:02:23.340 Has header "linux/vduse.h" : NO
00:02:23.340 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:23.340 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.340 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.340 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.340 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.340 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.340 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.340 Message: Disabling vdpa/* drivers: missing internal dependency "vhost"
00:02:23.340 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.340 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.340 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.340 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:23.340 Configuring doxy-api-html.conf using configuration
00:02:23.340 Configuring doxy-api-man.conf using configuration
00:02:23.340 Program mandb found: NO
00:02:23.340 Program sphinx-build found: NO
00:02:23.340 Configuring rte_build_config.h using configuration
00:02:23.340 Message:
00:02:23.340 =================
00:02:23.340 Applications Enabled
00:02:23.340 =================
00:02:23.340
00:02:23.340 apps:
00:02:23.340
00:02:23.340
00:02:23.340 Message:
00:02:23.340 =================
00:02:23.340 Libraries Enabled
00:02:23.340 =================
00:02:23.340
00:02:23.340 libs:
00:02:23.340 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.340 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.340 cryptodev, dmadev, reorder, security,
00:02:23.340
00:02:23.340 Message:
00:02:23.340 ===============
00:02:23.340 Drivers Enabled
00:02:23.340 ===============
00:02:23.340
00:02:23.340 common:
00:02:23.340
00:02:23.340 bus:
00:02:23.340 pci, vdev,
00:02:23.340 mempool:
00:02:23.340 ring,
00:02:23.340 dma:
00:02:23.340
00:02:23.340 net:
00:02:23.340
00:02:23.340 crypto:
00:02:23.340
00:02:23.340 compress:
00:02:23.340
00:02:23.340
00:02:23.340 Message:
00:02:23.341 =================
00:02:23.341 Content Skipped
00:02:23.341 =================
00:02:23.341
00:02:23.341 apps:
00:02:23.341 dumpcap: explicitly disabled via build config
00:02:23.341 graph: explicitly disabled via build config
00:02:23.341 pdump: explicitly disabled via build config
00:02:23.341 proc-info: explicitly disabled via build config
00:02:23.341 test-acl: explicitly disabled via build config
00:02:23.341 test-bbdev: explicitly disabled via build config
00:02:23.341 test-cmdline: explicitly disabled via build config
00:02:23.341 test-compress-perf: explicitly disabled via build config
00:02:23.341 test-crypto-perf: explicitly disabled via build config
00:02:23.341 test-dma-perf: explicitly disabled via build config
00:02:23.341 test-eventdev: explicitly disabled via build config
00:02:23.341 test-fib: explicitly disabled via build config
00:02:23.341 test-flow-perf: explicitly disabled via build config
00:02:23.341 test-gpudev: explicitly disabled via build config
00:02:23.341 test-mldev: explicitly disabled via build config
00:02:23.341 test-pipeline: explicitly disabled via build config
00:02:23.341 test-pmd: explicitly disabled via build config
00:02:23.341 test-regex: explicitly disabled via build config
00:02:23.341 test-sad: explicitly disabled via build config
00:02:23.341 test-security-perf: explicitly disabled via build config
00:02:23.341
00:02:23.341 libs:
00:02:23.341 metrics: explicitly disabled via build config
00:02:23.341 acl: explicitly disabled via build config
00:02:23.341 bbdev: explicitly disabled via build config
00:02:23.341 bitratestats: explicitly disabled via build config
00:02:23.341 bpf: explicitly disabled via build config
00:02:23.341 cfgfile: explicitly disabled via build config
00:02:23.341 distributor: explicitly disabled via build config
00:02:23.341 efd: explicitly disabled via build config
00:02:23.341 eventdev: explicitly disabled via build config
00:02:23.341 dispatcher: explicitly disabled via build config
00:02:23.341 gpudev: explicitly disabled via build config
00:02:23.341 gro: explicitly disabled via build config
00:02:23.341 gso: explicitly disabled via build config
00:02:23.341 ip_frag: explicitly disabled via build config
00:02:23.341 jobstats: explicitly disabled via build config
00:02:23.341 latencystats: explicitly disabled via build config
00:02:23.341 lpm: explicitly disabled via build config
00:02:23.341 member: explicitly disabled via build config
00:02:23.341 pcapng: explicitly disabled via build config
00:02:23.341 power: only supported on Linux
00:02:23.341 rawdev: explicitly disabled via build config
00:02:23.341 regexdev: explicitly disabled via build config
00:02:23.341 mldev: explicitly disabled via build config
00:02:23.341 rib: explicitly disabled via build config
00:02:23.341 sched: explicitly disabled via build config
00:02:23.341 stack: explicitly disabled via build config
00:02:23.341 vhost: only supported on Linux
00:02:23.341 ipsec: explicitly disabled via build config
00:02:23.341 pdcp: explicitly disabled via build config
00:02:23.341 fib: explicitly disabled via build config
00:02:23.341 port: explicitly disabled via build config
00:02:23.341 pdump: explicitly disabled via build config
00:02:23.341 table: explicitly disabled via build config
00:02:23.341 pipeline: explicitly disabled via build config
00:02:23.341 graph: explicitly disabled via build config
00:02:23.341 node: explicitly disabled via build config
00:02:23.341
00:02:23.341 drivers:
00:02:23.341 common/cpt: not in enabled drivers build config
00:02:23.341 common/dpaax: not in enabled drivers build config
00:02:23.341 common/iavf: not in enabled drivers build config
00:02:23.341 common/idpf: not in enabled drivers build config
00:02:23.341 common/mvep: not in enabled drivers build config
00:02:23.341 common/octeontx: not in enabled drivers build config
00:02:23.341 bus/auxiliary: not in enabled drivers build config
00:02:23.341 bus/cdx: not in enabled drivers build config
00:02:23.341 bus/dpaa: not in enabled drivers build config
00:02:23.341 bus/fslmc: not in enabled drivers build config
00:02:23.341 bus/ifpga: not in enabled drivers build config
00:02:23.341 bus/platform: not in enabled drivers build config
00:02:23.341 bus/vmbus: not in enabled drivers build config
00:02:23.341 common/cnxk: not in enabled drivers build config
00:02:23.341 common/mlx5: not in enabled drivers build config
00:02:23.341 common/nfp: not in enabled drivers build config
00:02:23.341 common/qat: not in enabled drivers build config
00:02:23.341 common/sfc_efx: not in enabled drivers build config
00:02:23.341 mempool/bucket: not in enabled drivers build config
00:02:23.341 mempool/cnxk: not in enabled drivers build config
00:02:23.341 mempool/dpaa: not in enabled drivers build config
00:02:23.341 mempool/dpaa2: not in enabled drivers build config
00:02:23.341 mempool/octeontx: not in enabled drivers build config
00:02:23.341 mempool/stack: not in enabled drivers build config
00:02:23.341 dma/cnxk: not in enabled drivers build config
00:02:23.341 dma/dpaa: not in enabled drivers build config
00:02:23.341 dma/dpaa2: not in enabled drivers build config
00:02:23.341 dma/hisilicon: not in enabled drivers build config
00:02:23.341 dma/idxd: not in enabled drivers build config
00:02:23.341 dma/ioat: not in enabled drivers build config
00:02:23.341 dma/skeleton: not in enabled drivers build config
00:02:23.341 net/af_packet: not in enabled drivers build config
00:02:23.341 net/af_xdp: not in enabled drivers build config
00:02:23.341 net/ark: not in enabled drivers build config
00:02:23.341 net/atlantic: not in enabled drivers build config
00:02:23.341 net/avp: not in enabled drivers build config
00:02:23.341 net/axgbe: not in enabled drivers build config
00:02:23.341 net/bnx2x: not in enabled drivers build config
00:02:23.341 net/bnxt: not in enabled drivers build config
00:02:23.341 net/bonding: not in enabled drivers build config
00:02:23.341 net/cnxk: not in enabled drivers build config
00:02:23.341 net/cpfl: not in enabled drivers build config
00:02:23.341 net/cxgbe: not in enabled drivers build config
00:02:23.341 net/dpaa: not in enabled drivers build config
00:02:23.341 net/dpaa2: not in enabled drivers build config
00:02:23.341 net/e1000: not in enabled drivers build config
00:02:23.341 net/ena: not in enabled drivers build config
00:02:23.341 net/enetc: not in enabled drivers build config
00:02:23.341 net/enetfec: not in enabled drivers build config
00:02:23.341 net/enic: not in enabled drivers build config
00:02:23.341 net/failsafe: not in enabled drivers build config
00:02:23.341 net/fm10k: not in enabled drivers build config
00:02:23.341 net/gve: not in enabled drivers build config
00:02:23.341 net/hinic: not in enabled drivers build config
00:02:23.341 net/hns3: not in enabled drivers build config
00:02:23.341 net/i40e: not in enabled drivers build config
00:02:23.341 net/iavf: not in enabled drivers build config
00:02:23.341 net/ice: not in enabled drivers build config
00:02:23.341 net/idpf: not in enabled drivers build config
00:02:23.341 net/igc: not in enabled drivers build config
00:02:23.342 net/ionic: not in enabled drivers build config
00:02:23.342 net/ipn3ke: not in enabled drivers build config
00:02:23.342 net/ixgbe: not in enabled drivers build config
00:02:23.342 net/mana: not in enabled drivers build config
00:02:23.342 net/memif: not in enabled drivers build config
00:02:23.342 net/mlx4: not in enabled drivers build config
00:02:23.342 net/mlx5: not in enabled drivers build config
00:02:23.342 net/mvneta: not in enabled drivers build config
00:02:23.342 net/mvpp2: not in enabled drivers build config
00:02:23.342 net/netvsc: not in enabled drivers build config
00:02:23.342 net/nfb: not in enabled drivers build config
00:02:23.342 net/nfp: not in enabled drivers build config
00:02:23.342 net/ngbe: not in enabled drivers build config
00:02:23.342 net/null: not in enabled drivers build config
00:02:23.342 net/octeontx: not in enabled drivers build config
00:02:23.342 net/octeon_ep: not in enabled drivers build config
00:02:23.342 net/pcap: not in enabled drivers build config
00:02:23.342 net/pfe: not in enabled drivers build config
00:02:23.342 net/qede: not in enabled drivers build config
00:02:23.342 net/ring: not in enabled drivers build config
00:02:23.342 net/sfc: not in enabled drivers build config
00:02:23.342 net/softnic: not in enabled drivers build config
00:02:23.342 net/tap: not in enabled drivers build config
00:02:23.342 net/thunderx: not in enabled drivers build config
00:02:23.342 net/txgbe: not in enabled drivers build config
00:02:23.342 net/vdev_netvsc: not in enabled drivers build config
00:02:23.342 net/vhost: not in enabled drivers build config
00:02:23.342 net/virtio: not in enabled drivers build config
00:02:23.342 net/vmxnet3: not in enabled drivers build config
00:02:23.342 raw/*: missing internal dependency, "rawdev"
00:02:23.342 crypto/armv8: not in enabled drivers build config
00:02:23.342 crypto/bcmfs: not in enabled drivers build config
00:02:23.342 crypto/caam_jr: not in enabled drivers build config
00:02:23.342 crypto/ccp: not in enabled drivers build config
00:02:23.342 crypto/cnxk: not in enabled drivers build config
00:02:23.342 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.342 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.342 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.342 crypto/mlx5: not in enabled drivers build config
00:02:23.342 crypto/mvsam: not in enabled drivers build config
00:02:23.342 crypto/nitrox: not in enabled drivers build config
00:02:23.342 crypto/null: not in enabled drivers build config
00:02:23.342 crypto/octeontx: not in enabled drivers build config
00:02:23.342 crypto/openssl: not in enabled drivers build config
00:02:23.342 crypto/scheduler: not in enabled drivers build config
00:02:23.342 crypto/uadk: not in enabled drivers build config
00:02:23.342 crypto/virtio: not in enabled drivers build config
00:02:23.342 compress/isal: not in enabled drivers build config
00:02:23.342 compress/mlx5: not in enabled drivers build config
00:02:23.342 compress/octeontx: not in enabled drivers build config
00:02:23.342 compress/zlib: not in enabled drivers build config
00:02:23.342 regex/*: missing internal dependency, "regexdev"
00:02:23.342 ml/*: missing internal dependency, "mldev"
00:02:23.342 vdpa/*: missing internal dependency, "vhost"
00:02:23.342 event/*: missing internal dependency, "eventdev"
00:02:23.342 baseband/*: missing internal dependency, "bbdev"
00:02:23.342 gpu/*: missing internal dependency, "gpudev"
00:02:23.342
00:02:23.342
00:02:23.604 Build targets in project: 81
00:02:23.604
00:02:23.604 DPDK 23.11.0
00:02:23.604
00:02:23.604 User defined options
00:02:23.604 buildtype : debug
00:02:23.604 default_library : static
00:02:23.604 libdir : lib
00:02:23.604 prefix : /
00:02:23.604 c_args : -fPIC -Werror
00:02:23.604 c_link_args :
00:02:23.604 cpu_instruction_set: native
00:02:23.604 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:23.604 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:23.604 enable_docs : false
00:02:23.604 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:23.604 enable_kmods : true
00:02:23.604 tests : false
00:02:23.604
00:02:23.604 Found ninja-1.11.1 at /usr/local/bin/ninja
00:02:23.862 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:23.862 [1/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o
00:02:23.862 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:23.862 [3/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:23.862 [4/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:23.862 [5/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:23.862 [6/231] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:24.120 [7/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.120 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:24.120 [9/231] Linking static target lib/librte_kvargs.a
00:02:24.120 [10/231] Linking static target lib/librte_log.a
00:02:24.120 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.120 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.379 [13/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:24.379 [14/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.379 [15/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:24.379 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:24.379 [17/231] Linking static target lib/librte_telemetry.a
00:02:24.379 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:24.379 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.379 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.379 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:24.379 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:24.379 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:24.637 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:24.637 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:24.637 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:24.637 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:24.637 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:24.637 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:24.637 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:24.637 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:24.637 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:24.637 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:24.637 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:24.637 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:24.637 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:24.902 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:24.902 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:24.902 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:24.902 [40/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.902 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:24.902 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:24.902 [43/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:24.902 [44/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:24.902 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:24.902 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:24.902 [47/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:24.902 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:25.171 [49/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:25.171 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:25.171 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o
00:02:25.171 [52/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o
00:02:25.171 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:25.171 [54/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:25.171 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:25.171 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:25.171 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:25.171 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:25.171 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:25.171 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:25.438 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o
00:02:25.438 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o
00:02:25.438 [63/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:25.438 [64/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:25.438 [65/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o
00:02:25.438 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o
00:02:25.438 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o
00:02:25.438 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o
00:02:25.438 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o
00:02:25.438 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o
00:02:25.438 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o
00:02:25.438 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:25.697 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:25.697 [74/231] Linking static target lib/librte_eal.a
00:02:25.697 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:25.697 [76/231] Linking static target lib/librte_ring.a
00:02:25.697 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:25.697 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:25.697 [79/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:25.697 [80/231] Linking static target lib/librte_rcu.a
00:02:25.697 [81/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:25.697 [82/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:25.697 [83/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:25.697 [84/231] Linking static target lib/librte_mempool.a
00:02:25.955 [85/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:25.955 [86/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.955 [87/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.955 [88/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:25.955 [89/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.955 [90/231] Linking target lib/librte_log.so.24.0
00:02:25.955 [91/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:26.214 [92/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:26.214 [93/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:26.215 [94/231] Linking static target lib/librte_mbuf.a
00:02:26.215 [95/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:26.215 [96/231] Linking target lib/librte_kvargs.so.24.0
00:02:26.215 [97/231] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:26.215 [98/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.215 [99/231] Linking target lib/librte_telemetry.so.24.0
00:02:26.215 [100/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:26.215 [101/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:26.215 [102/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:26.215 [103/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:26.215 [104/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:26.215 [105/231] Linking static target lib/librte_meter.a
00:02:26.215 [106/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:26.215 [107/231] Linking static target lib/librte_net.a
00:02:26.474 [108/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:26.474 [109/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.474 [110/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.474 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:26.474 [112/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:26.474 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:26.474 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:26.733 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:26.733 [116/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.733 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:26.992 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:26.992 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:26.992 [120/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:26.992 [121/231] Linking static target lib/librte_pci.a
00:02:26.992 [122/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:26.992 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:26.992 [124/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:26.992 [125/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:26.992 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:26.992 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:26.992 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:26.992 [129/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:26.992 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:27.251 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:27.251 [132/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:27.251 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:27.251 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:27.251 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:27.251 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:27.251 [137/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.251 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:27.251 [139/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:27.251 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:27.251 [141/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.251 [142/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:27.251 [143/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:27.251 [144/231] Linking static target lib/librte_cmdline.a
00:02:27.509 [145/231] Linking static target lib/librte_ethdev.a
00:02:27.509 [146/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:27.509 [147/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:27.509 [148/231] Linking static target lib/librte_timer.a
00:02:27.509 [149/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:27.509 [150/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:27.509 [151/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:27.509 [152/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:27.509 [153/231] Linking static target lib/librte_compressdev.a
00:02:27.509 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:27.778 [155/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:27.778 [156/231] Linking static target lib/librte_hash.a
00:02:27.778 [157/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:27.778 [158/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:27.778 [159/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.778 [160/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:27.778 [161/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:27.778 [162/231] Linking static target lib/librte_dmadev.a
00:02:28.041 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:28.041 [164/231] Linking static target lib/librte_reorder.a
00:02:28.041 [165/231] Generating lib/compressdev.sym_chk with a
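The ninja steps above come from DPDK's meson build under spdk/dpdk/build-tmp, configured with the "User defined options" listed earlier. If a run is interrupted during this phase, the submodule compile can be resumed independently of autobuild; a minimal sketch (build directory from the log; the -j value is an assumption that mirrors the gmake parallelism, since ninja otherwise picks its own default):

# Re-enter the already-configured DPDK build tree and finish the remaining
# [x/231] targets; ninja only rebuilds what is missing or stale.
ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j10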
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.992 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.992 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.992 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.992 [129/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.992 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.251 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.251 [132/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.251 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.251 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.251 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.251 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.251 [137/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.251 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.251 [139/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.251 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.251 [141/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.251 [142/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.251 [143/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.251 [144/231] Linking static target lib/librte_cmdline.a 00:02:27.509 [145/231] Linking static target lib/librte_ethdev.a 00:02:27.509 [146/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.509 [147/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.509 [148/231] Linking static target lib/librte_timer.a 00:02:27.509 [149/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.509 [150/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.509 [151/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.509 [152/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.509 [153/231] Linking static target lib/librte_compressdev.a 00:02:27.509 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.777 [155/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.778 [156/231] Linking static target lib/librte_hash.a 00:02:27.778 [157/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.778 [158/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:27.778 [159/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.778 [160/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.778 [161/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.778 [162/231] Linking static target lib/librte_dmadev.a 00:02:28.041 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.041 [164/231] Linking static target lib/librte_reorder.a 00:02:28.041 [165/231] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:28.041 [166/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:28.041 [167/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.041 [168/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.041 [169/231] Linking static target lib/librte_security.a 00:02:28.041 [170/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.041 [171/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:28.041 [172/231] Linking static target lib/librte_cryptodev.a 00:02:28.041 [173/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.041 [174/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.299 [175/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:28.299 [176/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.299 [177/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.299 [178/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.299 [179/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.299 [180/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.299 [181/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.299 [182/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.299 [183/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.299 [184/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.299 [185/231] Linking static target drivers/librte_bus_pci.a 00:02:28.558 [186/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.558 [187/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.558 [188/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.558 [189/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.558 [190/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.558 [191/231] Linking static target drivers/librte_bus_vdev.a 00:02:28.558 [192/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.558 [193/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.558 [194/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.558 [195/231] Linking static target drivers/librte_mempool_ring.a 00:02:28.558 [196/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.817 [197/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.817 [198/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.754 [199/231] Generating kernel/freebsd/contigmem with a custom command 00:02:29.754 machine -> /usr/src/sys/amd64/include 00:02:29.754 x86 -> /usr/src/sys/x86/include 00:02:29.754 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:02:29.754 awk 
-f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:02:29.754 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:02:29.754 touch opt_global.h 00:02:29.754 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:02:29.754 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:02:29.754 :> export_syms 00:02:29.754 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:02:29.754 objcopy --strip-debug contigmem.ko 00:02:30.013 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:02:30.013 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:02:30.013 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:02:30.013 :> export_syms 00:02:30.013 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:02:30.013 objcopy --strip-debug nic_uio.ko 00:02:35.348 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.639 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.639 [203/231] Linking target lib/librte_eal.so.24.0 00:02:38.639 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:38.639 [205/231] Linking target lib/librte_dmadev.so.24.0 00:02:38.639 [206/231] Linking target lib/librte_ring.so.24.0 00:02:38.639 [207/231] Linking target lib/librte_timer.so.24.0 00:02:38.639 [208/231] Linking target lib/librte_pci.so.24.0 00:02:38.639 [209/231] Linking target lib/librte_meter.so.24.0 00:02:38.639 [210/231] Linking target drivers/librte_bus_vdev.so.24.0 00:02:38.899 [211/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:38.899 [212/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:38.899 [213/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:38.899 [214/231] Linking target lib/librte_mempool.so.24.0 00:02:38.899 [215/231] Linking target lib/librte_rcu.so.24.0 00:02:38.899 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:02:38.899 [217/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:38.899 [218/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:38.899 [219/231] Linking target lib/librte_mbuf.so.24.0 00:02:38.899 [220/231] Linking target drivers/librte_mempool_ring.so.24.0 00:02:39.158 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:39.158 [222/231] Linking target lib/librte_reorder.so.24.0 00:02:39.158 [223/231] Linking target lib/librte_net.so.24.0 00:02:39.158 [224/231] Linking target lib/librte_compressdev.so.24.0 00:02:39.158 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:02:39.158 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:39.158 [227/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:39.417 [228/231] Linking target 
lib/librte_cmdline.so.24.0 00:02:39.417 [229/231] Linking target lib/librte_security.so.24.0 00:02:39.417 [230/231] Linking target lib/librte_hash.so.24.0 00:02:39.417 [231/231] Linking target lib/librte_ethdev.so.24.0 00:02:39.417 INFO: autodetecting backend as ninja 00:02:39.417 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:39.987 CC lib/ut/ut.o 00:02:39.987 CC lib/log/log.o 00:02:39.987 CC lib/log/log_flags.o 00:02:39.987 CC lib/log/log_deprecated.o 00:02:39.987 CC lib/ut_mock/mock.o 00:02:40.246 LIB libspdk_ut_mock.a 00:02:40.246 LIB libspdk_log.a 00:02:40.246 LIB libspdk_ut.a 00:02:40.246 CXX lib/trace_parser/trace.o 00:02:40.246 CC lib/ioat/ioat.o 00:02:40.246 CC lib/dma/dma.o 00:02:40.246 CC lib/util/base64.o 00:02:40.246 CC lib/util/cpuset.o 00:02:40.246 CC lib/util/bit_array.o 00:02:40.246 CC lib/util/crc16.o 00:02:40.246 CC lib/util/crc32.o 00:02:40.246 CC lib/util/crc32_ieee.o 00:02:40.246 CC lib/util/crc32c.o 00:02:40.504 CC lib/util/crc64.o 00:02:40.504 CC lib/util/dif.o 00:02:40.504 CC lib/util/fd.o 00:02:40.504 CC lib/util/file.o 00:02:40.504 CC lib/util/hexlify.o 00:02:40.504 CC lib/util/iov.o 00:02:40.504 LIB libspdk_dma.a 00:02:40.504 LIB libspdk_ioat.a 00:02:40.504 CC lib/util/math.o 00:02:40.504 CC lib/util/pipe.o 00:02:40.504 CC lib/util/strerror_tls.o 00:02:40.504 CC lib/util/string.o 00:02:40.504 CC lib/util/uuid.o 00:02:40.504 CC lib/util/fd_group.o 00:02:40.504 CC lib/util/xor.o 00:02:40.504 CC lib/util/zipf.o 00:02:40.504 LIB libspdk_util.a 00:02:40.763 CC lib/idxd/idxd_user.o 00:02:40.763 CC lib/idxd/idxd.o 00:02:40.763 CC lib/rdma/common.o 00:02:40.763 CC lib/env_dpdk/env.o 00:02:40.763 CC lib/env_dpdk/memory.o 00:02:40.763 CC lib/rdma/rdma_verbs.o 00:02:40.763 CC lib/conf/conf.o 00:02:40.763 CC lib/json/json_parse.o 00:02:40.763 CC lib/vmd/vmd.o 00:02:40.763 CC lib/json/json_util.o 00:02:40.763 LIB libspdk_conf.a 00:02:40.763 CC lib/env_dpdk/pci.o 00:02:40.763 CC lib/json/json_write.o 00:02:40.763 CC lib/env_dpdk/init.o 00:02:40.763 LIB libspdk_rdma.a 00:02:40.763 CC lib/env_dpdk/threads.o 00:02:40.763 LIB libspdk_idxd.a 00:02:40.763 CC lib/vmd/led.o 00:02:40.763 CC lib/env_dpdk/pci_ioat.o 00:02:40.763 CC lib/env_dpdk/pci_virtio.o 00:02:40.763 CC lib/env_dpdk/pci_vmd.o 00:02:40.763 CC lib/env_dpdk/pci_idxd.o 00:02:40.763 LIB libspdk_vmd.a 00:02:40.763 LIB libspdk_json.a 00:02:40.763 CC lib/env_dpdk/pci_event.o 00:02:40.763 CC lib/env_dpdk/sigbus_handler.o 00:02:40.763 CC lib/env_dpdk/pci_dpdk.o 00:02:41.022 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.022 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.022 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.022 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.022 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.022 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.022 LIB libspdk_jsonrpc.a 00:02:41.022 LIB libspdk_trace_parser.a 00:02:41.281 LIB libspdk_env_dpdk.a 00:02:41.281 CC lib/rpc/rpc.o 00:02:41.281 LIB libspdk_rpc.a 00:02:41.540 CC lib/sock/sock.o 00:02:41.540 CC lib/sock/sock_rpc.o 00:02:41.540 CC lib/trace/trace.o 00:02:41.540 CC lib/trace/trace_flags.o 00:02:41.540 CC lib/notify/notify.o 00:02:41.540 CC lib/trace/trace_rpc.o 00:02:41.540 CC lib/notify/notify_rpc.o 00:02:41.540 LIB libspdk_trace.a 00:02:41.540 LIB libspdk_notify.a 00:02:41.540 LIB libspdk_sock.a 00:02:41.798 CC lib/thread/thread.o 00:02:41.798 CC lib/thread/iobuf.o 00:02:41.798 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:41.798 CC lib/nvme/nvme_fabric.o 00:02:41.798 CC lib/nvme/nvme_ctrlr.o 00:02:41.798 CC 
lib/nvme/nvme_ns_cmd.o 00:02:41.798 CC lib/nvme/nvme_ns.o 00:02:41.798 CC lib/nvme/nvme_pcie_common.o 00:02:41.798 CC lib/nvme/nvme_pcie.o 00:02:41.798 CC lib/nvme/nvme_qpair.o 00:02:41.798 CC lib/nvme/nvme.o 00:02:41.798 LIB libspdk_thread.a 00:02:41.798 CC lib/nvme/nvme_quirks.o 00:02:42.057 CC lib/nvme/nvme_transport.o 00:02:42.057 CC lib/nvme/nvme_discovery.o 00:02:42.057 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.057 CC lib/accel/accel.o 00:02:42.057 CC lib/blob/blobstore.o 00:02:42.057 CC lib/blob/request.o 00:02:42.057 CC lib/init/json_config.o 00:02:42.057 CC lib/blob/zeroes.o 00:02:42.057 CC lib/accel/accel_rpc.o 00:02:42.057 CC lib/init/subsystem.o 00:02:42.057 CC lib/accel/accel_sw.o 00:02:42.314 CC lib/blob/blob_bs_dev.o 00:02:42.314 CC lib/init/subsystem_rpc.o 00:02:42.314 CC lib/init/rpc.o 00:02:42.314 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.314 CC lib/nvme/nvme_tcp.o 00:02:42.314 LIB libspdk_accel.a 00:02:42.314 CC lib/nvme/nvme_opal.o 00:02:42.314 CC lib/nvme/nvme_io_msg.o 00:02:42.314 LIB libspdk_init.a 00:02:42.314 CC lib/nvme/nvme_poll_group.o 00:02:42.314 CC lib/nvme/nvme_zns.o 00:02:42.314 CC lib/bdev/bdev.o 00:02:42.314 CC lib/bdev/bdev_rpc.o 00:02:42.314 CC lib/event/app.o 00:02:42.314 CC lib/event/reactor.o 00:02:42.572 LIB libspdk_blob.a 00:02:42.572 CC lib/nvme/nvme_cuse.o 00:02:42.572 CC lib/bdev/bdev_zone.o 00:02:42.572 CC lib/event/log_rpc.o 00:02:42.572 CC lib/bdev/part.o 00:02:42.572 CC lib/nvme/nvme_rdma.o 00:02:42.572 CC lib/bdev/scsi_nvme.o 00:02:42.572 CC lib/event/app_rpc.o 00:02:42.572 CC lib/event/scheduler_static.o 00:02:42.572 CC lib/blobfs/blobfs.o 00:02:42.572 CC lib/blobfs/tree.o 00:02:42.572 CC lib/lvol/lvol.o 00:02:42.572 LIB libspdk_event.a 00:02:42.831 LIB libspdk_blobfs.a 00:02:42.831 LIB libspdk_lvol.a 00:02:42.831 LIB libspdk_bdev.a 00:02:43.090 LIB libspdk_nvme.a 00:02:43.090 CC lib/scsi/lun.o 00:02:43.090 CC lib/scsi/dev.o 00:02:43.090 CC lib/scsi/port.o 00:02:43.090 CC lib/scsi/scsi.o 00:02:43.090 CC lib/scsi/scsi_pr.o 00:02:43.090 CC lib/scsi/scsi_bdev.o 00:02:43.090 CC lib/scsi/scsi_rpc.o 00:02:43.090 CC lib/scsi/task.o 00:02:43.090 CC lib/nvmf/ctrlr.o 00:02:43.090 CC lib/nvmf/ctrlr_discovery.o 00:02:43.090 CC lib/nvmf/ctrlr_bdev.o 00:02:43.090 CC lib/nvmf/subsystem.o 00:02:43.090 CC lib/nvmf/nvmf.o 00:02:43.090 CC lib/nvmf/nvmf_rpc.o 00:02:43.090 CC lib/nvmf/transport.o 00:02:43.090 CC lib/nvmf/tcp.o 00:02:43.090 CC lib/nvmf/rdma.o 00:02:43.090 LIB libspdk_scsi.a 00:02:43.354 CC lib/iscsi/conn.o 00:02:43.355 CC lib/iscsi/init_grp.o 00:02:43.355 CC lib/iscsi/iscsi.o 00:02:43.355 CC lib/iscsi/md5.o 00:02:43.355 CC lib/iscsi/param.o 00:02:43.355 CC lib/iscsi/portal_grp.o 00:02:43.355 CC lib/iscsi/tgt_node.o 00:02:43.355 CC lib/iscsi/iscsi_subsystem.o 00:02:43.355 CC lib/iscsi/iscsi_rpc.o 00:02:43.355 CC lib/iscsi/task.o 00:02:43.355 LIB libspdk_nvmf.a 00:02:43.612 LIB libspdk_iscsi.a 00:02:43.870 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.870 CC module/accel/error/accel_error.o 00:02:43.870 CC module/accel/iaa/accel_iaa.o 00:02:43.870 CC module/accel/error/accel_error_rpc.o 00:02:43.870 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.870 CC module/accel/dsa/accel_dsa.o 00:02:43.870 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.870 CC module/accel/ioat/accel_ioat.o 00:02:43.870 CC module/sock/posix/posix.o 00:02:43.870 CC module/blob/bdev/blob_bdev.o 00:02:43.870 LIB libspdk_env_dpdk_rpc.a 00:02:43.870 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.870 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.870 LIB libspdk_accel_error.a 
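The interleaved CC and LIB lines above are SPDK's own Makefile output: each component's sources are compiled to objects, which are then archived into a libspdk_<name>.a static library that later LINK steps consume. A minimal sketch of that compile-then-archive pattern, using the lib/log component visible in this log (the flags are illustrative only; SPDK's mk/ machinery adds include paths, -Werror, and many more):

# Hedged sketch of the pattern behind "CC lib/log/log.o" followed by
# "LIB libspdk_log.a". Real flags and output paths differ.
cc -O2 -Iinclude -c lib/log/log.c -o lib/log/log.o
ar crs libspdk_log.a lib/log/log.o    # the "LIB libspdk_log.a" step
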
00:02:43.870 LIB libspdk_scheduler_dynamic.a 00:02:43.870 LIB libspdk_accel_iaa.a 00:02:43.870 LIB libspdk_accel_dsa.a 00:02:43.870 LIB libspdk_blob_bdev.a 00:02:43.870 LIB libspdk_accel_ioat.a 00:02:44.129 LIB libspdk_sock_posix.a 00:02:44.129 CC module/bdev/nvme/bdev_nvme.o 00:02:44.129 CC module/bdev/malloc/bdev_malloc.o 00:02:44.129 CC module/bdev/gpt/gpt.o 00:02:44.129 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.129 CC module/bdev/null/bdev_null.o 00:02:44.129 CC module/bdev/error/vbdev_error.o 00:02:44.129 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.129 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.129 CC module/bdev/delay/vbdev_delay.o 00:02:44.129 CC module/bdev/raid/bdev_raid.o 00:02:44.129 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.129 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.129 CC module/bdev/null/bdev_null_rpc.o 00:02:44.129 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.129 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.129 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.129 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.129 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.129 LIB libspdk_blobfs_bdev.a 00:02:44.129 LIB libspdk_bdev_gpt.a 00:02:44.129 LIB libspdk_bdev_error.a 00:02:44.129 LIB libspdk_bdev_null.a 00:02:44.129 LIB libspdk_bdev_passthru.a 00:02:44.129 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.129 CC module/bdev/nvme/nvme_rpc.o 00:02:44.129 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.129 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.129 LIB libspdk_bdev_malloc.a 00:02:44.388 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.388 LIB libspdk_bdev_delay.a 00:02:44.388 CC module/bdev/raid/raid0.o 00:02:44.388 CC module/bdev/split/vbdev_split.o 00:02:44.388 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.388 LIB libspdk_bdev_lvol.a 00:02:44.388 CC module/bdev/raid/raid1.o 00:02:44.388 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.388 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.388 CC module/bdev/raid/concat.o 00:02:44.388 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.388 CC module/bdev/aio/bdev_aio.o 00:02:44.388 LIB libspdk_bdev_split.a 00:02:44.388 LIB libspdk_bdev_raid.a 00:02:44.388 LIB libspdk_bdev_zone_block.a 00:02:44.388 LIB libspdk_bdev_nvme.a 00:02:44.388 LIB libspdk_bdev_aio.a 00:02:44.957 CC module/event/subsystems/sock/sock.o 00:02:44.957 CC module/event/subsystems/vmd/vmd.o 00:02:44.957 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.957 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.957 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.957 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.957 LIB libspdk_event_sock.a 00:02:44.957 LIB libspdk_event_vmd.a 00:02:44.957 LIB libspdk_event_scheduler.a 00:02:44.957 LIB libspdk_event_iobuf.a 00:02:45.217 CC module/event/subsystems/accel/accel.o 00:02:45.217 LIB libspdk_event_accel.a 00:02:45.477 CC module/event/subsystems/bdev/bdev.o 00:02:45.477 LIB libspdk_event_bdev.a 00:02:45.737 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.737 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.737 CC module/event/subsystems/scsi/scsi.o 00:02:45.737 LIB libspdk_event_scsi.a 00:02:45.737 LIB libspdk_event_nvmf.a 00:02:45.996 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.996 LIB libspdk_event_iscsi.a 00:02:46.254 CXX app/trace/trace.o 00:02:46.254 CC examples/ioat/perf/perf.o 00:02:46.254 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.254 CC examples/sock/hello_world/hello_sock.o 00:02:46.254 CC examples/nvmf/nvmf/nvmf.o 00:02:46.254 CC 
examples/nvme/hello_world/hello_world.o 00:02:46.255 CC examples/accel/perf/accel_perf.o 00:02:46.255 CC examples/blob/hello_world/hello_blob.o 00:02:46.255 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.255 CC test/accel/dif/dif.o 00:02:46.255 LINK lsvmd 00:02:46.255 LINK ioat_perf 00:02:46.513 LINK hello_sock 00:02:46.513 LINK hello_world 00:02:46.513 LINK hello_blob 00:02:46.513 LINK hello_bdev 00:02:46.513 LINK nvmf 00:02:46.513 LINK dif 00:02:46.513 CC examples/ioat/verify/verify.o 00:02:46.513 CC examples/vmd/led/led.o 00:02:46.513 LINK accel_perf 00:02:46.513 CC examples/blob/cli/blobcli.o 00:02:46.513 CC examples/nvme/reconnect/reconnect.o 00:02:46.513 LINK verify 00:02:46.513 LINK led 00:02:46.513 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.513 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.513 CC examples/nvme/arbitration/arbitration.o 00:02:46.513 CC examples/util/zipf/zipf.o 00:02:46.513 LINK reconnect 00:02:46.772 CC test/app/bdev_svc/bdev_svc.o 00:02:46.772 LINK spdk_trace 00:02:46.772 LINK zipf 00:02:46.772 CC examples/nvme/hotplug/hotplug.o 00:02:46.772 LINK blobcli 00:02:46.772 CC app/trace_record/trace_record.o 00:02:46.772 LINK arbitration 00:02:46.772 CC test/bdev/bdevio/bdevio.o 00:02:46.772 LINK nvme_manage 00:02:46.772 LINK bdev_svc 00:02:46.772 LINK hotplug 00:02:46.772 LINK spdk_trace_record 00:02:46.772 CC app/nvmf_tgt/nvmf_main.o 00:02:46.772 CC examples/thread/thread/thread_ex.o 00:02:46.772 LINK bdevperf 00:02:46.772 CC examples/idxd/perf/perf.o 00:02:46.772 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.031 LINK nvmf_tgt 00:02:47.031 LINK bdevio 00:02:47.031 TEST_HEADER include/spdk/accel.h 00:02:47.031 TEST_HEADER include/spdk/accel_module.h 00:02:47.031 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.031 TEST_HEADER include/spdk/assert.h 00:02:47.031 TEST_HEADER include/spdk/barrier.h 00:02:47.031 TEST_HEADER include/spdk/base64.h 00:02:47.031 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.031 TEST_HEADER include/spdk/bdev.h 00:02:47.031 TEST_HEADER include/spdk/bdev_module.h 00:02:47.031 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.031 TEST_HEADER include/spdk/bit_array.h 00:02:47.031 TEST_HEADER include/spdk/bit_pool.h 00:02:47.031 TEST_HEADER include/spdk/blob.h 00:02:47.031 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.031 TEST_HEADER include/spdk/blobfs.h 00:02:47.031 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.031 TEST_HEADER include/spdk/conf.h 00:02:47.031 TEST_HEADER include/spdk/config.h 00:02:47.031 CC test/blobfs/mkfs/mkfs.o 00:02:47.031 TEST_HEADER include/spdk/cpuset.h 00:02:47.031 TEST_HEADER include/spdk/crc16.h 00:02:47.031 TEST_HEADER include/spdk/crc32.h 00:02:47.031 TEST_HEADER include/spdk/crc64.h 00:02:47.031 TEST_HEADER include/spdk/dif.h 00:02:47.031 TEST_HEADER include/spdk/dma.h 00:02:47.031 TEST_HEADER include/spdk/endian.h 00:02:47.031 TEST_HEADER include/spdk/env.h 00:02:47.031 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.031 TEST_HEADER include/spdk/event.h 00:02:47.031 TEST_HEADER include/spdk/fd.h 00:02:47.031 TEST_HEADER include/spdk/fd_group.h 00:02:47.031 TEST_HEADER include/spdk/file.h 00:02:47.031 TEST_HEADER include/spdk/ftl.h 00:02:47.031 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.031 TEST_HEADER include/spdk/hexlify.h 00:02:47.031 LINK thread 00:02:47.031 TEST_HEADER include/spdk/histogram_data.h 00:02:47.031 TEST_HEADER include/spdk/idxd.h 00:02:47.031 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.031 LINK idxd_perf 00:02:47.031 TEST_HEADER include/spdk/init.h 00:02:47.031 TEST_HEADER include/spdk/ioat.h 
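The TEST_HEADER lines that continue below enumerate SPDK's public headers, and the matching CXX test/cpp_headers/<name>.o steps compile each header in isolation as C++, catching headers that are not self-contained or not C++-safe. A hedged sketch of what one such check amounts to (the wrapper file name here is hypothetical):

# Hedged sketch of a header self-containment check like
# "CXX test/cpp_headers/accel.o": include exactly one public header
# in a tiny translation unit and compile it standalone as C++.
printf '#include <spdk/accel.h>\n' > accel_check.cpp
c++ -I/usr/home/vagrant/spdk_repo/spdk/include -c accel_check.cpp -o /dev/null
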
00:02:47.031 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.031 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.031 TEST_HEADER include/spdk/json.h 00:02:47.031 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.031 TEST_HEADER include/spdk/likely.h 00:02:47.031 TEST_HEADER include/spdk/log.h 00:02:47.031 TEST_HEADER include/spdk/lvol.h 00:02:47.031 TEST_HEADER include/spdk/memory.h 00:02:47.031 TEST_HEADER include/spdk/mmio.h 00:02:47.031 TEST_HEADER include/spdk/nbd.h 00:02:47.031 TEST_HEADER include/spdk/notify.h 00:02:47.031 TEST_HEADER include/spdk/nvme.h 00:02:47.031 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.031 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.031 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.031 LINK cmb_copy 00:02:47.031 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.031 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.031 LINK iscsi_tgt 00:02:47.031 TEST_HEADER include/spdk/nvmf.h 00:02:47.031 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.031 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.031 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.031 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.031 TEST_HEADER include/spdk/opal.h 00:02:47.031 TEST_HEADER include/spdk/opal_spec.h 00:02:47.031 TEST_HEADER include/spdk/pci_ids.h 00:02:47.031 TEST_HEADER include/spdk/pipe.h 00:02:47.031 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.031 TEST_HEADER include/spdk/queue.h 00:02:47.031 TEST_HEADER include/spdk/reduce.h 00:02:47.031 CC test/app/histogram_perf/histogram_perf.o 00:02:47.031 TEST_HEADER include/spdk/rpc.h 00:02:47.031 LINK nvme_fuzz 00:02:47.031 TEST_HEADER include/spdk/scheduler.h 00:02:47.031 TEST_HEADER include/spdk/scsi.h 00:02:47.031 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.031 TEST_HEADER include/spdk/sock.h 00:02:47.031 TEST_HEADER include/spdk/stdinc.h 00:02:47.031 TEST_HEADER include/spdk/string.h 00:02:47.031 TEST_HEADER include/spdk/thread.h 00:02:47.031 TEST_HEADER include/spdk/trace.h 00:02:47.031 TEST_HEADER include/spdk/trace_parser.h 00:02:47.031 TEST_HEADER include/spdk/tree.h 00:02:47.031 TEST_HEADER include/spdk/ublk.h 00:02:47.031 TEST_HEADER include/spdk/util.h 00:02:47.031 TEST_HEADER include/spdk/uuid.h 00:02:47.031 TEST_HEADER include/spdk/version.h 00:02:47.031 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.031 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.031 TEST_HEADER include/spdk/vhost.h 00:02:47.031 TEST_HEADER include/spdk/vmd.h 00:02:47.031 TEST_HEADER include/spdk/xor.h 00:02:47.031 TEST_HEADER include/spdk/zipf.h 00:02:47.031 LINK mkfs 00:02:47.031 CXX test/cpp_headers/accel.o 00:02:47.031 CC test/dma/test_dma/test_dma.o 00:02:47.031 LINK histogram_perf 00:02:47.031 CC test/app/jsoncat/jsoncat.o 00:02:47.290 CC examples/nvme/abort/abort.o 00:02:47.290 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.290 CXX test/cpp_headers/accel_module.o 00:02:47.290 CC test/env/vtophys/vtophys.o 00:02:47.290 CC app/spdk_tgt/spdk_tgt.o 00:02:47.290 LINK jsoncat 00:02:47.290 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:47.290 CC app/spdk_lspci/spdk_lspci.o 00:02:47.290 LINK test_dma 00:02:47.290 LINK vtophys 00:02:47.290 LINK abort 00:02:47.290 LINK spdk_lspci 00:02:47.290 LINK spdk_tgt 00:02:47.290 LINK pmr_persistence 00:02:47.290 CXX test/cpp_headers/assert.o 00:02:47.290 CC test/event/event_perf/event_perf.o 00:02:47.290 CXX test/cpp_headers/barrier.o 00:02:47.290 CC app/spdk_nvme_perf/perf.o 00:02:47.290 CC test/event/reactor/reactor.o 00:02:47.290 LINK event_perf 00:02:47.290 LINK iscsi_fuzz 00:02:47.549 CC 
app/spdk_nvme_identify/identify.o 00:02:47.549 CC test/event/reactor_perf/reactor_perf.o 00:02:47.549 LINK reactor 00:02:47.549 gmake[2]: Nothing to be done for 'all'. 00:02:47.549 CXX test/cpp_headers/base64.o 00:02:47.549 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.549 LINK reactor_perf 00:02:47.549 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.549 CC app/spdk_top/spdk_top.o 00:02:47.549 LINK mem_callbacks 00:02:47.549 CC test/app/stub/stub.o 00:02:47.549 LINK spdk_nvme_perf 00:02:47.549 CXX test/cpp_headers/bdev.o 00:02:47.549 CC app/fio/nvme/fio_plugin.o 00:02:47.549 CC test/env/memory/memory_ut.o 00:02:47.549 LINK env_dpdk_post_init 00:02:47.549 LINK spdk_nvme_discover 00:02:47.549 LINK stub 00:02:47.549 CXX test/cpp_headers/bdev_module.o 00:02:47.549 LINK spdk_nvme_identify 00:02:47.549 CC test/nvme/aer/aer.o 00:02:47.807 CXX test/cpp_headers/bdev_zone.o 00:02:47.807 CC app/fio/bdev/fio_plugin.o 00:02:47.807 CC test/nvme/reset/reset.o 00:02:47.807 LINK spdk_top 00:02:47.807 CC test/rpc_client/rpc_client_test.o 00:02:47.807 CC test/env/pci/pci_ut.o 00:02:47.807 LINK aer 00:02:47.807 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:47.807 struct spdk_nvme_fdp_ruhs ruhs; 00:02:47.807 ^ 00:02:47.807 CXX test/cpp_headers/bit_array.o 00:02:47.807 CC test/nvme/sgl/sgl.o 00:02:47.807 LINK reset 00:02:47.807 1 warning generated. 00:02:47.807 LINK spdk_nvme 00:02:47.807 LINK rpc_client_test 00:02:47.807 LINK pci_ut 00:02:47.807 CXX test/cpp_headers/bit_pool.o 00:02:47.807 LINK sgl 00:02:47.807 LINK spdk_bdev 00:02:47.807 CC test/thread/poller_perf/poller_perf.o 00:02:48.066 CC test/nvme/e2edp/nvme_dp.o 00:02:48.066 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:48.066 CC test/nvme/overhead/overhead.o 00:02:48.066 CC test/nvme/err_injection/err_injection.o 00:02:48.066 LINK poller_perf 00:02:48.066 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:48.066 CXX test/cpp_headers/blob.o 00:02:48.066 LINK memory_ut 00:02:48.066 CC test/thread/lock/spdk_lock.o 00:02:48.066 LINK histogram_ut 00:02:48.066 LINK nvme_dp 00:02:48.066 CXX test/cpp_headers/blob_bdev.o 00:02:48.066 CXX test/cpp_headers/blobfs.o 00:02:48.066 LINK err_injection 00:02:48.066 LINK overhead 00:02:48.066 CXX test/cpp_headers/blobfs_bdev.o 00:02:48.066 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:48.066 CC test/nvme/startup/startup.o 00:02:48.066 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:48.325 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:48.325 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:48.325 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:48.325 CXX test/cpp_headers/conf.o 00:02:48.325 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:48.325 LINK startup 00:02:48.325 LINK spdk_lock 00:02:48.325 LINK tree_ut 00:02:48.325 LINK scsi_nvme_ut 00:02:48.325 CXX test/cpp_headers/config.o 00:02:48.325 CXX test/cpp_headers/cpuset.o 00:02:48.325 CC test/nvme/reserve/reserve.o 00:02:48.325 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:48.325 CC test/nvme/simple_copy/simple_copy.o 00:02:48.325 LINK blob_bdev_ut 00:02:48.325 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:48.325 LINK reserve 00:02:48.325 CXX test/cpp_headers/crc16.o 00:02:48.583 LINK simple_copy 00:02:48.583 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:48.583 LINK accel_ut 00:02:48.583 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:48.583 CXX 
test/cpp_headers/crc32.o 00:02:48.583 LINK dma_ut 00:02:48.583 CC test/nvme/connect_stress/connect_stress.o 00:02:48.583 LINK gpt_ut 00:02:48.583 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:48.583 CC test/nvme/boot_partition/boot_partition.o 00:02:48.583 CXX test/cpp_headers/crc64.o 00:02:48.583 LINK blobfs_async_ut 00:02:48.583 LINK connect_stress 00:02:48.583 LINK boot_partition 00:02:48.841 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:48.841 CC test/unit/lib/event/app.c/app_ut.o 00:02:48.841 CXX test/cpp_headers/dif.o 00:02:48.841 LINK part_ut 00:02:48.841 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:48.841 CC test/nvme/compliance/nvme_compliance.o 00:02:48.841 LINK vbdev_lvol_ut 00:02:48.841 CXX test/cpp_headers/dma.o 00:02:48.841 LINK app_ut 00:02:48.841 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:48.841 LINK ioat_ut 00:02:48.841 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:48.841 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:49.099 LINK nvme_compliance 00:02:49.099 LINK blobfs_sync_ut 00:02:49.099 CC test/nvme/fused_ordering/fused_ordering.o 00:02:49.099 CXX test/cpp_headers/endian.o 00:02:49.099 LINK bdev_ut 00:02:49.099 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:49.099 LINK fused_ordering 00:02:49.099 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:49.099 CXX test/cpp_headers/env.o 00:02:49.099 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:49.099 LINK reactor_ut 00:02:49.099 LINK blobfs_bdev_ut 00:02:49.099 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:49.099 LINK conn_ut 00:02:49.099 CXX test/cpp_headers/env_dpdk.o 00:02:49.099 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:49.099 CC test/nvme/fdp/fdp.o 00:02:49.357 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:49.357 LINK jsonrpc_server_ut 00:02:49.357 LINK doorbell_aers 00:02:49.357 LINK bdev_ut 00:02:49.357 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:49.357 LINK fdp 00:02:49.357 CXX test/cpp_headers/event.o 00:02:49.357 LINK init_grp_ut 00:02:49.357 CC test/unit/lib/log/log.c/log_ut.o 00:02:49.357 LINK json_util_ut 00:02:49.357 CXX test/cpp_headers/fd.o 00:02:49.357 LINK json_parse_ut 00:02:49.357 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:49.357 LINK blob_ut 00:02:49.357 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:49.357 LINK bdev_raid_sb_ut 00:02:49.357 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:49.357 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:49.615 LINK log_ut 00:02:49.615 CXX test/cpp_headers/fd_group.o 00:02:49.615 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:49.615 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:49.615 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:49.615 LINK bdev_raid_ut 00:02:49.615 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:49.615 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:49.615 LINK notify_ut 00:02:49.615 CXX test/cpp_headers/file.o 00:02:49.615 LINK bdev_zone_ut 00:02:49.615 LINK param_ut 00:02:49.615 LINK concat_ut 00:02:49.615 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:49.616 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:49.616 CXX test/cpp_headers/ftl.o 00:02:49.875 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:49.875 LINK json_write_ut 00:02:49.875 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:49.875 LINK vbdev_zone_block_ut 00:02:49.875 LINK portal_grp_ut 00:02:49.875 LINK lvol_ut 00:02:49.875 CXX test/cpp_headers/gpt_spec.o 00:02:49.875 CC 
test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:49.875 LINK raid1_ut 00:02:49.875 CXX test/cpp_headers/hexlify.o 00:02:49.875 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:49.875 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:49.875 LINK tgt_node_ut 00:02:49.875 LINK iscsi_ut 00:02:49.875 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:49.875 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:49.875 CXX test/cpp_headers/histogram_data.o 00:02:49.875 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:49.875 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:02:50.133 LINK dev_ut 00:02:50.133 CXX test/cpp_headers/idxd.o 00:02:50.133 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:50.133 CXX test/cpp_headers/idxd_spec.o 00:02:50.133 LINK nvme_ut 00:02:50.392 LINK lun_ut 00:02:50.392 CXX test/cpp_headers/init.o 00:02:50.392 LINK ctrlr_discovery_ut 00:02:50.392 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:50.392 LINK ctrlr_ut 00:02:50.392 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:50.392 LINK nvme_ctrlr_cmd_ut 00:02:50.392 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:50.392 CXX test/cpp_headers/ioat.o 00:02:50.392 LINK subsystem_ut 00:02:50.392 CXX test/cpp_headers/ioat_spec.o 00:02:50.392 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:50.392 LINK scsi_ut 00:02:50.392 CXX test/cpp_headers/iscsi_spec.o 00:02:50.392 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:02:50.650 LINK ctrlr_bdev_ut 00:02:50.651 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:50.651 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:50.651 LINK nvmf_ut 00:02:50.910 CXX test/cpp_headers/json.o 00:02:50.910 LINK scsi_pr_ut 00:02:50.910 LINK scsi_bdev_ut 00:02:50.910 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:50.910 LINK bdev_nvme_ut 00:02:50.910 LINK sock_ut 00:02:50.910 LINK nvme_ctrlr_ut 00:02:50.910 LINK tcp_ut 00:02:51.167 CXX test/cpp_headers/jsonrpc.o 00:02:51.167 CXX test/cpp_headers/likely.o 00:02:51.168 CXX test/cpp_headers/log.o 00:02:51.168 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:51.168 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:51.168 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:51.168 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:51.168 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:51.168 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:51.168 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:51.168 CXX test/cpp_headers/lvol.o 00:02:51.168 LINK base64_ut 00:02:51.168 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:51.168 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:51.168 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:51.425 CXX test/cpp_headers/memory.o 00:02:51.425 LINK pci_event_ut 00:02:51.425 LINK posix_ut 00:02:51.425 LINK iobuf_ut 00:02:51.425 CXX test/cpp_headers/mmio.o 00:02:51.425 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:51.425 LINK bit_array_ut 00:02:51.425 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:51.425 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:51.425 LINK thread_ut 00:02:51.425 LINK cpuset_ut 00:02:51.425 LINK nvme_ns_ut 00:02:51.425 CXX test/cpp_headers/nbd.o 00:02:51.425 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:51.425 CXX test/cpp_headers/notify.o 00:02:51.425 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:51.425 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:51.684 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:51.684 LINK subsystem_ut 00:02:51.684 LINK crc16_ut 00:02:51.684 LINK rdma_ut 00:02:51.684 CXX 
test/cpp_headers/nvme.o 00:02:51.684 LINK transport_ut 00:02:51.684 LINK crc32_ieee_ut 00:02:51.684 CXX test/cpp_headers/nvme_intel.o 00:02:51.684 LINK rpc_ut 00:02:51.684 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:51.684 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:51.684 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:51.684 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:51.684 LINK nvme_ns_cmd_ut 00:02:51.684 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:51.684 LINK crc32c_ut 00:02:51.684 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.684 LINK crc64_ut 00:02:51.684 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.684 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:51.684 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:51.942 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:51.942 CXX test/cpp_headers/nvme_spec.o 00:02:51.942 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:51.942 LINK nvme_ns_ocssd_cmd_ut 00:02:51.942 LINK idxd_user_ut 00:02:51.942 CXX test/cpp_headers/nvme_zns.o 00:02:51.942 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:51.942 LINK nvme_quirks_ut 00:02:51.942 LINK nvme_poll_group_ut 00:02:51.942 LINK dif_ut 00:02:52.200 LINK nvme_pcie_ut 00:02:52.200 CXX test/cpp_headers/nvmf.o 00:02:52.200 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:52.200 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:52.200 CXX test/cpp_headers/nvmf_cmd.o 00:02:52.200 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:52.200 LINK nvme_qpair_ut 00:02:52.200 LINK iov_ut 00:02:52.200 CC test/unit/lib/util/math.c/math_ut.o 00:02:52.200 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:52.200 LINK idxd_ut 00:02:52.200 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:52.200 CXX test/cpp_headers/nvmf_spec.o 00:02:52.200 LINK math_ut 00:02:52.200 LINK common_ut 00:02:52.200 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:52.200 CXX test/cpp_headers/nvmf_transport.o 00:02:52.200 CC test/unit/lib/util/string.c/string_ut.o 00:02:52.200 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:52.200 LINK nvme_transport_ut 00:02:52.200 LINK pipe_ut 00:02:52.457 CXX test/cpp_headers/opal.o 00:02:52.457 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:52.457 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:52.457 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:52.457 LINK string_ut 00:02:52.457 CXX test/cpp_headers/opal_spec.o 00:02:52.457 CXX test/cpp_headers/pci_ids.o 00:02:52.457 LINK xor_ut 00:02:52.457 LINK nvme_tcp_ut 00:02:52.457 CXX test/cpp_headers/pipe.o 00:02:52.457 CXX test/cpp_headers/queue.o 00:02:52.457 LINK nvme_io_msg_ut 00:02:52.457 CXX test/cpp_headers/reduce.o 00:02:52.457 CXX test/cpp_headers/rpc.o 00:02:52.457 CXX test/cpp_headers/scheduler.o 00:02:52.457 CXX test/cpp_headers/scsi.o 00:02:52.457 LINK nvme_opal_ut 00:02:52.458 CXX test/cpp_headers/scsi_spec.o 00:02:52.458 CXX test/cpp_headers/sock.o 00:02:52.715 CXX test/cpp_headers/stdinc.o 00:02:52.715 CXX test/cpp_headers/string.o 00:02:52.715 CXX test/cpp_headers/thread.o 00:02:52.715 CXX test/cpp_headers/trace.o 00:02:52.715 CXX test/cpp_headers/trace_parser.o 00:02:52.715 CXX test/cpp_headers/tree.o 00:02:52.715 LINK nvme_fabric_ut 00:02:52.715 CXX test/cpp_headers/ublk.o 00:02:52.715 CXX test/cpp_headers/util.o 00:02:52.715 CXX test/cpp_headers/uuid.o 00:02:52.715 CXX test/cpp_headers/version.o 00:02:52.715 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.715 LINK nvme_pcie_common_ut 00:02:52.715 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.715 
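The CC test/unit/.../*_ut.o and LINK *_ut pairs above build SPDK's CUnit-based unit tests, one small binary per source file under test; test/unit/unittest.sh, invoked near the end of this log, is what runs them. A hedged sketch of the shape of one such link step, using the nvme_qpair_ut target visible above (the library list is illustrative; the real link line pulls in many more libspdk_* archives):

# Hedged sketch of linking a unit-test binary such as nvme_qpair_ut
# against the archives built earlier plus the CUnit framework.
cc test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o \
   -Lbuild/lib -lspdk_nvme -lspdk_util -lspdk_log -lcunit \
   -o test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
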
CXX test/cpp_headers/vhost.o 00:02:52.715 CXX test/cpp_headers/vmd.o 00:02:52.715 CXX test/cpp_headers/xor.o 00:02:52.715 CXX test/cpp_headers/zipf.o 00:02:52.973 LINK nvme_rdma_ut 00:02:52.973 00:02:52.973 real 1m3.171s 00:02:52.973 user 3m19.987s 00:02:52.973 sys 0m46.641s 00:02:52.973 20:40:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:52.973 20:40:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.973 ************************************ 00:02:52.973 END TEST unittest_build 00:02:52.973 ************************************ 00:02:53.232 20:40:44 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:53.232 20:40:44 -- nvmf/common.sh@7 -- # uname -s 00:02:53.232 20:40:44 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:02:53.232 20:40:44 -- nvmf/common.sh@7 -- # return 0 00:02:53.232 20:40:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.232 20:40:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.232 20:40:44 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:02:53.232 20:40:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.232 20:40:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:53.232 20:40:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:53.232 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:02:53.232 20:40:44 -- spdk/autotest.sh@70 -- # create_test_list 00:02:53.232 20:40:44 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:53.232 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:02:53.232 20:40:44 -- spdk/autotest.sh@72 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:02:53.232 20:40:44 -- spdk/autotest.sh@72 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:02:53.232 20:40:44 -- spdk/autotest.sh@72 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:02:53.232 20:40:44 -- spdk/autotest.sh@73 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:02:53.232 20:40:44 -- spdk/autotest.sh@74 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:02:53.232 20:40:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:53.232 20:40:44 -- common/autotest_common.sh@1440 -- # uname 00:02:53.232 20:40:44 -- common/autotest_common.sh@1440 -- # '[' FreeBSD = FreeBSD ']' 00:02:53.232 20:40:44 -- common/autotest_common.sh@1441 -- # kldunload contigmem.ko 00:02:53.232 kldunload: can't find file contigmem.ko 00:02:53.232 20:40:44 -- common/autotest_common.sh@1441 -- # true 00:02:53.232 20:40:44 -- common/autotest_common.sh@1442 -- # '[' -n '' ']' 00:02:53.232 20:40:44 -- common/autotest_common.sh@1448 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:02:53.232 20:40:44 -- common/autotest_common.sh@1449 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:02:53.232 20:40:44 -- common/autotest_common.sh@1450 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:02:53.232 20:40:44 -- common/autotest_common.sh@1451 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:02:53.232 20:40:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:53.232 20:40:44 -- common/autotest_common.sh@1460 -- # uname 00:02:53.232 20:40:44 -- common/autotest_common.sh@1460 -- # [[ FreeBSD = FreeBSD ]] 00:02:53.232 20:40:44 -- common/autotest_common.sh@1460 -- # sysctl -n kern.ipc.maxsockbuf 00:02:53.232 20:40:44 -- common/autotest_common.sh@1460 -- # (( 2097152 < 4194304 )) 00:02:53.232 20:40:44 -- 
common/autotest_common.sh@1461 -- # sysctl kern.ipc.maxsockbuf=4194304 00:02:53.232 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:02:53.232 20:40:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:53.232 20:40:44 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:02:53.232 20:40:44 -- spdk/autotest.sh@83 -- # hash lcov 00:02:53.232 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 83: hash: lcov: not found 00:02:53.232 20:40:44 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:53.232 20:40:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:53.232 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:02:53.232 20:40:44 -- spdk/autotest.sh@102 -- # rm -f 00:02:53.232 20:40:44 -- spdk/autotest.sh@105 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:02:53.490 kldunload: can't find file contigmem.ko 00:02:53.490 kldunload: can't find file nic_uio.ko 00:02:53.490 20:40:44 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:53.491 20:40:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:53.491 20:40:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:53.491 20:40:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:53.491 20:40:44 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:53.491 20:40:44 -- spdk/autotest.sh@121 -- # ls /dev/nvme0ns1 00:02:53.491 20:40:44 -- spdk/autotest.sh@121 -- # grep -v p 00:02:53.491 20:40:44 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:53.491 20:40:44 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:53.491 20:40:44 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0ns1 00:02:53.491 20:40:44 -- scripts/common.sh@380 -- # local block=/dev/nvme0ns1 pt 00:02:53.491 20:40:44 -- scripts/common.sh@389 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:02:53.491 nvme0ns1 is not a block device 00:02:53.491 20:40:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:02:53.491 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 393: blkid: command not found 00:02:53.491 20:40:44 -- scripts/common.sh@393 -- # pt= 00:02:53.491 20:40:44 -- scripts/common.sh@394 -- # return 1 00:02:53.491 20:40:44 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:02:53.491 1+0 records in 00:02:53.491 1+0 records out 00:02:53.491 1048576 bytes transferred in 0.007123 secs (147202981 bytes/sec) 00:02:53.491 20:40:44 -- spdk/autotest.sh@129 -- # sync 00:02:54.057 20:40:44 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:54.058 20:40:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:54.058 20:40:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:54.624 20:40:45 -- spdk/autotest.sh@135 -- # uname -s 00:02:54.624 20:40:45 -- spdk/autotest.sh@135 -- # '[' FreeBSD = Linux ']' 00:02:54.624 20:40:45 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:54.624 Contigmem (not present) 00:02:54.624 Buffer Size: not set 00:02:54.624 Num Buffers: not set 00:02:54.624 00:02:54.624 00:02:54.624 Type BDF Vendor Device Driver 00:02:54.624 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:02:54.624 20:40:45 -- spdk/autotest.sh@141 -- # uname -s 00:02:54.624 20:40:45 -- spdk/autotest.sh@141 -- # [[ FreeBSD == Linux ]] 00:02:54.624 20:40:45 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:02:54.624 20:40:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:54.624 20:40:45 -- common/autotest_common.sh@10 -- 
# set +x 00:02:54.624 20:40:45 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:02:54.624 20:40:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:54.624 20:40:45 -- common/autotest_common.sh@10 -- # set +x 00:02:54.624 20:40:45 -- spdk/autotest.sh@150 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:02:54.624 kldunload: can't find file nic_uio.ko 00:02:54.624 hw.nic_uio.bdfs="0:6:0" 00:02:54.886 hw.contigmem.num_buffers="8" 00:02:54.886 hw.contigmem.buffer_size="268435456" 00:02:55.144 20:40:46 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:02:55.144 20:40:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:02:55.144 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.403 20:40:46 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:02:55.403 20:40:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:02:55.403 20:40:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:02:55.403 20:40:46 -- common/autotest_common.sh@1562 -- # bdfs=() 00:02:55.403 20:40:46 -- common/autotest_common.sh@1562 -- # local bdfs 00:02:55.403 20:40:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:02:55.403 20:40:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:55.403 20:40:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:55.403 20:40:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:55.403 20:40:46 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:02:55.403 20:40:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:55.403 20:40:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:55.403 20:40:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:02:55.403 20:40:46 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:02:55.403 20:40:46 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:02:55.403 cat: /sys/bus/pci/devices/0000:00:06.0/device: No such file or directory 00:02:55.403 20:40:46 -- common/autotest_common.sh@1565 -- # device= 00:02:55.403 20:40:46 -- common/autotest_common.sh@1565 -- # true 00:02:55.403 20:40:46 -- common/autotest_common.sh@1566 -- # [[ '' == \0\x\0\a\5\4 ]] 00:02:55.403 20:40:46 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:02:55.403 20:40:46 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:02:55.403 20:40:46 -- common/autotest_common.sh@1578 -- # return 0 00:02:55.403 20:40:46 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:02:55.403 20:40:46 -- spdk/autotest.sh@162 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:55.403 20:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.403 20:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.403 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.403 ************************************ 00:02:55.403 START TEST unittest 00:02:55.403 ************************************ 00:02:55.403 20:40:46 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:55.403 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:55.403 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.403 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.403 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:02:55.403 ++ readlink -f 
/usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:02:55.403 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:55.403 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:02:55.403 ++ rpc_py=rpc_cmd 00:02:55.403 ++ set -e 00:02:55.403 ++ shopt -s nullglob 00:02:55.403 ++ shopt -s extglob 00:02:55.403 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:02:55.403 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:02:55.403 +++ CONFIG_WPDK_DIR= 00:02:55.403 +++ CONFIG_ASAN=n 00:02:55.403 +++ CONFIG_VBDEV_COMPRESS=n 00:02:55.403 +++ CONFIG_HAVE_EXECINFO_H=y 00:02:55.403 +++ CONFIG_USDT=n 00:02:55.403 +++ CONFIG_CUSTOMOCF=n 00:02:55.403 +++ CONFIG_PREFIX=/usr/local 00:02:55.403 +++ CONFIG_RBD=n 00:02:55.403 +++ CONFIG_LIBDIR= 00:02:55.403 +++ CONFIG_IDXD=y 00:02:55.403 +++ CONFIG_NVME_CUSE=n 00:02:55.403 +++ CONFIG_SMA=n 00:02:55.403 +++ CONFIG_VTUNE=n 00:02:55.403 +++ CONFIG_TSAN=n 00:02:55.403 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:02:55.403 +++ CONFIG_VFIO_USER_DIR= 00:02:55.403 +++ CONFIG_PGO_CAPTURE=n 00:02:55.403 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:02:55.403 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:55.403 +++ CONFIG_LTO=n 00:02:55.403 +++ CONFIG_ISCSI_INITIATOR=n 00:02:55.403 +++ CONFIG_CET=n 00:02:55.403 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:02:55.403 +++ CONFIG_OCF_PATH= 00:02:55.403 +++ CONFIG_RDMA_SET_TOS=y 00:02:55.403 +++ CONFIG_HAVE_ARC4RANDOM=y 00:02:55.403 +++ CONFIG_HAVE_LIBARCHIVE=n 00:02:55.403 +++ CONFIG_UBLK=n 00:02:55.403 +++ CONFIG_ISAL_CRYPTO=y 00:02:55.403 +++ CONFIG_OPENSSL_PATH= 00:02:55.403 +++ CONFIG_OCF=n 00:02:55.403 +++ CONFIG_FUSE=n 00:02:55.403 +++ CONFIG_VTUNE_DIR= 00:02:55.403 +++ CONFIG_FUZZER_LIB= 00:02:55.403 +++ CONFIG_FUZZER=n 00:02:55.403 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:55.403 +++ CONFIG_CRYPTO=n 00:02:55.403 +++ CONFIG_PGO_USE=n 00:02:55.403 +++ CONFIG_VHOST=n 00:02:55.403 +++ CONFIG_DAOS=n 00:02:55.403 +++ CONFIG_DPDK_INC_DIR= 00:02:55.403 +++ CONFIG_DAOS_DIR= 00:02:55.403 +++ CONFIG_UNIT_TESTS=y 00:02:55.403 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:02:55.403 +++ CONFIG_VIRTIO=n 00:02:55.403 +++ CONFIG_COVERAGE=n 00:02:55.403 +++ CONFIG_RDMA=y 00:02:55.403 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:02:55.403 +++ CONFIG_URING_PATH= 00:02:55.403 +++ CONFIG_XNVME=n 00:02:55.403 +++ CONFIG_VFIO_USER=n 00:02:55.403 +++ CONFIG_ARCH=native 00:02:55.403 +++ CONFIG_URING_ZNS=n 00:02:55.403 +++ CONFIG_WERROR=y 00:02:55.403 +++ CONFIG_HAVE_LIBBSD=n 00:02:55.403 +++ CONFIG_UBSAN=n 00:02:55.403 +++ CONFIG_IPSEC_MB_DIR= 00:02:55.403 +++ CONFIG_GOLANG=n 00:02:55.403 +++ CONFIG_ISAL=y 00:02:55.403 +++ CONFIG_IDXD_KERNEL=n 00:02:55.403 +++ CONFIG_DPDK_LIB_DIR= 00:02:55.403 +++ CONFIG_RDMA_PROV=verbs 00:02:55.403 +++ CONFIG_APPS=y 00:02:55.403 +++ CONFIG_SHARED=n 00:02:55.403 +++ CONFIG_FC_PATH= 00:02:55.403 +++ CONFIG_DPDK_PKG_CONFIG=n 00:02:55.403 +++ CONFIG_FC=n 00:02:55.403 +++ CONFIG_AVAHI=n 00:02:55.403 +++ CONFIG_FIO_PLUGIN=y 00:02:55.403 +++ CONFIG_RAID5F=n 00:02:55.403 +++ CONFIG_EXAMPLES=y 00:02:55.403 +++ CONFIG_TESTS=y 00:02:55.403 +++ CONFIG_CRYPTO_MLX5=n 00:02:55.403 +++ CONFIG_MAX_LCORES= 00:02:55.403 +++ CONFIG_IPSEC_MB=n 00:02:55.403 +++ CONFIG_DEBUG=y 00:02:55.403 +++ CONFIG_DPDK_COMPRESSDEV=n 00:02:55.403 +++ CONFIG_CROSS_PREFIX= 00:02:55.403 +++ CONFIG_URING=n 00:02:55.403 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:02:55.403 +++++ dirname 
/usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:02:55.403 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:02:55.403 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:02:55.403 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:02:55.403 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:55.403 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:02:55.403 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:55.403 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:02:55.403 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:02:55.403 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:02:55.403 +++ VHOST_APP=("$_app_dir/vhost") 00:02:55.403 +++ DD_APP=("$_app_dir/spdk_dd") 00:02:55.403 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:02:55.403 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:02:55.403 +++ [[ #ifndef SPDK_CONFIG_H 00:02:55.403 #define SPDK_CONFIG_H 00:02:55.403 #define SPDK_CONFIG_APPS 1 00:02:55.403 #define SPDK_CONFIG_ARCH native 00:02:55.403 #undef SPDK_CONFIG_ASAN 00:02:55.403 #undef SPDK_CONFIG_AVAHI 00:02:55.403 #undef SPDK_CONFIG_CET 00:02:55.403 #undef SPDK_CONFIG_COVERAGE 00:02:55.403 #define SPDK_CONFIG_CROSS_PREFIX 00:02:55.404 #undef SPDK_CONFIG_CRYPTO 00:02:55.404 #undef SPDK_CONFIG_CRYPTO_MLX5 00:02:55.404 #undef SPDK_CONFIG_CUSTOMOCF 00:02:55.404 #undef SPDK_CONFIG_DAOS 00:02:55.404 #define SPDK_CONFIG_DAOS_DIR 00:02:55.404 #define SPDK_CONFIG_DEBUG 1 00:02:55.404 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:02:55.404 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:55.404 #define SPDK_CONFIG_DPDK_INC_DIR 00:02:55.404 #define SPDK_CONFIG_DPDK_LIB_DIR 00:02:55.404 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:02:55.404 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:55.404 #define SPDK_CONFIG_EXAMPLES 1 00:02:55.404 #undef SPDK_CONFIG_FC 00:02:55.404 #define SPDK_CONFIG_FC_PATH 00:02:55.404 #define SPDK_CONFIG_FIO_PLUGIN 1 00:02:55.404 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:02:55.404 #undef SPDK_CONFIG_FUSE 00:02:55.404 #undef SPDK_CONFIG_FUZZER 00:02:55.404 #define SPDK_CONFIG_FUZZER_LIB 00:02:55.404 #undef SPDK_CONFIG_GOLANG 00:02:55.404 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:02:55.404 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:02:55.404 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:02:55.404 #undef SPDK_CONFIG_HAVE_LIBBSD 00:02:55.404 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:02:55.404 #define SPDK_CONFIG_IDXD 1 00:02:55.404 #undef SPDK_CONFIG_IDXD_KERNEL 00:02:55.404 #undef SPDK_CONFIG_IPSEC_MB 00:02:55.404 #define SPDK_CONFIG_IPSEC_MB_DIR 00:02:55.404 #define SPDK_CONFIG_ISAL 1 00:02:55.404 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:02:55.404 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:02:55.404 #define SPDK_CONFIG_LIBDIR 00:02:55.404 #undef SPDK_CONFIG_LTO 00:02:55.404 #define SPDK_CONFIG_MAX_LCORES 00:02:55.404 #undef SPDK_CONFIG_NVME_CUSE 00:02:55.404 #undef SPDK_CONFIG_OCF 00:02:55.404 #define SPDK_CONFIG_OCF_PATH 00:02:55.404 #define SPDK_CONFIG_OPENSSL_PATH 00:02:55.404 #undef SPDK_CONFIG_PGO_CAPTURE 00:02:55.404 #undef SPDK_CONFIG_PGO_USE 00:02:55.404 #define SPDK_CONFIG_PREFIX /usr/local 00:02:55.404 #undef SPDK_CONFIG_RAID5F 00:02:55.404 #undef SPDK_CONFIG_RBD 00:02:55.404 #define SPDK_CONFIG_RDMA 1 00:02:55.404 #define SPDK_CONFIG_RDMA_PROV verbs 00:02:55.404 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:02:55.404 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:02:55.404 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:02:55.404 #undef 
SPDK_CONFIG_SHARED 00:02:55.404 #undef SPDK_CONFIG_SMA 00:02:55.404 #define SPDK_CONFIG_TESTS 1 00:02:55.404 #undef SPDK_CONFIG_TSAN 00:02:55.404 #undef SPDK_CONFIG_UBLK 00:02:55.404 #undef SPDK_CONFIG_UBSAN 00:02:55.404 #define SPDK_CONFIG_UNIT_TESTS 1 00:02:55.404 #undef SPDK_CONFIG_URING 00:02:55.404 #define SPDK_CONFIG_URING_PATH 00:02:55.404 #undef SPDK_CONFIG_URING_ZNS 00:02:55.404 #undef SPDK_CONFIG_USDT 00:02:55.404 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:02:55.404 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:02:55.404 #undef SPDK_CONFIG_VFIO_USER 00:02:55.404 #define SPDK_CONFIG_VFIO_USER_DIR 00:02:55.404 #undef SPDK_CONFIG_VHOST 00:02:55.404 #undef SPDK_CONFIG_VIRTIO 00:02:55.404 #undef SPDK_CONFIG_VTUNE 00:02:55.404 #define SPDK_CONFIG_VTUNE_DIR 00:02:55.404 #define SPDK_CONFIG_WERROR 1 00:02:55.404 #define SPDK_CONFIG_WPDK_DIR 00:02:55.404 #undef SPDK_CONFIG_XNVME 00:02:55.404 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:02:55.404 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:02:55.404 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:55.404 +++ [[ -e /bin/wpdk_common.sh ]] 00:02:55.404 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.404 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.404 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:55.404 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:55.404 ++++ export PATH 00:02:55.404 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:55.404 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:02:55.404 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:02:55.404 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:02:55.404 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:02:55.404 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:02:55.404 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:02:55.404 +++ TEST_TAG=N/A 00:02:55.404 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:02:55.404 ++ : 1 00:02:55.404 ++ export RUN_NIGHTLY 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_RUN_VALGRIND 00:02:55.404 ++ : 1 00:02:55.404 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:02:55.404 ++ : 1 00:02:55.404 ++ export SPDK_TEST_UNITTEST 00:02:55.404 ++ : 00:02:55.404 ++ export SPDK_TEST_AUTOBUILD 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_RELEASE_BUILD 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ISAL 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ISCSI 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ISCSI_INITIATOR 00:02:55.404 ++ : 1 00:02:55.404 ++ export SPDK_TEST_NVME 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVME_PMR 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVME_BP 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVME_CLI 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVME_CUSE 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVME_FDP 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVMF 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_VFIOUSER 00:02:55.404 ++ 
: 0 00:02:55.404 ++ export SPDK_TEST_VFIOUSER_QEMU 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_FUZZER 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_FUZZER_SHORT 00:02:55.404 ++ : rdma 00:02:55.404 ++ export SPDK_TEST_NVMF_TRANSPORT 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_RBD 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_VHOST 00:02:55.404 ++ : 1 00:02:55.404 ++ export SPDK_TEST_BLOCKDEV 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_IOAT 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_BLOBFS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_VHOST_INIT 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_LVOL 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_VBDEV_COMPRESS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_RUN_ASAN 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_RUN_UBSAN 00:02:55.404 ++ : 00:02:55.404 ++ export SPDK_RUN_EXTERNAL_DPDK 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_RUN_NON_ROOT 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_CRYPTO 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_FTL 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_OCF 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_VMD 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_OPAL 00:02:55.404 ++ : 00:02:55.404 ++ export SPDK_TEST_NATIVE_DPDK 00:02:55.404 ++ : true 00:02:55.404 ++ export SPDK_AUTOTEST_X 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_RAID5 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_URING 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_USDT 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_USE_IGB_UIO 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_SCHEDULER 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_SCANBUILD 00:02:55.404 ++ : 00:02:55.404 ++ export SPDK_TEST_NVMF_NICS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_SMA 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_DAOS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_XNVME 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ACCEL_DSA 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ACCEL_IAA 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_ACCEL_IOAT 00:02:55.404 ++ : 00:02:55.404 ++ export SPDK_TEST_FUZZER_TARGET 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_TEST_NVMF_MDNS 00:02:55.404 ++ : 0 00:02:55.404 ++ export SPDK_JSONRPC_GO_CLIENT 00:02:55.404 ++ export SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:02:55.404 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:02:55.404 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:02:55.404 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:02:55.404 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:55.404 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:55.404 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:55.404 ++ 
LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:02:55.404 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:02:55.404 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:02:55.404 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:02:55.404 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:02:55.404 ++ export PYTHONDONTWRITEBYTECODE=1 00:02:55.404 ++ PYTHONDONTWRITEBYTECODE=1 00:02:55.404 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:02:55.404 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:02:55.404 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:02:55.405 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:02:55.405 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:02:55.405 ++ rm -rf /var/tmp/asan_suppression_file 00:02:55.405 ++ cat 00:02:55.405 ++ echo leak:libfuse3.so 00:02:55.405 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:02:55.405 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:02:55.405 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:02:55.405 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:02:55.405 ++ '[' -z /var/spdk/dependencies ']' 00:02:55.405 ++ export DEPENDENCY_DIR 00:02:55.405 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:55.405 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:02:55.405 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:55.405 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:02:55.405 ++ export QEMU_BIN= 00:02:55.405 ++ QEMU_BIN= 00:02:55.405 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:55.405 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:55.405 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:02:55.405 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:02:55.405 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:55.405 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:55.405 ++ '[' 0 -eq 0 ']' 00:02:55.405 ++ export valgrind= 00:02:55.405 ++ valgrind= 00:02:55.405 +++ uname -s 00:02:55.405 ++ '[' FreeBSD = Linux ']' 00:02:55.405 +++ uname -s 00:02:55.405 ++ '[' FreeBSD = FreeBSD ']' 00:02:55.405 ++ MAKE=gmake 00:02:55.405 +++ sysctl -a 00:02:55.405 +++ grep -E -i hw.ncpu 00:02:55.405 +++ awk '{print $2}' 00:02:55.405 ++ MAKEFLAGS=-j10 00:02:55.405 ++ HUGEMEM=2048 00:02:55.405 ++ export HUGEMEM=2048 00:02:55.405 ++ HUGEMEM=2048 00:02:55.405 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:02:55.405 ++ NO_HUGE=() 00:02:55.405 ++ TEST_MODE= 00:02:55.405 ++ [[ -z '' ]] 00:02:55.405 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:02:55.405 ++ exec 00:02:55.405 ++ 
PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:02:55.405 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:02:55.405 ++ set_test_storage 2147483648 00:02:55.405 ++ [[ -v testdir ]] 00:02:55.405 ++ local requested_size=2147483648 00:02:55.405 ++ local mount target_dir 00:02:55.405 ++ local -A mounts fss sizes avails uses 00:02:55.405 ++ local source fs size avail mount use 00:02:55.405 ++ local storage_fallback storage_candidates 00:02:55.405 +++ mktemp -udt spdk.XXXXXX 00:02:55.405 ++ storage_fallback=/tmp/spdk.XXXXXX.YPTJO0j8 00:02:55.405 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:02:55.405 ++ [[ -n '' ]] 00:02:55.405 ++ [[ -n '' ]] 00:02:55.405 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.YPTJO0j8/tests/unit /tmp/spdk.XXXXXX.YPTJO0j8 00:02:55.405 ++ requested_size=2214592512 00:02:55.405 ++ read -r source fs size use avail _ mount 00:02:55.405 +++ df -T 00:02:55.405 +++ grep -v Filesystem 00:02:55.405 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:02:55.405 ++ fss["$mount"]=ufs 00:02:55.405 ++ avails["$mount"]=17248370688 00:02:55.405 ++ sizes["$mount"]=31182712832 00:02:55.405 ++ uses["$mount"]=11439726592 00:02:55.405 ++ read -r source fs size use avail _ mount 00:02:55.405 ++ mounts["$mount"]=devfs 00:02:55.405 ++ fss["$mount"]=devfs 00:02:55.405 ++ avails["$mount"]=0 00:02:55.405 ++ sizes["$mount"]=1024 00:02:55.405 ++ uses["$mount"]=1024 00:02:55.405 ++ read -r source fs size use avail _ mount 00:02:55.405 ++ mounts["$mount"]=tmpfs 00:02:55.405 ++ fss["$mount"]=tmpfs 00:02:55.405 ++ avails["$mount"]=2147463168 00:02:55.405 ++ sizes["$mount"]=2147483648 00:02:55.405 ++ uses["$mount"]=20480 00:02:55.405 ++ read -r source fs size use avail _ mount 00:02:55.405 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:02:55.405 ++ fss["$mount"]=fusefs.sshfs 00:02:55.405 ++ avails["$mount"]=96816209920 00:02:55.405 ++ sizes["$mount"]=105088212992 00:02:55.405 ++ uses["$mount"]=2886569984 00:02:55.405 ++ read -r source fs size use avail _ mount 00:02:55.405 ++ printf '* Looking for test storage...\n' 00:02:55.405 * Looking for test storage... 
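
The df -T snapshot above is what drives the candidate walk in the trace that follows: set_test_storage pads the requested 2 GiB with 64 MiB of headroom (2147483648 + 67108864 = 2214592512 bytes), skips any mount with too little free space, and rejects any mount that the test data would push past 95% utilization. A condensed, runnable sketch of that selection logic, using the byte counts recorded in this run (names and constants mirror the trace; this is an illustration reconstructed from the xtrace, not the verbatim autotest_common.sh implementation):

    #!/usr/bin/env bash
    # Sketch of set_test_storage's candidate selection, reconstructed
    # from the xtrace in this log (not the upstream implementation).
    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + 64 MiB slack = 2214592512

    declare -A avails sizes uses                        # per-mount byte counts from `df` above
    avails[/]=17248370688 sizes[/]=31182712832 uses[/]=11439726592   # the UFS root in this run

    for mount in "${!avails[@]}"; do
        # Reject mounts that cannot hold the requested size at all.
        (( avails[$mount] == 0 || avails[$mount] < requested_size )) && continue
        # Reject mounts the test data would push past 95% full. Here:
        # 11439726592 + 2214592512 = 13654319104, about 44% of 31182712832, so / passes.
        new_size=$(( uses[$mount] + requested_size ))
        (( new_size * 100 / sizes[$mount] > 95 )) && continue
        printf '* Found test storage at %s\n' "$mount"
        break
    done
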
00:02:55.405 ++ local target_space new_size 00:02:55.405 ++ for target_dir in "${storage_candidates[@]}" 00:02:55.405 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.405 +++ awk '$1 !~ /Filesystem/{print $6}' 00:02:55.405 ++ mount=/ 00:02:55.405 ++ target_space=17248370688 00:02:55.405 ++ (( target_space == 0 || target_space < requested_size )) 00:02:55.405 ++ (( target_space >= requested_size )) 00:02:55.405 ++ [[ ufs == tmpfs ]] 00:02:55.405 ++ [[ ufs == ramfs ]] 00:02:55.405 ++ [[ / == / ]] 00:02:55.405 ++ new_size=13654319104 00:02:55.405 ++ (( new_size * 100 / sizes[/] > 95 )) 00:02:55.405 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.405 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.405 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.405 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:02:55.405 ++ return 0 00:02:55.405 ++ set -o errtrace 00:02:55.405 ++ shopt -s extdebug 00:02:55.405 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:02:55.405 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:02:55.405 20:40:46 -- common/autotest_common.sh@1672 -- # true 00:02:55.405 20:40:46 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:02:55.405 20:40:46 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:02:55.405 20:40:46 -- common/autotest_common.sh@29 -- # exec 00:02:55.405 20:40:46 -- common/autotest_common.sh@31 -- # xtrace_restore 00:02:55.405 20:40:46 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:02:55.405 20:40:46 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:02:55.405 20:40:46 -- common/autotest_common.sh@18 -- # set -x 00:02:55.405 20:40:46 -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:02:55.405 20:40:46 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:02:55.405 20:40:46 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:02:55.405 20:40:46 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:02:55.405 20:40:46 -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:02:55.405 20:40:46 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:02:55.405 20:40:46 -- unit/unittest.sh@179 -- # hash lcov 00:02:55.405 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:02:55.405 20:40:46 -- unit/unittest.sh@182 -- # cov_avail=no 00:02:55.405 20:40:46 -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:02:55.405 20:40:46 -- unit/unittest.sh@206 -- # uname -m 00:02:55.664 20:40:46 -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:02:55.664 20:40:46 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:02:55.664 20:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.664 20:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.664 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.664 ************************************ 00:02:55.664 START TEST unittest_pci_event 00:02:55.664 ************************************ 00:02:55.664 20:40:46 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:02:55.664 00:02:55.664 00:02:55.664 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.664 http://cunit.sourceforge.net/ 00:02:55.664 00:02:55.664 00:02:55.664 Suite: pci_event 00:02:55.664 Test: test_pci_parse_event ...passed 
00:02:55.664 00:02:55.664 Run Summary: Type Total Ran Passed Failed Inactive 00:02:55.664 suites 1 1 n/a 0 0 00:02:55.664 tests 1 1 1 0 0 00:02:55.664 asserts 1 1 1 0 n/a 00:02:55.664 00:02:55.664 Elapsed time = 0.000 seconds 00:02:55.664 00:02:55.664 real 0m0.029s 00:02:55.664 user 0m0.012s 00:02:55.664 sys 0m0.003s 00:02:55.664 20:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.664 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.665 ************************************ 00:02:55.665 END TEST unittest_pci_event 00:02:55.665 ************************************ 00:02:55.665 20:40:46 -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:02:55.665 20:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.665 20:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.665 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.665 ************************************ 00:02:55.665 START TEST unittest_include 00:02:55.665 ************************************ 00:02:55.665 20:40:46 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:02:55.665 00:02:55.665 00:02:55.665 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.665 http://cunit.sourceforge.net/ 00:02:55.665 00:02:55.665 00:02:55.665 Suite: histogram 00:02:55.665 Test: histogram_test ...passed 00:02:55.665 Test: histogram_merge ...passed 00:02:55.665 00:02:55.665 Run Summary: Type Total Ran Passed Failed Inactive 00:02:55.665 suites 1 1 n/a 0 0 00:02:55.665 tests 2 2 2 0 0 00:02:55.665 asserts 50 50 50 0 n/a 00:02:55.665 00:02:55.665 Elapsed time = 0.008 seconds 00:02:55.665 00:02:55.665 real 0m0.010s 00:02:55.665 user 0m0.001s 00:02:55.665 sys 0m0.012s 00:02:55.665 20:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.665 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.665 ************************************ 00:02:55.665 END TEST unittest_include 00:02:55.665 ************************************ 00:02:55.665 20:40:46 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:02:55.665 20:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.665 20:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.665 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:02:55.665 ************************************ 00:02:55.665 START TEST unittest_bdev 00:02:55.665 ************************************ 00:02:55.665 20:40:46 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:02:55.665 20:40:46 -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:02:55.665 00:02:55.665 00:02:55.665 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.665 http://cunit.sourceforge.net/ 00:02:55.665 00:02:55.665 00:02:55.665 Suite: bdev 00:02:55.665 Test: bytes_to_blocks_test ...passed 00:02:55.665 Test: num_blocks_test ...passed 00:02:55.665 Test: io_valid_test ...passed 00:02:55.665 Test: open_write_test ...[2024-04-16 20:40:46.682360] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.682662] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.682686] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 passed 00:02:55.665 Test: claim_test ...passed 00:02:55.665 Test: alias_add_del_test ...[2024-04-16 20:40:46.686529] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:02:55.665 [2024-04-16 20:40:46.686570] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:02:55.665 [2024-04-16 20:40:46.686586] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:02:55.665 passed 00:02:55.665 Test: get_device_stat_test ...passed 00:02:55.665 Test: bdev_io_types_test ...passed 00:02:55.665 Test: bdev_io_wait_test ...passed 00:02:55.665 Test: bdev_io_spans_split_test ...passed 00:02:55.665 Test: bdev_io_boundary_split_test ...passed 00:02:55.665 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-16 20:40:46.694278] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:02:55.665 passed 00:02:55.665 Test: bdev_io_mix_split_test ...passed 00:02:55.665 Test: bdev_io_split_with_io_wait ...passed 00:02:55.665 Test: bdev_io_write_unit_split_test ...[2024-04-16 20:40:46.698528] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:02:55.665 [2024-04-16 20:40:46.698568] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:02:55.665 [2024-04-16 20:40:46.698579] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:02:55.665 [2024-04-16 20:40:46.698593] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:02:55.665 passed 00:02:55.665 Test: bdev_io_alignment_with_boundary ...passed 00:02:55.665 Test: bdev_io_alignment ...passed 00:02:55.665 Test: bdev_histograms ...passed 00:02:55.665 Test: bdev_write_zeroes ...passed 00:02:55.665 Test: bdev_compare_and_write ...passed 00:02:55.665 Test: bdev_compare ...passed 00:02:55.665 Test: bdev_compare_emulated ...passed 00:02:55.665 Test: bdev_zcopy_write ...passed 00:02:55.665 Test: bdev_zcopy_read ...passed 00:02:55.665 Test: bdev_open_while_hotremove ...passed 00:02:55.665 Test: bdev_close_while_hotremove ...passed 00:02:55.665 Test: bdev_open_ext_test ...[2024-04-16 20:40:46.710201] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:02:55.665 passed 00:02:55.665 Test: bdev_open_ext_unregister ...passed 00:02:55.665 Test: bdev_set_io_timeout ...[2024-04-16 20:40:46.710225] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:02:55.665 passed 00:02:55.665 Test: bdev_set_qd_sampling ...passed 00:02:55.665 Test: lba_range_overlap ...passed 00:02:55.665 Test: lock_lba_range_check_ranges ...passed 00:02:55.665 Test: lock_lba_range_with_io_outstanding ...passed 00:02:55.665 Test: lock_lba_range_overlapped ...passed 00:02:55.665 Test: bdev_quiesce ...[2024-04-16 20:40:46.715415] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:02:55.665 passed 00:02:55.665 Test: bdev_io_abort ...passed 00:02:55.665 Test: bdev_unmap ...passed 00:02:55.665 Test: bdev_write_zeroes_split_test ...passed 00:02:55.665 Test: bdev_set_options_test ...passed 00:02:55.665 Test: bdev_get_memory_domains ...passed 00:02:55.665 Test: bdev_io_ext ...[2024-04-16 20:40:46.718492] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:02:55.665 passed 00:02:55.665 Test: bdev_io_ext_no_opts ...passed 00:02:55.665 Test: bdev_io_ext_invalid_opts ...passed 00:02:55.665 Test: bdev_io_ext_split ...passed 00:02:55.665 Test: bdev_io_ext_bounce_buffer ...passed 00:02:55.665 Test: bdev_register_uuid_alias ...[2024-04-16 20:40:46.723723] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 9a4a8942-fc31-11ee-80f8-ef3e42bb1492 already exists 00:02:55.665 [2024-04-16 20:40:46.723745] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:9a4a8942-fc31-11ee-80f8-ef3e42bb1492 alias for bdev bdev0 00:02:55.665 passed 00:02:55.665 Test: bdev_unregister_by_name ...passed 00:02:55.665 Test: for_each_bdev_test ...[2024-04-16 20:40:46.724001] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:02:55.665 [2024-04-16 20:40:46.724009] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7840:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:02:55.665 passed 00:02:55.665 Test: bdev_seek_test ...passed 00:02:55.665 Test: bdev_copy ...passed 00:02:55.665 Test: bdev_copy_split_test ...passed 00:02:55.665 Test: examine_locks ...passed 00:02:55.665 Test: claim_v2_rwo ...[2024-04-16 20:40:46.726891] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726903] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726908] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 passed 00:02:55.665 Test: claim_v2_rom ...[2024-04-16 20:40:46.726930] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726936] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726949] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8561:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:02:55.665 [2024-04-16 20:40:46.726967] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726974] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726980] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 
already claimed: type read_many_write_none by module bdev_ut 00:02:55.665 passed 00:02:55.665 Test: claim_v2_rwm ...[2024-04-16 20:40:46.726985] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.726993] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:02:55.665 [2024-04-16 20:40:46.726999] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:55.665 [2024-04-16 20:40:46.727013] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:02:55.665 [2024-04-16 20:40:46.727020] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727026] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727032] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727037] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:55.665 passed 00:02:55.665 Test: claim_v2_existing_writer ...passed 00:02:55.665 Test: claim_v2_existing_v1 ...[2024-04-16 20:40:46.727043] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727054] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:02:55.665 [2024-04-16 20:40:46.727069] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:55.665 [2024-04-16 20:40:46.727075] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:02:55.665 passed 00:02:55.665 Test: claim_v1_existing_v2 ...passed 00:02:55.665 Test: examine_claimed ...[2024-04-16 20:40:46.727088] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727095] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727101] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727115] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727121] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:02:55.665 [2024-04-16 20:40:46.727128] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:02:55.665 passed 00:02:55.665 00:02:55.665 Run Summary: Type Total Ran Passed Failed Inactive 00:02:55.665 suites 1 1 n/a 0 0 00:02:55.665 tests 59 59 59 0 0 00:02:55.665 asserts 4599 4599 4599 0 n/a 00:02:55.665 00:02:55.665 Elapsed time = 0.055 seconds 00:02:55.665 [2024-04-16 20:40:46.727154] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:02:55.665 20:40:46 -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:02:55.665 00:02:55.665 00:02:55.665 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.665 http://cunit.sourceforge.net/ 00:02:55.665 00:02:55.665 00:02:55.665 Suite: nvme 00:02:55.665 Test: test_create_ctrlr ...passed 00:02:55.665 Test: test_reset_ctrlr ...[2024-04-16 20:40:46.734287] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 passed 00:02:55.665 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:02:55.665 Test: test_failover_ctrlr ...passed 00:02:55.665 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-16 20:40:46.734649] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 [2024-04-16 20:40:46.734677] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 [2024-04-16 20:40:46.734698] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 passed 00:02:55.665 Test: test_pending_reset ...[2024-04-16 20:40:46.734838] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 [2024-04-16 20:40:46.734875] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 passed 00:02:55.665 Test: test_attach_ctrlr ...[2024-04-16 20:40:46.734941] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4186:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:02:55.665 passed 00:02:55.665 Test: test_aer_cb ...passed 00:02:55.665 Test: test_submit_nvme_cmd ...passed 00:02:55.665 Test: test_add_remove_trid ...passed 00:02:55.665 Test: test_abort ...[2024-04-16 20:40:46.735171] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7171:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 
00:02:55.665 passed 00:02:55.665 Test: test_get_io_qpair ...passed 00:02:55.665 Test: test_bdev_unregister ...passed 00:02:55.665 Test: test_compare_ns ...passed 00:02:55.665 Test: test_init_ana_log_page ...passed 00:02:55.665 Test: test_get_memory_domains ...passed 00:02:55.665 Test: test_reconnect_qpair ...[2024-04-16 20:40:46.735399] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.665 passed 00:02:55.665 Test: test_create_bdev_ctrlr ...[2024-04-16 20:40:46.735445] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5223:bdev_nvme_check_multipath: *ERROR*: cntlid 18 is duplicated. 00:02:55.665 passed 00:02:55.665 Test: test_add_multi_ns_to_bdev ...[2024-04-16 20:40:46.735568] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4442:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:02:55.665 passed 00:02:55.665 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:02:55.665 Test: test_admin_path ...passed 00:02:55.665 Test: test_reset_bdev_ctrlr ...passed 00:02:55.665 Test: test_find_io_path ...passed 00:02:55.665 Test: test_retry_io_if_ana_state_is_updating ...passed 00:02:55.665 Test: test_retry_io_for_io_path_error ...passed 00:02:55.665 Test: test_retry_io_count ...passed 00:02:55.665 Test: test_concurrent_read_ana_log_page ...passed 00:02:55.665 Test: test_retry_io_for_ana_error ...passed 00:02:55.665 Test: test_check_io_error_resiliency_params ...[2024-04-16 20:40:46.736141] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5876:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:02:55.665 [2024-04-16 20:40:46.736161] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5880:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:02:55.665 [2024-04-16 20:40:46.736172] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5889:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:02:55.665 [2024-04-16 20:40:46.736182] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5892:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:02:55.665 [2024-04-16 20:40:46.736192] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5904:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:02:55.666 [2024-04-16 20:40:46.736202] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5904:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:02:55.666 passed 00:02:55.666 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-04-16 20:40:46.736212] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5884:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:02:55.666 [2024-04-16 20:40:46.736222] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5899:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:02:55.666 [2024-04-16 20:40:46.736232] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5896:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:02:55.666 passed 00:02:55.666 Test: test_reconnect_ctrlr ...[2024-04-16 20:40:46.736303] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736323] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736366] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736383] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736400] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_retry_failover_ctrlr ...[2024-04-16 20:40:46.736444] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_fail_path ...[2024-04-16 20:40:46.736498] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736517] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736537] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 [2024-04-16 20:40:46.736553] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_nvme_ns_cmp ...passed 00:02:55.666 Test: test_ana_transition ...passed 00:02:55.666 Test: test_set_preferred_path ...[2024-04-16 20:40:46.736569] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_find_next_io_path ...passed 00:02:55.666 Test: test_find_io_path_min_qd ...passed 00:02:55.666 Test: test_disable_auto_failback ...[2024-04-16 20:40:46.736718] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_set_multipath_policy ...passed 00:02:55.666 Test: test_uuid_generation ...passed 00:02:55.666 Test: test_retry_io_to_same_path ...passed 00:02:55.666 Test: test_race_between_reset_and_disconnected ...passed 00:02:55.666 Test: test_ctrlr_op_rpc ...passed 00:02:55.666 Test: test_bdev_ctrlr_op_rpc ...passed 00:02:55.666 Test: test_disable_enable_ctrlr ...[2024-04-16 20:40:46.774449] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:02:55.666 [2024-04-16 20:40:46.774551] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:02:55.666 passed 00:02:55.666 Test: test_delete_ctrlr_done ...passed 00:02:55.666 00:02:55.666 Run Summary: Type Total Ran Passed Failed Inactive 00:02:55.666 suites 1 1 n/a 0 0 00:02:55.666 tests 47 47 47 0 0 00:02:55.666 asserts 3527 3527 3527 0 n/a 00:02:55.666 00:02:55.666 Elapsed time = 0.016 seconds 00:02:55.666 20:40:46 -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:02:55.924 Test Options 00:02:55.924 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:02:55.924 00:02:55.924 00:02:55.924 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.924 http://cunit.sourceforge.net/ 00:02:55.924 00:02:55.924 00:02:55.924 Suite: raid 00:02:55.924 Test: test_create_raid ...passed 00:02:55.924 Test: test_create_raid_superblock ...passed 00:02:55.924 Test: test_delete_raid ...passed 00:02:55.924 Test: test_create_raid_invalid_args ...[2024-04-16 20:40:46.791119] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:02:55.924 [2024-04-16 20:40:46.791552] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:02:55.924 [2024-04-16 20:40:46.791710] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:02:55.924 [2024-04-16 20:40:46.791801] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:02:55.924 [2024-04-16 20:40:46.791997] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:02:55.924 passed 00:02:55.924 Test: test_delete_raid_invalid_args ...passed 00:02:55.924 Test: test_io_channel ...passed 00:02:55.924 Test: test_reset_io ...passed 00:02:55.924 Test: test_write_io ...passed 00:02:55.924 Test: test_read_io ...passed 00:02:56.492 Test: test_unmap_io ...passed 00:02:56.492 Test: test_io_failure ...passed 00:02:56.492 Test: test_multi_raid_no_io ...[2024-04-16 20:40:47.500953] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:02:56.492 passed 00:02:56.492 Test: test_multi_raid_with_io ...passed 00:02:56.492 Test: test_io_type_supported ...passed 00:02:56.492 Test: test_raid_json_dump_info ...passed 00:02:56.492 Test: test_context_size ...passed 00:02:56.492 Test: test_raid_level_conversions ...passed 00:02:56.492 Test: test_raid_process ...passed 00:02:56.492 Test: test_raid_io_split ...passed 00:02:56.492 00:02:56.492 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.492 suites 1 1 n/a 0 0 00:02:56.492 tests 19 19 19 0 0 00:02:56.492 asserts 177879 177879 177879 0 n/a 00:02:56.492 00:02:56.492 Elapsed time = 0.703 seconds 00:02:56.492 20:40:47 -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:02:56.492 00:02:56.492 00:02:56.492 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.492 http://cunit.sourceforge.net/ 00:02:56.492 00:02:56.492 00:02:56.492 Suite: raid_sb 00:02:56.492 Test: 
test_raid_bdev_write_superblock ...passed 00:02:56.492 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:02:56.492 Test: test_raid_bdev_parse_superblock ...[2024-04-16 20:40:47.514320] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 121:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:02:56.492 passed 00:02:56.492 00:02:56.492 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.492 suites 1 1 n/a 0 0 00:02:56.492 tests 3 3 3 0 0 00:02:56.492 asserts 32 32 32 0 n/a 00:02:56.492 00:02:56.492 Elapsed time = 0.000 seconds 00:02:56.492 20:40:47 -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:02:56.492 00:02:56.492 00:02:56.492 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.492 http://cunit.sourceforge.net/ 00:02:56.492 00:02:56.492 00:02:56.492 Suite: concat 00:02:56.492 Test: test_concat_start ...passed 00:02:56.492 Test: test_concat_rw ...passed 00:02:56.492 Test: test_concat_null_payload ...passed 00:02:56.492 00:02:56.492 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.492 suites 1 1 n/a 0 0 00:02:56.492 tests 3 3 3 0 0 00:02:56.492 asserts 8097 8097 8097 0 n/a 00:02:56.492 00:02:56.492 Elapsed time = 0.000 seconds 00:02:56.492 20:40:47 -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:02:56.492 00:02:56.492 00:02:56.492 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.492 http://cunit.sourceforge.net/ 00:02:56.492 00:02:56.492 00:02:56.492 Suite: raid1 00:02:56.492 Test: test_raid1_start ...passed 00:02:56.492 Test: test_raid1_read_balancing ...passed 00:02:56.492 00:02:56.492 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.492 suites 1 1 n/a 0 0 00:02:56.492 tests 2 2 2 0 0 00:02:56.492 asserts 2856 2856 2856 0 n/a 00:02:56.492 00:02:56.492 Elapsed time = 0.000 seconds 00:02:56.492 20:40:47 -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:02:56.492 00:02:56.492 00:02:56.492 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.492 http://cunit.sourceforge.net/ 00:02:56.492 00:02:56.492 00:02:56.492 Suite: zone 00:02:56.492 Test: test_zone_get_operation ...passed 00:02:56.492 Test: test_bdev_zone_get_info ...passed 00:02:56.492 Test: test_bdev_zone_management ...passed 00:02:56.492 Test: test_bdev_zone_append ...passed 00:02:56.492 Test: test_bdev_zone_append_with_md ...passed 00:02:56.492 Test: test_bdev_zone_appendv ...passed 00:02:56.492 Test: test_bdev_zone_appendv_with_md ...passed 00:02:56.492 Test: test_bdev_io_get_append_location ...passed 00:02:56.492 00:02:56.492 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.492 suites 1 1 n/a 0 0 00:02:56.492 tests 8 8 8 0 0 00:02:56.492 asserts 94 94 94 0 n/a 00:02:56.492 00:02:56.492 Elapsed time = 0.000 seconds 00:02:56.492 20:40:47 -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:02:56.492 00:02:56.492 00:02:56.492 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.492 http://cunit.sourceforge.net/ 00:02:56.492 00:02:56.492 00:02:56.492 Suite: gpt_parse 00:02:56.492 Test: test_parse_mbr_and_primary ...[2024-04-16 20:40:47.550593] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:56.492 [2024-04-16 20:40:47.550994] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:56.492 [2024-04-16 20:40:47.551061] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:02:56.492 [2024-04-16 20:40:47.551093] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:02:56.492 [2024-04-16 20:40:47.551118] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:02:56.492 [2024-04-16 20:40:47.551138] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:02:56.492 passed 00:02:56.492 Test: test_parse_secondary ...[2024-04-16 20:40:47.551456] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:02:56.492 [2024-04-16 20:40:47.551476] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:02:56.492 [2024-04-16 20:40:47.551498] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:02:56.492 [2024-04-16 20:40:47.551516] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:02:56.492 passed 00:02:56.492 Test: test_check_mbr ...passed 00:02:56.492 Test: test_read_header ...[2024-04-16 20:40:47.551849] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:56.492 [2024-04-16 20:40:47.551869] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:02:56.492 [2024-04-16 20:40:47.551898] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:02:56.492 [2024-04-16 20:40:47.551921] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:02:56.493 [2024-04-16 20:40:47.551942] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:02:56.493 [2024-04-16 20:40:47.551963] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:02:56.493 passed 00:02:56.493 Test: test_read_partitions ...[2024-04-16 20:40:47.551985] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:02:56.493 [2024-04-16 20:40:47.552004] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:02:56.493 [2024-04-16 20:40:47.552032] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:02:56.493 [2024-04-16 20:40:47.552054] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:02:56.493 [2024-04-16 20:40:47.552073] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:02:56.493 [2024-04-16 20:40:47.552092] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:02:56.493 [2024-04-16 20:40:47.552247] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:02:56.493 passed 00:02:56.493 00:02:56.493 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.493 suites 1 1 n/a 0 0 00:02:56.493 tests 5 5 5 0 0 00:02:56.493 asserts 33 33 33 0 n/a 00:02:56.493 00:02:56.493 Elapsed time = 0.008 seconds 00:02:56.493 20:40:47 -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:02:56.493 00:02:56.493 00:02:56.493 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.493 http://cunit.sourceforge.net/ 00:02:56.493 00:02:56.493 00:02:56.493 Suite: bdev_part 00:02:56.493 Test: part_test ...passed 00:02:56.493 Test: part_free_test ...[2024-04-16 20:40:47.564818] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:02:56.493 passed 00:02:56.493 Test: part_get_io_channel_test ...passed 00:02:56.493 Test: part_construct_ext ...passed 00:02:56.493 00:02:56.493 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.493 suites 1 1 n/a 0 0 00:02:56.493 tests 4 4 4 0 0 00:02:56.493 asserts 48 48 48 0 n/a 00:02:56.493 00:02:56.493 Elapsed time = 0.016 seconds 00:02:56.493 20:40:47 -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:02:56.493 00:02:56.493 00:02:56.493 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.493 http://cunit.sourceforge.net/ 00:02:56.493 00:02:56.493 00:02:56.493 Suite: scsi_nvme_suite 00:02:56.493 Test: scsi_nvme_translate_test ...passed 00:02:56.493 00:02:56.493 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.493 suites 1 1 n/a 0 0 00:02:56.493 tests 1 1 1 0 0 00:02:56.493 asserts 104 104 104 0 n/a 00:02:56.493 00:02:56.493 Elapsed time = 0.000 seconds 00:02:56.493 20:40:47 -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:02:56.493 00:02:56.493 00:02:56.493 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.493 http://cunit.sourceforge.net/ 00:02:56.493 00:02:56.493 00:02:56.493 Suite: lvol 00:02:56.493 Test: ut_lvs_init ...[2024-04-16 20:40:47.587675] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:02:56.493 passed 00:02:56.493 Test: ut_lvol_init ...passed 00:02:56.493 Test: ut_lvol_snapshot ...passed 00:02:56.493 Test: ut_lvol_clone ...[2024-04-16 20:40:47.588120] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:02:56.493 passed 00:02:56.493 Test: ut_lvs_destroy ...passed 00:02:56.493 Test: ut_lvs_unload ...passed 00:02:56.493 Test: ut_lvol_resize ...passed 00:02:56.493 Test: ut_lvol_set_read_only ...[2024-04-16 20:40:47.588294] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:02:56.493 passed 00:02:56.493 Test: ut_lvol_hotremove ...passed 00:02:56.493 Test: ut_vbdev_lvol_get_io_channel ...passed 00:02:56.493 Test: ut_vbdev_lvol_io_type_supported ...passed 00:02:56.493 Test: ut_lvol_read_write ...passed 00:02:56.493 Test: ut_vbdev_lvol_submit_request ...passed 00:02:56.493 Test: ut_lvol_examine_config ...passed 00:02:56.493 Test: 
ut_lvol_examine_disk ...[2024-04-16 20:40:47.588449] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:02:56.493 passed 00:02:56.493 Test: ut_lvol_rename ...[2024-04-16 20:40:47.588541] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:02:56.493 [2024-04-16 20:40:47.588564] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:02:56.493 passed 00:02:56.493 Test: ut_bdev_finish ...passed 00:02:56.493 Test: ut_lvs_rename ...passed 00:02:56.493 Test: ut_lvol_seek ...passed 00:02:56.493 Test: ut_esnap_dev_create ...[2024-04-16 20:40:47.588641] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:02:56.493 [2024-04-16 20:40:47.588664] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:02:56.493 [2024-04-16 20:40:47.588684] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:02:56.493 [2024-04-16 20:40:47.588725] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1901:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:02:56.493 passed 00:02:56.493 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-16 20:40:47.588769] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:02:56.493 [2024-04-16 20:40:47.588790] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:02:56.493 passed 00:02:56.493 00:02:56.493 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.493 suites 1 1 n/a 0 0 00:02:56.493 tests 21 21 21 0 0 00:02:56.493 asserts 712 712 712 0 n/a 00:02:56.493 00:02:56.493 Elapsed time = 0.008 seconds 00:02:56.493 20:40:47 -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:02:56.493 00:02:56.493 00:02:56.493 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.493 http://cunit.sourceforge.net/ 00:02:56.493 00:02:56.493 00:02:56.493 Suite: zone_block 00:02:56.493 Test: test_zone_block_create ...passed 00:02:56.493 Test: test_zone_block_create_invalid ...[2024-04-16 20:40:47.606899] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:02:56.493 [2024-04-16 20:40:47.607175] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-16 20:40:47.607211] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:02:56.493 [2024-04-16 20:40:47.607226] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-16 20:40:47.607242] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:02:56.493 [2024-04-16 20:40:47.607255] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-16 20:40:47.607268] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:02:56.493 passed 00:02:56.493 Test: test_get_zone_info ...[2024-04-16 20:40:47.607280] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-16 20:40:47.607382] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 [2024-04-16 20:40:47.607423] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 passed 00:02:56.493 Test: test_supported_io_types ...[2024-04-16 20:40:47.607439] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 passed 00:02:56.493 Test: test_reset_zone ...[2024-04-16 20:40:47.607523] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 [2024-04-16 20:40:47.607541] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 passed 00:02:56.493 Test: test_open_zone ...[2024-04-16 20:40:47.607587] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 [2024-04-16 20:40:47.607912] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 [2024-04-16 20:40:47.607939] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 passed 00:02:56.493 Test: test_zone_write ...[2024-04-16 20:40:47.607989] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:02:56.493 [2024-04-16 20:40:47.608003] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.493 [2024-04-16 20:40:47.608019] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:02:56.493 [2024-04-16 20:40:47.608042] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
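A note on the test_zone_block_create_invalid entries above: they reject a zero zone capacity, a zero optimal-open-zones count, and an already-claimed base bdev. As a reading aid, here is a minimal standalone C sketch of that parameter gate; the struct and function names are hypothetical, not SPDK's actual vbdev_zone_block API, and only the quoted error conditions are taken from the log.

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical mirror of the parameters the create-invalid cases exercise. */
struct zone_block_cfg {
	const char *base_bdev_name;
	uint64_t zone_capacity;      /* blocks per zone */
	uint64_t optimal_open_zones;
};

/* 0 on success; -EEXIST / -EINVAL mirror the "File exists" and
 * "Invalid argument" failures reported by rpc_zone_block_create above. */
static int zone_block_cfg_verify(const struct zone_block_cfg *cfg,
				 int base_already_claimed)
{
	if (base_already_claimed) {
		return -EEXIST;   /* "base bdev ... already claimed" */
	}
	if (cfg->zone_capacity == 0) {
		return -EINVAL;   /* "Zone capacity can't be 0" */
	}
	if (cfg->optimal_open_zones == 0) {
		return -EINVAL;   /* "Optimal open zones can't be 0" */
	}
	return 0;
}

int main(void)
{
	struct zone_block_cfg bad = { "Nvme0n1", 0, 1 };
	printf("verify -> %d\n", zone_block_cfg_verify(&bad, 0)); /* -22 */
	return 0;
}
```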
00:02:56.493 [2024-04-16 20:40:47.608725] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:02:56.493 [2024-04-16 20:40:47.608748] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.608765] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:02:56.494 [2024-04-16 20:40:47.608777] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609550] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:02:56.494 [2024-04-16 20:40:47.609573] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 passed 00:02:56.494 Test: test_zone_read ...[2024-04-16 20:40:47.609617] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:02:56.494 [2024-04-16 20:40:47.609631] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609648] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:02:56.494 [2024-04-16 20:40:47.609660] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609742] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:02:56.494 [2024-04-16 20:40:47.609767] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 passed 00:02:56.494 Test: test_close_zone ...[2024-04-16 20:40:47.609804] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609824] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609873] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.609888] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 passed 00:02:56.494 Test: test_finish_zone ...[2024-04-16 20:40:47.609982] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
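The test_zone_write and test_zone_read failures above pin down the sequential-write contract: a write is refused when the zone is in a non-writable state, when the start LBA does not sit exactly on the write pointer (lba 0x407 vs wp 0x405), or when it would run past the zone's capacity (lba 0x3f0, len 0x20). A compilable sketch of that check follows; the types are invented, since the real vbdev_zone_block structures are not visible in the log.

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL, ZONE_READ_ONLY, ZONE_OFFLINE };

struct zone_info {
	uint64_t start_lba;   /* first LBA of the zone */
	uint64_t capacity;    /* writable blocks in the zone */
	uint64_t write_ptr;   /* next LBA a write must target */
	enum zone_state state;
};

static int zone_write_check(const struct zone_info *z, uint64_t lba, uint64_t len)
{
	if (z == NULL) {
		return -EINVAL;  /* "Trying to write to invalid zone (lba 0x5000)" */
	}
	if (z->state == ZONE_FULL || z->state == ZONE_READ_ONLY ||
	    z->state == ZONE_OFFLINE) {
		return -EINVAL;  /* "Trying to write to zone in invalid state" */
	}
	if (lba != z->write_ptr) {
		return -EINVAL;  /* "invalid address (lba 0x407, wp 0x405)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return -EINVAL;  /* "Write exceeds zone capacity" */
	}
	return 0;
}

int main(void)
{
	struct zone_info z = { 0x400, 0x3e8, 0x405, ZONE_OPEN };
	printf("misaligned: %d\n", zone_write_check(&z, 0x407, 0x8));  /* -22 */
	printf("aligned:    %d\n", zone_write_check(&z, 0x405, 0x8));  /* 0 */
	return 0;
}
```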
00:02:56.494 [2024-04-16 20:40:47.610020] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 passed 00:02:56.494 Test: test_append_zone ...[2024-04-16 20:40:47.610060] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:02:56.494 [2024-04-16 20:40:47.610074] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 [2024-04-16 20:40:47.610090] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:02:56.494 [2024-04-16 20:40:47.610113] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 passed 00:02:56.494 00:02:56.494 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.494 suites 1 1 n/a 0 0 00:02:56.494 tests 11 11 11 0 0 00:02:56.494 asserts 3437 3437 3437 0 n/a 00:02:56.494 00:02:56.494 Elapsed time = 0.000 seconds 00:02:56.494 [2024-04-16 20:40:47.611579] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:02:56.494 [2024-04-16 20:40:47.611608] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:02:56.494 20:40:47 -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:02:56.753 00:02:56.753 00:02:56.753 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.753 http://cunit.sourceforge.net/ 00:02:56.753 00:02:56.753 00:02:56.753 Suite: bdev 00:02:56.753 Test: basic ...[2024-04-16 20:40:47.619852] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248619): Operation not permitted (rc=-1) 00:02:56.753 [2024-04-16 20:40:47.620009] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82ce77480 (0x248610): Operation not permitted (rc=-1) 00:02:56.753 [2024-04-16 20:40:47.620022] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248619): Operation not permitted (rc=-1) 00:02:56.753 passed 00:02:56.753 Test: unregister_and_close ...passed 00:02:56.753 Test: unregister_and_close_different_threads ...passed 00:02:56.753 Test: basic_qos ...passed 00:02:56.753 Test: put_channel_during_reset ...passed 00:02:56.753 Test: aborted_reset ...passed 00:02:56.753 Test: aborted_reset_no_outstanding_io ...passed 00:02:56.753 Test: io_during_reset ...passed 00:02:56.753 Test: reset_completions ...passed 00:02:56.753 Test: io_during_qos_queue ...passed 00:02:56.753 Test: io_during_qos_reset ...passed 00:02:56.753 Test: enomem ...passed 00:02:56.753 Test: enomem_multi_bdev ...passed 00:02:56.753 Test: enomem_multi_bdev_unregister ...passed 00:02:56.753 Test: enomem_multi_io_target ...passed 00:02:56.753 Test: qos_dynamic_enable ...passed 00:02:56.753 Test: bdev_histograms_mt ...passed 00:02:56.753 Test: bdev_set_io_timeout_mt ...passed 00:02:56.753 Test: lock_lba_range_then_submit_io ...[2024-04-16 20:40:47.645696] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x82ce77600 not unregistered 
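Every *_ut binary in this log prints the same "CUnit - A unit testing framework for C - Version 2.1-3" banner followed by per-test "passed" lines and a run summary. For readers who have not used CUnit, a minimal harness that produces output of exactly this shape looks like the following; the suite and test names here are made up, and the program links with -lcunit.

```c
#include <CUnit/Basic.h>

static void test_example(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	CU_pSuite suite = CU_add_suite("demo_suite", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* Verbose mode prints the per-test lines and the Run Summary table. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	CU_cleanup_registry();
	return CU_get_error();
}
```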
00:02:56.753 [2024-04-16 20:40:47.646518] thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x2485f8 already registered (old:0x82ce77600 new:0x82ce77780) 00:02:56.753 passed 00:02:56.753 Test: unregister_during_reset ...passed 00:02:56.753 Test: event_notify_and_close ...passed 00:02:56.753 Suite: bdev_wrong_thread 00:02:56.753 Test: spdk_bdev_register_wt ...[2024-04-16 20:40:47.649796] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8360:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x82ce40700 (0x82ce40700) 00:02:56.753 passed 00:02:56.753 Test: spdk_bdev_examine_wt ...passed[2024-04-16 20:40:47.649830] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 794:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82ce40700 (0x82ce40700) 00:02:56.753 00:02:56.753 00:02:56.753 Run Summary: Type Total Ran Passed Failed Inactive 00:02:56.753 suites 2 2 n/a 0 0 00:02:56.753 tests 23 23 23 0 0 00:02:56.753 asserts 601 601 601 0 n/a 00:02:56.753 00:02:56.753 Elapsed time = 0.031 seconds 00:02:56.753 00:02:56.753 real 0m0.985s 00:02:56.753 user 0m0.750s 00:02:56.753 sys 0m0.213s 00:02:56.753 20:40:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.753 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:02:56.753 ************************************ 00:02:56.753 END TEST unittest_bdev 00:02:56.753 ************************************ 00:02:56.753 20:40:47 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:56.753 20:40:47 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:56.753 20:40:47 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:56.754 20:40:47 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:02:56.754 20:40:47 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:02:56.754 20:40:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.754 20:40:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.754 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:02:56.754 ************************************ 00:02:56.754 START TEST unittest_blob_blobfs 00:02:56.754 ************************************ 00:02:56.754 20:40:47 -- common/autotest_common.sh@1104 -- # unittest_blob 00:02:56.754 20:40:47 -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:02:56.754 20:40:47 -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:02:56.754 00:02:56.754 00:02:56.754 CUnit - A unit testing framework for C - Version 2.1-3 00:02:56.754 http://cunit.sourceforge.net/ 00:02:56.754 00:02:56.754 00:02:56.754 Suite: blob_nocopy_noextent 00:02:56.754 Test: blob_init ...[2024-04-16 20:40:47.703748] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:56.754 passed 00:02:56.754 Test: blob_thin_provision ...passed 00:02:56.754 Test: blob_read_only ...passed 00:02:56.754 Test: bs_load ...passed 00:02:56.754 Test: bs_load_custom_cluster_size ...[2024-04-16 20:40:47.766102] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:02:56.754 
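A pattern worth noting in the blob messages, here and below: metadata page N is always paired with blobid 0x100000000 + N, and the blob_parse error just above fires when the id recorded inside a page disagrees with the id implied by the page's position. A self-contained sketch of that consistency check follows; the constant and struct layout are inferred from the log messages, not taken from blobstore.c.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOB_ID_BASE 0x100000000ULL  /* inferred: md page N <-> blobid BASE + N */

struct md_page_hdr {
	uint64_t id;  /* blob id recorded inside the metadata page */
};

/* Returns 0 when the stored id matches the id derived from the page index. */
static int blob_parse_check(const struct md_page_hdr *hdr, uint32_t page_idx)
{
	uint64_t expected = BLOB_ID_BASE + page_idx;

	if (hdr->id != expected) {
		fprintf(stderr,
			"Blobid (0x%llx) doesn't match what's in metadata (0x%llx)\n",
			(unsigned long long)hdr->id,
			(unsigned long long)expected);
		return -1;
	}
	return 0;
}

int main(void)
{
	struct md_page_hdr corrupt = { 0x0 };
	return blob_parse_check(&corrupt, 0) ? 1 : 0; /* reproduces the message above */
}
```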
passed 00:02:56.754 Test: bs_load_after_failed_grow ...passed 00:02:56.754 Test: bs_cluster_sz ...[2024-04-16 20:40:47.784790] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:56.754 [2024-04-16 20:40:47.784876] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:02:56.754 [2024-04-16 20:40:47.784885] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:56.754 passed 00:02:56.754 Test: bs_resize_md ...passed 00:02:56.754 Test: bs_destroy ...passed 00:02:56.754 Test: bs_type ...passed 00:02:56.754 Test: bs_super_block ...passed 00:02:56.754 Test: bs_test_recover_cluster_count ...passed 00:02:56.754 Test: bs_grow_live ...passed 00:02:56.754 Test: bs_grow_live_no_space ...passed 00:02:56.754 Test: bs_test_grow ...passed 00:02:56.754 Test: blob_serialize_test ...passed 00:02:56.754 Test: super_block_crc ...passed 00:02:57.013 Test: blob_thin_prov_write_count_io ...passed 00:02:57.013 Test: bs_load_iter_test ...passed 00:02:57.013 Test: blob_relations ...[2024-04-16 20:40:47.897141] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.897215] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:47.897277] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.897283] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 passed 00:02:57.013 Test: blob_relations2 ...[2024-04-16 20:40:47.907293] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.907313] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:47.907319] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.907325] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:47.907414] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.907421] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:47.907449] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:57.013 [2024-04-16 20:40:47.907455] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 passed 00:02:57.013 Test: blob_relations3 ...passed 00:02:57.013 Test: blobstore_clean_power_failure ...passed 00:02:57.013 Test: blob_delete_snapshot_power_failure ...[2024-04-16 20:40:48.037115] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:57.013 [2024-04-16 20:40:48.046459] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:57.013 [2024-04-16 20:40:48.046500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:57.013 [2024-04-16 20:40:48.046506] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:48.056067] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:57.013 [2024-04-16 20:40:48.056087] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:57.013 [2024-04-16 20:40:48.056094] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:57.013 [2024-04-16 20:40:48.056100] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:48.065616] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:57.013 [2024-04-16 20:40:48.065645] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:48.075020] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:57.013 [2024-04-16 20:40:48.075045] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 [2024-04-16 20:40:48.084472] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:57.013 [2024-04-16 20:40:48.084498] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:57.013 passed 00:02:57.013 Test: blob_create_snapshot_power_failure ...[2024-04-16 20:40:48.112338] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:57.013 [2024-04-16 20:40:48.130915] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:02:57.272 [2024-04-16 20:40:48.140420] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:57.272 passed 00:02:57.272 Test: blob_io_unit ...passed 00:02:57.272 Test: blob_io_unit_compatibility ...passed 00:02:57.272 Test: blob_ext_md_pages ...passed 00:02:57.273 Test: blob_esnap_io_4096_4096 ...passed 00:02:57.273 Test: blob_esnap_io_512_512 ...passed 00:02:57.273 Test: blob_esnap_io_4096_512 ...passed 00:02:57.273 Test: blob_esnap_io_512_4096 ...passed 00:02:57.273 Suite: blob_bs_nocopy_noextent 00:02:57.273 Test: blob_open ...passed 00:02:57.273 Test: blob_create ...[2024-04-16 20:40:48.317480] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:57.273 
passed 00:02:57.273 Test: blob_create_loop ...passed 00:02:57.273 Test: blob_create_fail ...[2024-04-16 20:40:48.384001] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:57.273 passed 00:02:57.531 Test: blob_create_internal ...passed 00:02:57.531 Test: blob_create_zero_extent ...passed 00:02:57.531 Test: blob_snapshot ...passed 00:02:57.531 Test: blob_clone ...passed 00:02:57.531 Test: blob_inflate ...[2024-04-16 20:40:48.529020] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:57.531 passed 00:02:57.531 Test: blob_delete ...passed 00:02:57.531 Test: blob_resize_test ...[2024-04-16 20:40:48.584209] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:57.531 passed 00:02:57.531 Test: channel_ops ...passed 00:02:57.531 Test: blob_super ...passed 00:02:57.790 Test: blob_rw_verify_iov ...passed 00:02:57.790 Test: blob_unmap ...passed 00:02:57.790 Test: blob_iter ...passed 00:02:57.790 Test: blob_parse_md ...passed 00:02:57.790 Test: bs_load_pending_removal ...passed 00:02:57.790 Test: bs_unload ...[2024-04-16 20:40:48.805935] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:02:57.790 passed 00:02:57.790 Test: bs_usable_clusters ...passed 00:02:57.790 Test: blob_crc ...[2024-04-16 20:40:48.861301] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:57.790 [2024-04-16 20:40:48.861346] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:02:57.790 passed 00:02:57.790 Test: blob_flags ...passed 00:02:58.049 Test: bs_version ...passed 00:02:58.049 Test: blob_set_xattrs_test ...[2024-04-16 20:40:48.945293] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:58.049 [2024-04-16 20:40:48.945348] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:58.049 passed 00:02:58.049 Test: blob_thin_prov_alloc ...passed 00:02:58.049 Test: blob_insert_cluster_msg_test ...passed 00:02:58.049 Test: blob_thin_prov_rw ...passed 00:02:58.049 Test: blob_thin_prov_rle ...passed 00:02:58.049 Test: blob_thin_prov_rw_iov ...passed 00:02:58.049 Test: blob_snapshot_rw ...passed 00:02:58.049 Test: blob_snapshot_rw_iov ...passed 00:02:58.308 Test: blob_inflate_rw ...passed 00:02:58.308 Test: blob_snapshot_freeze_io ...passed 00:02:58.308 Test: blob_operation_split_rw ...passed 00:02:58.308 Test: blob_operation_split_rw_iov ...passed 00:02:58.308 Test: blob_simultaneous_operations ...[2024-04-16 20:40:49.370118] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:58.308 [2024-04-16 20:40:49.370190] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:58.308 [2024-04-16 20:40:49.370430] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:58.308 
[2024-04-16 20:40:49.370444] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:58.308 [2024-04-16 20:40:49.373535] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:58.308 [2024-04-16 20:40:49.373560] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:58.308 [2024-04-16 20:40:49.373574] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:02:58.308 [2024-04-16 20:40:49.373580] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:58.308 passed 00:02:58.308 Test: blob_persist_test ...passed 00:02:58.568 Test: blob_decouple_snapshot ...passed 00:02:58.568 Test: blob_seek_io_unit ...passed 00:02:58.568 Test: blob_nested_freezes ...passed 00:02:58.568 Suite: blob_blob_nocopy_noextent 00:02:58.568 Test: blob_write ...passed 00:02:58.568 Test: blob_read ...passed 00:02:58.568 Test: blob_rw_verify ...passed 00:02:58.568 Test: blob_rw_verify_iov_nomem ...passed 00:02:58.568 Test: blob_rw_iov_read_only ...passed 00:02:58.568 Test: blob_xattr ...passed 00:02:58.827 Test: blob_dirty_shutdown ...passed 00:02:58.827 Test: blob_is_degraded ...passed 00:02:58.827 Suite: blob_esnap_bs_nocopy_noextent 00:02:58.827 Test: blob_esnap_create ...passed 00:02:58.827 Test: blob_esnap_thread_add_remove ...passed 00:02:58.827 Test: blob_esnap_clone_snapshot ...passed 00:02:58.827 Test: blob_esnap_clone_inflate ...passed 00:02:58.827 Test: blob_esnap_clone_decouple ...passed 00:02:58.827 Test: blob_esnap_clone_reload ...passed 00:02:58.827 Test: blob_esnap_hotplug ...passed 00:02:58.827 Suite: blob_nocopy_extent 00:02:58.827 Test: blob_init ...[2024-04-16 20:40:49.932400] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:02:58.827 passed 00:02:58.827 Test: blob_thin_provision ...passed 00:02:59.086 Test: blob_read_only ...passed 00:02:59.086 Test: bs_load ...passed 00:02:59.086 Test: bs_load_custom_cluster_size ...[2024-04-16 20:40:49.969762] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:02:59.086 passed 00:02:59.086 Test: bs_load_after_failed_grow ...passed 00:02:59.086 Test: bs_cluster_sz ...[2024-04-16 20:40:49.988688] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:02:59.086 [2024-04-16 20:40:49.988737] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
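The bs_cluster_sz cases, which repeat once per blob suite in this run, pin down three init-time rules: options may not be zero, the cluster size must be at least the 4096-byte blobstore page, and the pages reserved for metadata must fit in the clusters the device actually has. A hedged standalone sketch of those checks; the field names and the exact fitting rule are assumptions, and only the three error messages are taken from the log.

```c
#include <errno.h>
#include <stdint.h>

#define BS_PAGE_SIZE 4096u  /* page size implied by "4095 is smaller than 4096" */

struct bs_opts_sketch {
	uint32_t cluster_sz;    /* bytes per cluster */
	uint32_t num_md_pages;  /* pages reserved for metadata */
};

static int bs_opts_verify_sketch(const struct bs_opts_sketch *o, uint64_t dev_size)
{
	if (o->cluster_sz == 0 || o->num_md_pages == 0) {
		return -EINVAL;  /* "Blobstore options cannot be set to 0" */
	}
	if (o->cluster_sz < BS_PAGE_SIZE) {
		return -EINVAL;  /* "Cluster size 4095 is smaller than page size 4096" */
	}
	/* "Blobstore metadata cannot use more clusters than is available" */
	uint64_t total_clusters = dev_size / o->cluster_sz;
	uint64_t md_clusters = ((uint64_t)o->num_md_pages * BS_PAGE_SIZE +
				o->cluster_sz - 1) / o->cluster_sz;
	if (md_clusters > total_clusters) {
		return -ENOSPC;
	}
	return 0;
}

int main(void)
{
	struct bs_opts_sketch o = { 4095, 64 };
	return bs_opts_verify_sketch(&o, 64 * 1024 * 1024) == -EINVAL ? 0 : 1;
}
```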
00:02:59.086 [2024-04-16 20:40:49.988746] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:02:59.086 passed 00:02:59.086 Test: bs_resize_md ...passed 00:02:59.086 Test: bs_destroy ...passed 00:02:59.086 Test: bs_type ...passed 00:02:59.086 Test: bs_super_block ...passed 00:02:59.086 Test: bs_test_recover_cluster_count ...passed 00:02:59.086 Test: bs_grow_live ...passed 00:02:59.086 Test: bs_grow_live_no_space ...passed 00:02:59.086 Test: bs_test_grow ...passed 00:02:59.086 Test: blob_serialize_test ...passed 00:02:59.086 Test: super_block_crc ...passed 00:02:59.086 Test: blob_thin_prov_write_count_io ...passed 00:02:59.086 Test: bs_load_iter_test ...passed 00:02:59.086 Test: blob_relations ...[2024-04-16 20:40:50.101022] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.101093] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 [2024-04-16 20:40:50.101159] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.101165] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 passed 00:02:59.087 Test: blob_relations2 ...[2024-04-16 20:40:50.111368] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.111405] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 [2024-04-16 20:40:50.111411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.111416] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 [2024-04-16 20:40:50.111532] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.111539] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 [2024-04-16 20:40:50.111571] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:02:59.087 [2024-04-16 20:40:50.111577] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.087 passed 00:02:59.087 Test: blob_relations3 ...passed 00:02:59.346 Test: blobstore_clean_power_failure ...passed 00:02:59.346 Test: blob_delete_snapshot_power_failure ...[2024-04-16 20:40:50.241948] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:59.346 [2024-04-16 20:40:50.251356] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:59.346 [2024-04-16 20:40:50.260809] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:59.346 [2024-04-16 
20:40:50.260851] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:59.346 [2024-04-16 20:40:50.260858] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 [2024-04-16 20:40:50.270238] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:59.346 [2024-04-16 20:40:50.270280] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:59.346 [2024-04-16 20:40:50.270286] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:59.346 [2024-04-16 20:40:50.270291] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 [2024-04-16 20:40:50.279734] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:59.346 [2024-04-16 20:40:50.279757] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:02:59.346 [2024-04-16 20:40:50.279763] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:02:59.346 [2024-04-16 20:40:50.279769] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 [2024-04-16 20:40:50.289398] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:02:59.346 [2024-04-16 20:40:50.289432] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 [2024-04-16 20:40:50.299135] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:02:59.346 [2024-04-16 20:40:50.299166] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 [2024-04-16 20:40:50.308772] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:02:59.346 [2024-04-16 20:40:50.308801] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:02:59.346 passed 00:02:59.346 Test: blob_create_snapshot_power_failure ...[2024-04-16 20:40:50.336753] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:02:59.346 [2024-04-16 20:40:50.346181] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:02:59.346 [2024-04-16 20:40:50.364833] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:02:59.346 [2024-04-16 20:40:50.374219] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:02:59.346 passed 00:02:59.346 Test: blob_io_unit ...passed 00:02:59.346 Test: blob_io_unit_compatibility ...passed 00:02:59.346 Test: blob_ext_md_pages ...passed 00:02:59.346 Test: blob_esnap_io_4096_4096 ...passed 00:02:59.346 Test: 
blob_esnap_io_512_512 ...passed 00:02:59.605 Test: blob_esnap_io_4096_512 ...passed 00:02:59.605 Test: blob_esnap_io_512_4096 ...passed 00:02:59.605 Suite: blob_bs_nocopy_extent 00:02:59.605 Test: blob_open ...passed 00:02:59.605 Test: blob_create ...[2024-04-16 20:40:50.552446] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:02:59.605 passed 00:02:59.605 Test: blob_create_loop ...passed 00:02:59.605 Test: blob_create_fail ...[2024-04-16 20:40:50.620545] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:02:59.605 passed 00:02:59.605 Test: blob_create_internal ...passed 00:02:59.605 Test: blob_create_zero_extent ...passed 00:02:59.605 Test: blob_snapshot ...passed 00:02:59.864 Test: blob_clone ...passed 00:02:59.864 Test: blob_inflate ...[2024-04-16 20:40:50.765157] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:02:59.864 passed 00:02:59.864 Test: blob_delete ...passed 00:02:59.864 Test: blob_resize_test ...[2024-04-16 20:40:50.821108] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:02:59.864 passed 00:02:59.864 Test: channel_ops ...passed 00:02:59.864 Test: blob_super ...passed 00:02:59.864 Test: blob_rw_verify_iov ...passed 00:02:59.864 Test: blob_unmap ...passed 00:02:59.864 Test: blob_iter ...passed 00:03:00.123 Test: blob_parse_md ...passed 00:03:00.123 Test: bs_load_pending_removal ...passed 00:03:00.123 Test: bs_unload ...[2024-04-16 20:40:51.042653] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:00.123 passed 00:03:00.123 Test: bs_usable_clusters ...passed 00:03:00.123 Test: blob_crc ...[2024-04-16 20:40:51.098312] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:00.123 [2024-04-16 20:40:51.098366] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:00.123 passed 00:03:00.123 Test: blob_flags ...passed 00:03:00.123 Test: bs_version ...passed 00:03:00.123 Test: blob_set_xattrs_test ...[2024-04-16 20:40:51.182241] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:00.123 [2024-04-16 20:40:51.182290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:00.123 passed 00:03:00.123 Test: blob_thin_prov_alloc ...passed 00:03:00.382 Test: blob_insert_cluster_msg_test ...passed 00:03:00.382 Test: blob_thin_prov_rw ...passed 00:03:00.382 Test: blob_thin_prov_rle ...passed 00:03:00.382 Test: blob_thin_prov_rw_iov ...passed 00:03:00.382 Test: blob_snapshot_rw ...passed 00:03:00.382 Test: blob_snapshot_rw_iov ...passed 00:03:00.382 Test: blob_inflate_rw ...passed 00:03:00.382 Test: blob_snapshot_freeze_io ...passed 00:03:00.642 Test: blob_operation_split_rw ...passed 00:03:00.642 Test: blob_operation_split_rw_iov ...passed 00:03:00.642 Test: blob_simultaneous_operations ...[2024-04-16 20:40:51.602445] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:00.642 [2024-04-16 20:40:51.602505] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:00.642 [2024-04-16 20:40:51.602757] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:00.642 [2024-04-16 20:40:51.602771] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:00.642 [2024-04-16 20:40:51.605829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:00.642 [2024-04-16 20:40:51.605852] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:00.642 [2024-04-16 20:40:51.605885] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:00.642 [2024-04-16 20:40:51.605891] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:00.642 passed 00:03:00.642 Test: blob_persist_test ...passed 00:03:00.642 Test: blob_decouple_snapshot ...passed 00:03:00.642 Test: blob_seek_io_unit ...passed 00:03:00.642 Test: blob_nested_freezes ...passed 00:03:00.642 Suite: blob_blob_nocopy_extent 00:03:00.901 Test: blob_write ...passed 00:03:00.901 Test: blob_read ...passed 00:03:00.901 Test: blob_rw_verify ...passed 00:03:00.901 Test: blob_rw_verify_iov_nomem ...passed 00:03:00.901 Test: blob_rw_iov_read_only ...passed 00:03:00.901 Test: blob_xattr ...passed 00:03:00.901 Test: blob_dirty_shutdown ...passed 00:03:00.901 Test: blob_is_degraded ...passed 00:03:00.901 Suite: blob_esnap_bs_nocopy_extent 00:03:00.901 Test: blob_esnap_create ...passed 00:03:01.161 Test: blob_esnap_thread_add_remove ...passed 00:03:01.161 Test: blob_esnap_clone_snapshot ...passed 00:03:01.161 Test: blob_esnap_clone_inflate ...passed 00:03:01.161 Test: blob_esnap_clone_decouple ...passed 00:03:01.161 Test: blob_esnap_clone_reload ...passed 00:03:01.161 Test: blob_esnap_hotplug ...passed 00:03:01.161 Suite: blob_copy_noextent 00:03:01.161 Test: blob_init ...[2024-04-16 20:40:52.164579] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:01.161 passed 00:03:01.161 Test: blob_thin_provision ...passed 00:03:01.161 Test: blob_read_only ...passed 00:03:01.161 Test: bs_load ...passed 00:03:01.161 Test: bs_load_custom_cluster_size ...[2024-04-16 20:40:52.201607] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:01.161 passed 00:03:01.161 Test: bs_load_after_failed_grow ...passed 00:03:01.161 Test: bs_cluster_sz ...[2024-04-16 20:40:52.220436] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:01.161 [2024-04-16 20:40:52.220481] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
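The blob_crc cases above ("Metadata page 0 crc mismatch for blobid 0x100000000") show that every metadata page carries a checksum that is re-verified on load. The log does not reveal which CRC variant or field layout blobstore.c uses, so the following is only a shape-of-the-check sketch using the common reflected CRC-32 polynomial and an invented page layout.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise reflected CRC-32 (polynomial 0xEDB88320); an illustrative choice. */
static uint32_t crc32_sketch(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
		}
	}
	return ~crc;
}

/* Hypothetical page layout: payload plus a trailing stored checksum. */
struct md_page_sketch {
	uint8_t  payload[4092];
	uint32_t crc;
};

static int md_page_crc_check(const struct md_page_sketch *page)
{
	if (crc32_sketch(page->payload, sizeof(page->payload)) != page->crc) {
		fprintf(stderr, "Metadata page crc mismatch\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	struct md_page_sketch page;

	memset(page.payload, 0xab, sizeof(page.payload));
	page.crc = crc32_sketch(page.payload, sizeof(page.payload));
	printf("intact:  %d\n", md_page_crc_check(&page));  /* 0 */
	page.payload[0] ^= 1;                               /* flip one bit */
	printf("corrupt: %d\n", md_page_crc_check(&page));  /* -1 */
	return 0;
}
```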
00:03:01.161 [2024-04-16 20:40:52.220491] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:01.161 passed 00:03:01.161 Test: bs_resize_md ...passed 00:03:01.161 Test: bs_destroy ...passed 00:03:01.161 Test: bs_type ...passed 00:03:01.161 Test: bs_super_block ...passed 00:03:01.161 Test: bs_test_recover_cluster_count ...passed 00:03:01.161 Test: bs_grow_live ...passed 00:03:01.161 Test: bs_grow_live_no_space ...passed 00:03:01.420 Test: bs_test_grow ...passed 00:03:01.420 Test: blob_serialize_test ...passed 00:03:01.420 Test: super_block_crc ...passed 00:03:01.420 Test: blob_thin_prov_write_count_io ...passed 00:03:01.420 Test: bs_load_iter_test ...passed 00:03:01.420 Test: blob_relations ...[2024-04-16 20:40:52.333559] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.333618] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.333671] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.333677] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 passed 00:03:01.420 Test: blob_relations2 ...[2024-04-16 20:40:52.344210] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.344251] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.344274] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.344280] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.344364] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.344371] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.344402] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:01.420 [2024-04-16 20:40:52.344408] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 passed 00:03:01.420 Test: blob_relations3 ...passed 00:03:01.420 Test: blobstore_clean_power_failure ...passed 00:03:01.420 Test: blob_delete_snapshot_power_failure ...[2024-04-16 20:40:52.475089] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:01.420 [2024-04-16 20:40:52.484549] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:01.420 [2024-04-16 20:40:52.484595] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:01.420 [2024-04-16 
20:40:52.484601] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.493974] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:01.420 [2024-04-16 20:40:52.494045] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:01.420 [2024-04-16 20:40:52.494051] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:01.420 [2024-04-16 20:40:52.494057] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.503637] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:01.420 [2024-04-16 20:40:52.503659] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.513184] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:01.420 [2024-04-16 20:40:52.513213] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 [2024-04-16 20:40:52.522588] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:01.420 [2024-04-16 20:40:52.522633] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:01.420 passed 00:03:01.680 Test: blob_create_snapshot_power_failure ...[2024-04-16 20:40:52.550708] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:01.680 [2024-04-16 20:40:52.569494] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:01.680 [2024-04-16 20:40:52.578901] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:01.680 passed 00:03:01.680 Test: blob_io_unit ...passed 00:03:01.680 Test: blob_io_unit_compatibility ...passed 00:03:01.680 Test: blob_ext_md_pages ...passed 00:03:01.680 Test: blob_esnap_io_4096_4096 ...passed 00:03:01.680 Test: blob_esnap_io_512_512 ...passed 00:03:01.680 Test: blob_esnap_io_4096_512 ...passed 00:03:01.680 Test: blob_esnap_io_512_4096 ...passed 00:03:01.680 Suite: blob_bs_copy_noextent 00:03:01.680 Test: blob_open ...passed 00:03:01.680 Test: blob_create ...[2024-04-16 20:40:52.757281] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:01.680 passed 00:03:01.939 Test: blob_create_loop ...passed 00:03:01.939 Test: blob_create_fail ...[2024-04-16 20:40:52.824415] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:01.939 passed 00:03:01.939 Test: blob_create_internal ...passed 00:03:01.939 Test: blob_create_zero_extent ...passed 00:03:01.939 Test: blob_snapshot ...passed 00:03:01.939 Test: blob_clone ...passed 00:03:01.939 
Test: blob_inflate ...[2024-04-16 20:40:52.968501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:01.939 passed 00:03:01.939 Test: blob_delete ...passed 00:03:01.939 Test: blob_resize_test ...[2024-04-16 20:40:53.023890] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:01.939 passed 00:03:01.939 Test: channel_ops ...passed 00:03:02.204 Test: blob_super ...passed 00:03:02.204 Test: blob_rw_verify_iov ...passed 00:03:02.204 Test: blob_unmap ...passed 00:03:02.204 Test: blob_iter ...passed 00:03:02.204 Test: blob_parse_md ...passed 00:03:02.204 Test: bs_load_pending_removal ...passed 00:03:02.204 Test: bs_unload ...[2024-04-16 20:40:53.246635] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:02.204 passed 00:03:02.204 Test: bs_usable_clusters ...passed 00:03:02.204 Test: blob_crc ...[2024-04-16 20:40:53.304013] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:02.204 [2024-04-16 20:40:53.304074] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:02.204 passed 00:03:02.473 Test: blob_flags ...passed 00:03:02.473 Test: bs_version ...passed 00:03:02.473 Test: blob_set_xattrs_test ...[2024-04-16 20:40:53.389629] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:02.473 [2024-04-16 20:40:53.389693] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:02.473 passed 00:03:02.473 Test: blob_thin_prov_alloc ...passed 00:03:02.473 Test: blob_insert_cluster_msg_test ...passed 00:03:02.473 Test: blob_thin_prov_rw ...passed 00:03:02.473 Test: blob_thin_prov_rle ...passed 00:03:02.473 Test: blob_thin_prov_rw_iov ...passed 00:03:02.473 Test: blob_snapshot_rw ...passed 00:03:02.731 Test: blob_snapshot_rw_iov ...passed 00:03:02.731 Test: blob_inflate_rw ...passed 00:03:02.732 Test: blob_snapshot_freeze_io ...passed 00:03:02.732 Test: blob_operation_split_rw ...passed 00:03:02.732 Test: blob_operation_split_rw_iov ...passed 00:03:02.732 Test: blob_simultaneous_operations ...[2024-04-16 20:40:53.819218] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:02.732 [2024-04-16 20:40:53.819275] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:02.732 [2024-04-16 20:40:53.819508] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:02.732 [2024-04-16 20:40:53.819520] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:02.732 [2024-04-16 20:40:53.821494] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:02.732 [2024-04-16 20:40:53.821516] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:02.732 [2024-04-16 
20:40:53.821533] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:02.732 [2024-04-16 20:40:53.821539] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:02.732 passed 00:03:02.991 Test: blob_persist_test ...passed 00:03:02.991 Test: blob_decouple_snapshot ...passed 00:03:02.991 Test: blob_seek_io_unit ...passed 00:03:02.991 Test: blob_nested_freezes ...passed 00:03:02.991 Suite: blob_blob_copy_noextent 00:03:02.991 Test: blob_write ...passed 00:03:02.991 Test: blob_read ...passed 00:03:02.991 Test: blob_rw_verify ...passed 00:03:02.991 Test: blob_rw_verify_iov_nomem ...passed 00:03:02.991 Test: blob_rw_iov_read_only ...passed 00:03:03.250 Test: blob_xattr ...passed 00:03:03.250 Test: blob_dirty_shutdown ...passed 00:03:03.250 Test: blob_is_degraded ...passed 00:03:03.250 Suite: blob_esnap_bs_copy_noextent 00:03:03.250 Test: blob_esnap_create ...passed 00:03:03.250 Test: blob_esnap_thread_add_remove ...passed 00:03:03.250 Test: blob_esnap_clone_snapshot ...passed 00:03:03.250 Test: blob_esnap_clone_inflate ...passed 00:03:03.250 Test: blob_esnap_clone_decouple ...passed 00:03:03.250 Test: blob_esnap_clone_reload ...passed 00:03:03.510 Test: blob_esnap_hotplug ...passed 00:03:03.510 Suite: blob_copy_extent 00:03:03.510 Test: blob_init ...[2024-04-16 20:40:54.380809] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:03.510 passed 00:03:03.510 Test: blob_thin_provision ...passed 00:03:03.510 Test: blob_read_only ...passed 00:03:03.510 Test: bs_load ...passed 00:03:03.510 Test: bs_load_custom_cluster_size ...[2024-04-16 20:40:54.417965] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:03.510 passed 00:03:03.510 Test: bs_load_after_failed_grow ...passed 00:03:03.510 Test: bs_cluster_sz ...[2024-04-16 20:40:54.436730] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:03.510 [2024-04-16 20:40:54.436775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
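One more recurring pair of messages deserves a gloss: "Cannot remove snapshot with more than one clone" and "Cannot remove snapshot because it is open" appear in every blob suite's relations and simultaneous-operations tests. The wording itself implies the rule (a snapshot with at most one clone, and no open handles, may be deleted), and the sketch below encodes just that gate; the field names are invented rather than read out of blobstore.c.

```c
#include <errno.h>
#include <stddef.h>

struct blob_sketch {
	int    is_snapshot;
	size_t clone_count;  /* live clones still referencing this snapshot */
	int    open_refs;    /* open descriptors on the blob itself */
};

/* 0 when deletable; -EBUSY mirrors the two refusals quoted above. */
static int bs_is_blob_deletable_sketch(const struct blob_sketch *b)
{
	if (b->is_snapshot && b->clone_count > 1) {
		return -EBUSY;  /* "more than one clone" */
	}
	if (b->is_snapshot && b->open_refs > 0) {
		return -EBUSY;  /* "because it is open" */
	}
	return 0;
}

int main(void)
{
	struct blob_sketch snap = { 1, 2, 0 };
	return bs_is_blob_deletable_sketch(&snap) == -EBUSY ? 0 : 1;
}
```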
00:03:03.510 [2024-04-16 20:40:54.436785] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:03.510 passed 00:03:03.510 Test: bs_resize_md ...passed 00:03:03.510 Test: bs_destroy ...passed 00:03:03.510 Test: bs_type ...passed 00:03:03.510 Test: bs_super_block ...passed 00:03:03.510 Test: bs_test_recover_cluster_count ...passed 00:03:03.510 Test: bs_grow_live ...passed 00:03:03.510 Test: bs_grow_live_no_space ...passed 00:03:03.510 Test: bs_test_grow ...passed 00:03:03.510 Test: blob_serialize_test ...passed 00:03:03.510 Test: super_block_crc ...passed 00:03:03.510 Test: blob_thin_prov_write_count_io ...passed 00:03:03.510 Test: bs_load_iter_test ...passed 00:03:03.510 Test: blob_relations ...[2024-04-16 20:40:54.550030] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.550088] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 [2024-04-16 20:40:54.550148] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.550154] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 passed 00:03:03.510 Test: blob_relations2 ...[2024-04-16 20:40:54.560666] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.560707] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 [2024-04-16 20:40:54.560729] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.560734] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 [2024-04-16 20:40:54.560833] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.560841] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 [2024-04-16 20:40:54.560872] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:03.510 [2024-04-16 20:40:54.560878] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.510 passed 00:03:03.510 Test: blob_relations3 ...passed 00:03:03.769 Test: blobstore_clean_power_failure ...passed 00:03:03.769 Test: blob_delete_snapshot_power_failure ...[2024-04-16 20:40:54.692084] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:03.769 [2024-04-16 20:40:54.701569] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:03.769 [2024-04-16 20:40:54.711068] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:03.769 [2024-04-16 
20:40:54.711115] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:03.769 [2024-04-16 20:40:54.711139] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 [2024-04-16 20:40:54.720712] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:03.769 [2024-04-16 20:40:54.720750] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:03.769 [2024-04-16 20:40:54.720764] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:03.769 [2024-04-16 20:40:54.720769] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 [2024-04-16 20:40:54.730301] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:03.769 [2024-04-16 20:40:54.730323] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:03.769 [2024-04-16 20:40:54.730329] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:03.769 [2024-04-16 20:40:54.730336] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 [2024-04-16 20:40:54.739904] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:03.769 [2024-04-16 20:40:54.739925] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 [2024-04-16 20:40:54.749487] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:03.769 [2024-04-16 20:40:54.749518] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 [2024-04-16 20:40:54.759133] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:03.769 [2024-04-16 20:40:54.759212] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:03.769 passed 00:03:03.769 Test: blob_create_snapshot_power_failure ...[2024-04-16 20:40:54.787333] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:03.769 [2024-04-16 20:40:54.796726] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:03.769 [2024-04-16 20:40:54.815400] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:03.769 [2024-04-16 20:40:54.824825] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:03.769 passed 00:03:03.769 Test: blob_io_unit ...passed 00:03:03.769 Test: blob_io_unit_compatibility ...passed 00:03:03.769 Test: blob_ext_md_pages ...passed 00:03:04.029 Test: blob_esnap_io_4096_4096 ...passed 00:03:04.029 Test: 
blob_esnap_io_512_512 ...passed 00:03:04.029 Test: blob_esnap_io_4096_512 ...passed 00:03:04.029 Test: blob_esnap_io_512_4096 ...passed 00:03:04.029 Suite: blob_bs_copy_extent 00:03:04.029 Test: blob_open ...passed 00:03:04.029 Test: blob_create ...[2024-04-16 20:40:55.003589] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:04.029 passed 00:03:04.029 Test: blob_create_loop ...passed 00:03:04.029 Test: blob_create_fail ...[2024-04-16 20:40:55.071195] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:04.029 passed 00:03:04.029 Test: blob_create_internal ...passed 00:03:04.029 Test: blob_create_zero_extent ...passed 00:03:04.289 Test: blob_snapshot ...passed 00:03:04.289 Test: blob_clone ...passed 00:03:04.289 Test: blob_inflate ...[2024-04-16 20:40:55.215689] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:04.289 passed 00:03:04.289 Test: blob_delete ...passed 00:03:04.289 Test: blob_resize_test ...[2024-04-16 20:40:55.272077] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:04.289 passed 00:03:04.289 Test: channel_ops ...passed 00:03:04.289 Test: blob_super ...passed 00:03:04.289 Test: blob_rw_verify_iov ...passed 00:03:04.289 Test: blob_unmap ...passed 00:03:04.548 Test: blob_iter ...passed 00:03:04.548 Test: blob_parse_md ...passed 00:03:04.548 Test: bs_load_pending_removal ...passed 00:03:04.548 Test: bs_unload ...[2024-04-16 20:40:55.495964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:04.548 passed 00:03:04.548 Test: bs_usable_clusters ...passed 00:03:04.548 Test: blob_crc ...[2024-04-16 20:40:55.551743] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:04.548 [2024-04-16 20:40:55.551796] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:04.548 passed 00:03:04.548 Test: blob_flags ...passed 00:03:04.548 Test: bs_version ...passed 00:03:04.548 Test: blob_set_xattrs_test ...[2024-04-16 20:40:55.635170] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:04.548 [2024-04-16 20:40:55.635217] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:04.548 passed 00:03:04.808 Test: blob_thin_prov_alloc ...passed 00:03:04.808 Test: blob_insert_cluster_msg_test ...passed 00:03:04.808 Test: blob_thin_prov_rw ...passed 00:03:04.808 Test: blob_thin_prov_rle ...passed 00:03:04.808 Test: blob_thin_prov_rw_iov ...passed 00:03:04.808 Test: blob_snapshot_rw ...passed 00:03:04.808 Test: blob_snapshot_rw_iov ...passed 00:03:04.808 Test: blob_inflate_rw ...passed 00:03:05.067 Test: blob_snapshot_freeze_io ...passed 00:03:05.067 Test: blob_operation_split_rw ...passed 00:03:05.067 Test: blob_operation_split_rw_iov ...passed 00:03:05.067 Test: blob_simultaneous_operations ...[2024-04-16 20:40:56.059633] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:05.067 [2024-04-16 20:40:56.059714] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:05.067 [2024-04-16 20:40:56.059972] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:05.067 [2024-04-16 20:40:56.059985] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:05.067 [2024-04-16 20:40:56.062093] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:05.067 [2024-04-16 20:40:56.062115] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:05.067 [2024-04-16 20:40:56.062131] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:05.067 [2024-04-16 20:40:56.062138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:05.067 passed 00:03:05.067 Test: blob_persist_test ...passed 00:03:05.067 Test: blob_decouple_snapshot ...passed 00:03:05.067 Test: blob_seek_io_unit ...passed 00:03:05.326 Test: blob_nested_freezes ...passed 00:03:05.326 Suite: blob_blob_copy_extent 00:03:05.326 Test: blob_write ...passed 00:03:05.326 Test: blob_read ...passed 00:03:05.326 Test: blob_rw_verify ...passed 00:03:05.326 Test: blob_rw_verify_iov_nomem ...passed 00:03:05.326 Test: blob_rw_iov_read_only ...passed 00:03:05.326 Test: blob_xattr ...passed 00:03:05.326 Test: blob_dirty_shutdown ...passed 00:03:05.326 Test: blob_is_degraded ...passed 00:03:05.326 Suite: blob_esnap_bs_copy_extent 00:03:05.585 Test: blob_esnap_create ...passed 00:03:05.585 Test: blob_esnap_thread_add_remove ...passed 00:03:05.585 Test: blob_esnap_clone_snapshot ...passed 00:03:05.585 Test: blob_esnap_clone_inflate ...passed 00:03:05.585 Test: blob_esnap_clone_decouple ...passed 00:03:05.585 Test: blob_esnap_clone_reload ...passed 00:03:05.585 Test: blob_esnap_hotplug ...passed 00:03:05.585 00:03:05.585 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.585 suites 16 16 n/a 0 0 00:03:05.585 tests 348 348 348 0 0 00:03:05.585 asserts 92605 92605 92605 0 n/a 00:03:05.585 00:03:05.585 Elapsed time = 8.914 seconds 00:03:05.585 20:40:56 -- unit/unittest.sh@41 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:05.585 00:03:05.585 00:03:05.585 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.585 http://cunit.sourceforge.net/ 00:03:05.585 00:03:05.585 00:03:05.585 Suite: blob_bdev 00:03:05.585 Test: create_bs_dev ...passed 00:03:05.585 Test: create_bs_dev_ro ...[2024-04-16 20:40:56.634541] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:05.585 passed 00:03:05.585 Test: create_bs_dev_rw ...passed 00:03:05.585 Test: claim_bs_dev ...[2024-04-16 20:40:56.635101] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:05.585 passed 00:03:05.585 Test: claim_bs_dev_ro ...passed 00:03:05.585 Test: deferred_destroy_refs ...passed 00:03:05.585 Test: deferred_destroy_channels ...passed 
00:03:05.585 Test: deferred_destroy_threads ...passed 00:03:05.585 00:03:05.585 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.585 suites 1 1 n/a 0 0 00:03:05.585 tests 8 8 8 0 0 00:03:05.585 asserts 119 119 119 0 n/a 00:03:05.585 00:03:05.585 Elapsed time = 0.000 seconds 00:03:05.585 20:40:56 -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:05.585 00:03:05.585 00:03:05.585 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.585 http://cunit.sourceforge.net/ 00:03:05.585 00:03:05.585 00:03:05.585 Suite: tree 00:03:05.585 Test: blobfs_tree_op_test ...passed 00:03:05.585 00:03:05.585 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.585 suites 1 1 n/a 0 0 00:03:05.585 tests 1 1 1 0 0 00:03:05.585 asserts 27 27 27 0 n/a 00:03:05.585 00:03:05.585 Elapsed time = 0.000 seconds 00:03:05.585 20:40:56 -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:05.585 00:03:05.585 00:03:05.585 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.585 http://cunit.sourceforge.net/ 00:03:05.585 00:03:05.585 00:03:05.585 Suite: blobfs_async_ut 00:03:05.585 Test: fs_init ...passed 00:03:05.585 Test: fs_open ...passed 00:03:05.845 Test: fs_create ...passed 00:03:05.845 Test: fs_truncate ...passed 00:03:05.845 Test: fs_rename ...[2024-04-16 20:40:56.740595] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:05.845 passed 00:03:05.845 Test: fs_rw_async ...passed 00:03:05.845 Test: fs_writev_readv_async ...passed 00:03:05.845 Test: tree_find_buffer_ut ...passed 00:03:05.845 Test: channel_ops ...passed 00:03:05.845 Test: channel_ops_sync ...passed 00:03:05.845 00:03:05.845 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.845 suites 1 1 n/a 0 0 00:03:05.845 tests 10 10 10 0 0 00:03:05.845 asserts 292 292 292 0 n/a 00:03:05.845 00:03:05.845 Elapsed time = 0.117 seconds 00:03:05.845 20:40:56 -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:05.845 00:03:05.845 00:03:05.845 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.845 http://cunit.sourceforge.net/ 00:03:05.845 00:03:05.845 00:03:05.845 Suite: blobfs_sync_ut 00:03:05.845 Test: cache_read_after_write ...passed 00:03:05.845 Test: file_length ...[2024-04-16 20:40:56.838622] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:05.845 passed 00:03:05.845 Test: append_write_to_extend_blob ...passed 00:03:05.845 Test: partial_buffer ...passed 00:03:05.845 Test: cache_write_null_buffer ...passed 00:03:05.845 Test: fs_create_sync ...passed 00:03:05.845 Test: fs_rename_sync ...passed 00:03:05.845 Test: cache_append_no_cache ...passed 00:03:05.845 Test: fs_delete_file_without_close ...passed 00:03:05.845 00:03:05.845 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.845 suites 1 1 n/a 0 0 00:03:05.845 tests 9 9 9 0 0 00:03:05.845 asserts 345 345 345 0 n/a 00:03:05.845 00:03:05.845 Elapsed time = 0.250 seconds 00:03:05.845 20:40:56 -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:05.845 00:03:05.845 00:03:05.845 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.845 http://cunit.sourceforge.net/ 00:03:05.845 00:03:05.845 00:03:05.845 Suite: blobfs_bdev_ut 
00:03:05.845 Test: spdk_blobfs_bdev_detect_test ...[2024-04-16 20:40:56.929044] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:05.845 passed 00:03:05.845 Test: spdk_blobfs_bdev_create_test ...passed 00:03:05.845 Test: spdk_blobfs_bdev_mount_test ...passed[2024-04-16 20:40:56.929442] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:05.845 00:03:05.845 00:03:05.845 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.845 suites 1 1 n/a 0 0 00:03:05.845 tests 3 3 3 0 0 00:03:05.845 asserts 9 9 9 0 n/a 00:03:05.845 00:03:05.845 Elapsed time = 0.000 seconds 00:03:05.845 00:03:05.845 real 0m9.237s 00:03:05.845 user 0m9.217s 00:03:05.845 sys 0m0.148s 00:03:05.845 20:40:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.845 20:40:56 -- common/autotest_common.sh@10 -- # set +x 00:03:05.845 ************************************ 00:03:05.845 END TEST unittest_blob_blobfs 00:03:05.845 ************************************ 00:03:06.107 20:40:56 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:03:06.107 20:40:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.107 20:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.107 20:40:56 -- common/autotest_common.sh@10 -- # set +x 00:03:06.107 ************************************ 00:03:06.107 START TEST unittest_event 00:03:06.107 ************************************ 00:03:06.107 20:40:56 -- common/autotest_common.sh@1104 -- # unittest_event 00:03:06.107 20:40:56 -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:06.107 00:03:06.107 00:03:06.107 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.107 http://cunit.sourceforge.net/ 00:03:06.107 00:03:06.107 00:03:06.107 Suite: app_suite 00:03:06.107 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:06.107 options: 00:03:06.107 -c, --config JSON config file (default none) 00:03:06.107 --json JSON config file (default none) 00:03:06.107 --json-ignore-init-errors 00:03:06.107 don't exit on invalid config entry 00:03:06.107 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:06.107 -g, --single-file-segments 00:03:06.107 force creating just one hugetlbfs file 00:03:06.107 -h, --help show this usage 00:03:06.107 -i, --shm-id shared memory ID (optional) 00:03:06.107 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:06.107 --lcores lcore to CPU mapping list. The list is in the format: 00:03:06.107 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:03:06.107 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:06.107 Within the group, '-' is used for range separator, 00:03:06.107 ',' is used for single number separator. 00:03:06.107 '( )' can be omitted for single element group, 00:03:06.107 '@' can be omitted if cpus and lcores have the same value 00:03:06.107 -n, --mem-channels channel number of memory channels used for DPDK 00:03:06.107 -p, --main-core main (primary) core for DPDK 00:03:06.107 app_ut: invalid option -- z 00:03:06.107 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:06.107 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:06.107 --disable-cpumask-locks Disable CPU core lock files.
00:03:06.107 --silence-noticelog disable notice level logging to stderr 00:03:06.107 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:06.107 -u, --no-pci disable PCI access 00:03:06.107 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:06.107 --max-delay maximum reactor delay (in microseconds) 00:03:06.107 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:06.107 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:06.107 -R, --huge-unlink unlink huge files after initialization 00:03:06.107 -v, --version print SPDK version 00:03:06.107 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:06.107 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:06.107 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:06.107 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:06.107 Tracepoints vary in size and can use more than one trace entry. 00:03:06.107 --rpcs-allowed comma-separated list of permitted RPCS 00:03:06.107 --env-context Opaque context for use of the env implementation 00:03:06.107 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:06.107 --no-huge run without using hugepages 00:03:06.107 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:06.107 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:03:06.107 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:06.107 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:06.107 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:06.107 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:06.107 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:06.107 app_ut [options] 00:03:06.107 options: 00:03:06.107 -c, --config JSON config file (default none) 00:03:06.107 --json JSON config file (default none) 00:03:06.107 --json-ignore-init-errors 00:03:06.107 don't exit on invalid config entry 00:03:06.107 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:06.107 -g, --single-file-segments 00:03:06.107 force creating just one hugetlbfs file 00:03:06.107 -h, --help show this usage 00:03:06.107 -i, --shm-id shared memory ID (optional) 00:03:06.107 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:06.107 --lcores lcore to CPU mapping list. The list is in the format: 00:03:06.107 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:03:06.107 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:06.107 Within the group, '-' is used for range separator, 00:03:06.107 ',' is used for single number separator. 00:03:06.107 '( )' can be omitted for single element group, 00:03:06.107 '@' can be omitted if cpus and lcores have the same value 00:03:06.107 -n, --mem-channels channel number of memory channels used for DPDK 00:03:06.107 -p, --main-core main (primary) core for DPDK 00:03:06.107 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:06.107 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:06.107 --disable-cpumask-locks Disable CPU core lock files.
00:03:06.107 --silence-noticelog disable notice level logging to stderr 00:03:06.107 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:06.107 app_ut: unrecognized option `--test-long-opt' 00:03:06.107 -u, --no-pci disable PCI access 00:03:06.107 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:06.107 --max-delay maximum reactor delay (in microseconds) 00:03:06.107 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:06.107 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:06.107 -R, --huge-unlink unlink huge files after initialization 00:03:06.107 -v, --version print SPDK version 00:03:06.107 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:06.107 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:06.107 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:06.107 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:06.107 Tracepoints vary in size and can use more than one trace entry. 00:03:06.107 --rpcs-allowed comma-separated list of permitted RPCS 00:03:06.107 --env-context Opaque context for use of the env implementation 00:03:06.107 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:06.107 --no-huge run without using hugepages 00:03:06.107 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:06.107 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:03:06.107 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:06.107 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:06.107 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:06.107 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:06.107 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:06.107 [2024-04-16 20:40:56.985510] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1031:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:03:06.107 app_ut [options] 00:03:06.107 options: 00:03:06.107 -c, --config JSON config file (default none) 00:03:06.107 --json JSON config file (default none) 00:03:06.107 --json-ignore-init-errors 00:03:06.107 don't exit on invalid config entry 00:03:06.107 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:06.107 [2024-04-16 20:40:56.985894] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:06.107 -g, --single-file-segments 00:03:06.107 force creating just one hugetlbfs file 00:03:06.107 -h, --help show this usage 00:03:06.107 -i, --shm-id shared memory ID (optional) 00:03:06.107 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:06.107 --lcores lcore to CPU mapping list. The list is in the format: 00:03:06.107 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:03:06.107 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:06.107 Within the group, '-' is used for range separator, 00:03:06.107 ',' is used for single number separator.
00:03:06.107 '( )' can be omitted for single element group, 00:03:06.107 '@' can be omitted if cpus and lcores have the same value 00:03:06.107 -n, --mem-channels channel number of memory channels used for DPDK 00:03:06.107 -p, --main-core main (primary) core for DPDK 00:03:06.107 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:06.107 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:06.108 --disable-cpumask-locks Disable CPU core lock files. 00:03:06.108 --silence-noticelog disable notice level logging to stderr 00:03:06.108 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:06.108 -u, --no-pci disable PCI access 00:03:06.108 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:06.108 --max-delay maximum reactor delay (in microseconds) 00:03:06.108 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:06.108 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:06.108 -R, --huge-unlink unlink huge files after initialization 00:03:06.108 -v, --version print SPDK version 00:03:06.108 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:06.108 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:06.108 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:06.108 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:06.108 Tracepoints vary in size and can use more than one trace entry. 00:03:06.108 --rpcs-allowed comma-separated list of permitted RPCS 00:03:06.108 --env-context Opaque context for use of the env implementation 00:03:06.108 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:06.108 --no-huge run without using hugepages 00:03:06.108 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:06.108 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:03:06.108 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:06.108 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:06.108 Groups and masks can be combined (e.g. thread,bdev:0x1).
00:03:06.108 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:06.108 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:06.108 [2024-04-16 20:40:56.986103] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:06.108 passed 00:03:06.108 00:03:06.108 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.108 suites 1 1 n/a 0 0 00:03:06.108 tests 1 1 1 0 0 00:03:06.108 asserts 8 8 8 0 n/a 00:03:06.108 00:03:06.108 Elapsed time = 0.000 seconds 00:03:06.108 20:40:56 -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:06.108 00:03:06.108 00:03:06.108 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.108 http://cunit.sourceforge.net/ 00:03:06.108 00:03:06.108 00:03:06.108 Suite: app_suite 00:03:06.108 Test: test_create_reactor ...passed 00:03:06.108 Test: test_init_reactors ...passed 00:03:06.108 Test: test_event_call ...passed 00:03:06.108 Test: test_schedule_thread ...passed 00:03:06.108 Test: test_reschedule_thread ...passed 00:03:06.108 Test: test_bind_thread ...passed 00:03:06.108 Test: test_for_each_reactor ...passed 00:03:06.108 Test: test_reactor_stats ...passed 00:03:06.108 Test: test_scheduler ...passed 00:03:06.108 Test: test_governor ...passed 00:03:06.108 00:03:06.108 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.108 suites 1 1 n/a 0 0 00:03:06.108 tests 10 10 10 0 0 00:03:06.108 asserts 336 336 336 0 n/a 00:03:06.108 00:03:06.108 Elapsed time = 0.008 seconds 00:03:06.108 00:03:06.108 real 0m0.023s 00:03:06.108 user 0m0.007s 00:03:06.108 sys 0m0.016s 00:03:06.108 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.108 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.108 ************************************ 00:03:06.108 END TEST unittest_event 00:03:06.108 ************************************ 00:03:06.108 20:40:57 -- unit/unittest.sh@233 -- # uname -s 00:03:06.108 20:40:57 -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:03:06.108 20:40:57 -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:06.108 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.108 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.108 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.108 ************************************ 00:03:06.108 START TEST unittest_accel 00:03:06.108 ************************************ 00:03:06.108 20:40:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:06.108 00:03:06.108 00:03:06.108 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.108 http://cunit.sourceforge.net/ 00:03:06.108 00:03:06.108 00:03:06.108 Suite: accel_sequence 00:03:06.108 Test: test_sequence_fill_copy ...passed 00:03:06.108 Test: test_sequence_abort ...passed 00:03:06.108 Test: test_sequence_append_error ...passed 00:03:06.108 Test: test_sequence_completion_error ...[2024-04-16 20:40:57.066007] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82de0e780 00:03:06.108 [2024-04-16 20:40:57.066419] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute 
decompress operation, sequence: 0x82de0e780 00:03:06.108 [2024-04-16 20:40:57.066461] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82de0e780 00:03:06.108 [2024-04-16 20:40:57.066480] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82de0e780 00:03:06.108 passed 00:03:06.108 Test: test_sequence_decompress ...passed 00:03:06.108 Test: test_sequence_reverse ...passed 00:03:06.108 Test: test_sequence_copy_elision ...passed 00:03:06.108 Test: test_sequence_accel_buffers ...passed 00:03:06.108 Test: test_sequence_memory_domain ...[2024-04-16 20:40:57.068872] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1729:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:06.108 [2024-04-16 20:40:57.068931] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1768:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:06.108 passed 00:03:06.108 Test: test_sequence_module_memory_domain ...passed 00:03:06.108 Test: test_sequence_crypto ...passed 00:03:06.108 Test: test_sequence_driver ...[2024-04-16 20:40:57.069795] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1876:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82de0eb00 using driver: ut 00:03:06.108 [2024-04-16 20:40:57.069843] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1941:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82de0eb00 through driver: ut 00:03:06.108 passed 00:03:06.108 Test: test_sequence_same_iovs ...passed 00:03:06.108 Test: test_sequence_crc32 ...passed 00:03:06.108 Suite: accel 00:03:06.108 Test: test_spdk_accel_task_complete ...passed 00:03:06.108 Test: test_get_task ...passed 00:03:06.108 Test: test_spdk_accel_submit_copy ...passed 00:03:06.108 Test: test_spdk_accel_submit_dualcast ...[2024-04-16 20:40:57.070575] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:06.108 passed 00:03:06.108 Test: test_spdk_accel_submit_compare ...passed 00:03:06.108 Test: test_spdk_accel_submit_fill ...[2024-04-16 20:40:57.070602] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:06.108 passed 00:03:06.108 Test: test_spdk_accel_submit_crc32c ...passed 00:03:06.108 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:06.108 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:06.108 Test: test_spdk_accel_submit_xor ...passed 00:03:06.108 Test: test_spdk_accel_module_find_by_name ...passed 00:03:06.108 Test: test_spdk_accel_module_register ...passed 00:03:06.108 00:03:06.108 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.108 suites 2 2 n/a 0 0 00:03:06.108 tests 26 26 26 0 0 00:03:06.108 asserts 831 831 831 0 n/a 00:03:06.108 00:03:06.108 Elapsed time = 0.008 seconds 00:03:06.108 00:03:06.108 real 0m0.018s 00:03:06.108 user 0m0.017s 00:03:06.108 sys 0m0.000s 00:03:06.108 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.108 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.108 ************************************ 00:03:06.108 END TEST unittest_accel 00:03:06.108 ************************************ 00:03:06.108 20:40:57 -- unit/unittest.sh@238 -- # run_test unittest_ioat 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:06.108 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.108 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.108 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.108 ************************************ 00:03:06.108 START TEST unittest_ioat 00:03:06.108 ************************************ 00:03:06.108 20:40:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:06.108 00:03:06.108 00:03:06.108 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.108 http://cunit.sourceforge.net/ 00:03:06.108 00:03:06.108 00:03:06.108 Suite: ioat 00:03:06.108 Test: ioat_state_check ...passed 00:03:06.108 00:03:06.108 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.108 suites 1 1 n/a 0 0 00:03:06.108 tests 1 1 1 0 0 00:03:06.108 asserts 32 32 32 0 n/a 00:03:06.108 00:03:06.108 Elapsed time = 0.000 seconds 00:03:06.108 00:03:06.108 real 0m0.007s 00:03:06.108 user 0m0.006s 00:03:06.108 sys 0m0.000s 00:03:06.108 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.108 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.108 ************************************ 00:03:06.108 END TEST unittest_ioat 00:03:06.108 ************************************ 00:03:06.108 20:40:57 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:06.109 20:40:57 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:06.109 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.109 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.109 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.109 ************************************ 00:03:06.109 START TEST unittest_idxd_user 00:03:06.109 ************************************ 00:03:06.109 20:40:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:06.109 00:03:06.109 00:03:06.109 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.109 http://cunit.sourceforge.net/ 00:03:06.109 00:03:06.109 00:03:06.109 Suite: idxd_user 00:03:06.109 Test: test_idxd_wait_cmd ...[2024-04-16 20:40:57.189587] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:06.109 [2024-04-16 20:40:57.189994] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:06.109 passed 00:03:06.109 Test: test_idxd_reset_dev ...passed 00:03:06.109 Test: test_idxd_group_config ...passed 00:03:06.109 Test: test_idxd_wq_config ...[2024-04-16 20:40:57.190071] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:06.109 [2024-04-16 20:40:57.190095] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:06.109 passed 00:03:06.109 00:03:06.109 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.109 suites 1 1 n/a 0 0 00:03:06.109 tests 4 4 4 0 0 00:03:06.109 asserts 20 20 20 0 n/a 00:03:06.109 00:03:06.109 Elapsed time = 0.000 seconds 00:03:06.109 00:03:06.109 real 0m0.009s 00:03:06.109 user 0m0.001s 00:03:06.109 sys 0m0.008s 00:03:06.109 20:40:57 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.109 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.109 ************************************ 00:03:06.109 END TEST unittest_idxd_user 00:03:06.109 ************************************ 00:03:06.369 20:40:57 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:03:06.369 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.369 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.369 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.369 ************************************ 00:03:06.369 START TEST unittest_iscsi 00:03:06.369 ************************************ 00:03:06.369 20:40:57 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:03:06.369 20:40:57 -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:06.369 00:03:06.369 00:03:06.369 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.369 http://cunit.sourceforge.net/ 00:03:06.369 00:03:06.369 00:03:06.369 Suite: conn_suite 00:03:06.369 Test: read_task_split_in_order_case ...passed 00:03:06.369 Test: read_task_split_reverse_order_case ...passed 00:03:06.369 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:06.369 Test: process_non_read_task_completion_test ...passed 00:03:06.369 Test: free_tasks_on_connection ...passed 00:03:06.369 Test: free_tasks_with_queued_datain ...passed 00:03:06.369 Test: abort_queued_datain_task_test ...passed 00:03:06.369 Test: abort_queued_datain_tasks_test ...passed 00:03:06.369 00:03:06.369 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.369 suites 1 1 n/a 0 0 00:03:06.369 tests 8 8 8 0 0 00:03:06.369 asserts 230 230 230 0 n/a 00:03:06.369 00:03:06.369 Elapsed time = 0.000 seconds 00:03:06.369 20:40:57 -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:06.369 00:03:06.369 00:03:06.369 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.369 http://cunit.sourceforge.net/ 00:03:06.369 00:03:06.369 00:03:06.369 Suite: iscsi_suite 00:03:06.369 Test: param_negotiation_test ...passed 00:03:06.369 Test: list_negotiation_test ...passed 00:03:06.369 Test: parse_valid_test ...passed 00:03:06.369 Test: parse_invalid_test ...[2024-04-16 20:40:57.259094] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:03:06.369 [2024-04-16 20:40:57.259494] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:03:06.369 [2024-04-16 20:40:57.259539] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:03:06.369 [2024-04-16 20:40:57.259594] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:06.369 [2024-04-16 20:40:57.259628] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:06.369 [2024-04-16 20:40:57.259653] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:06.369 passed 00:03:06.369 00:03:06.369 [2024-04-16 20:40:57.259676] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:06.369 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.369 suites 1 1 n/a 0 0 00:03:06.369 tests 4 4 4 0 0 00:03:06.369 asserts 161 161 161 0 n/a 00:03:06.369 00:03:06.369 Elapsed 
time = 0.000 seconds 00:03:06.369 20:40:57 -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:06.369 00:03:06.369 00:03:06.369 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.369 http://cunit.sourceforge.net/ 00:03:06.369 00:03:06.369 00:03:06.369 Suite: iscsi_target_node_suite 00:03:06.369 Test: add_lun_test_cases ...[2024-04-16 20:40:57.268336] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1249:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:06.369 [2024-04-16 20:40:57.268687] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:06.369 [2024-04-16 20:40:57.268734] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:06.369 [2024-04-16 20:40:57.268755] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:06.369 passed 00:03:06.369 Test: allow_any_allowed ...passed[2024-04-16 20:40:57.268774] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:06.369 00:03:06.369 Test: allow_ipv6_allowed ...passed 00:03:06.369 Test: allow_ipv6_denied ...passed 00:03:06.369 Test: allow_ipv6_invalid ...passed 00:03:06.369 Test: allow_ipv4_allowed ...passed 00:03:06.369 Test: allow_ipv4_denied ...passed 00:03:06.369 Test: allow_ipv4_invalid ...passed 00:03:06.369 Test: node_access_allowed ...passed 00:03:06.369 Test: node_access_denied_by_empty_netmask ...passed 00:03:06.369 Test: node_access_multi_initiator_groups_cases ...passed 00:03:06.369 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:06.369 Test: chap_param_test_cases ...[2024-04-16 20:40:57.268961] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:06.369 [2024-04-16 20:40:57.268993] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:06.369 [2024-04-16 20:40:57.269013] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:06.369 [2024-04-16 20:40:57.269032] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:06.369 passed[2024-04-16 20:40:57.269051] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:06.369 00:03:06.369 00:03:06.369 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.369 suites 1 1 n/a 0 0 00:03:06.369 tests 13 13 13 0 0 00:03:06.369 asserts 50 50 50 0 n/a 00:03:06.369 00:03:06.369 Elapsed time = 0.000 seconds 00:03:06.369 20:40:57 -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:06.369 00:03:06.370 00:03:06.370 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.370 http://cunit.sourceforge.net/ 00:03:06.370 00:03:06.370 00:03:06.370 Suite: iscsi_suite 00:03:06.370 Test: op_login_check_target_test ...[2024-04-16 20:40:57.278916] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:03:06.370 passed 00:03:06.370 Test: op_login_session_normal_test 
...[2024-04-16 20:40:57.279408] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:06.370 [2024-04-16 20:40:57.279459] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:06.370 [2024-04-16 20:40:57.279481] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:06.370 [2024-04-16 20:40:57.279542] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:06.370 [2024-04-16 20:40:57.279567] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:06.370 [2024-04-16 20:40:57.279611] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:06.370 [2024-04-16 20:40:57.279632] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:06.370 passed 00:03:06.370 Test: maxburstlength_test ...[2024-04-16 20:40:57.279768] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:06.370 [2024-04-16 20:40:57.279823] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4551:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:06.370 passed 00:03:06.370 Test: underflow_for_read_transfer_test ...passed 00:03:06.370 Test: underflow_for_zero_read_transfer_test ...passed 00:03:06.370 Test: underflow_for_request_sense_test ...passed 00:03:06.370 Test: underflow_for_check_condition_test ...passed 00:03:06.370 Test: add_transfer_task_test ...passed 00:03:06.370 Test: get_transfer_task_test ...passed 00:03:06.370 Test: del_transfer_task_test ...passed 00:03:06.370 Test: clear_all_transfer_tasks_test ...passed 00:03:06.370 Test: build_iovs_test ...passed 00:03:06.370 Test: build_iovs_with_md_test ...passed 00:03:06.370 Test: pdu_hdr_op_login_test ...[2024-04-16 20:40:57.280269] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:06.370 [2024-04-16 20:40:57.280321] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_text_test ...[2024-04-16 20:40:57.280367] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:06.370 [2024-04-16 20:40:57.280407] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2241:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:06.370 [2024-04-16 20:40:57.280440] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_logout_test ...[2024-04-16 20:40:57.280468] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2286:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 
00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_scsi_test ...[2024-04-16 20:40:57.280507] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2517:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:06.370 [2024-04-16 20:40:57.280546] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:06.370 [2024-04-16 20:40:57.280564] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:06.370 [2024-04-16 20:40:57.280582] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:06.370 [2024-04-16 20:40:57.280602] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3398:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:06.370 [2024-04-16 20:40:57.280621] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3405:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-16 20:40:57.280647] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:06.370 [2024-04-16 20:40:57.280681] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:06.370 [2024-04-16 20:40:57.280711] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_nopout_test ...[2024-04-16 20:40:57.280754] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:06.370 [2024-04-16 20:40:57.280810] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:06.370 [2024-04-16 20:40:57.280840] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:06.370 [2024-04-16 20:40:57.280866] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:06.370 passed 00:03:06.370 Test: pdu_hdr_op_data_test ...[2024-04-16 20:40:57.280900] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:06.370 [2024-04-16 20:40:57.280954] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:06.370 [2024-04-16 20:40:57.280984] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:06.370 [2024-04-16 20:40:57.281017] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:06.370 [2024-04-16 20:40:57.281045] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:06.370 [2024-04-16 20:40:57.281070] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: 
offset(4096) error 00:03:06.370 [2024-04-16 20:40:57.281094] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4245:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:06.370 passed 00:03:06.370 Test: empty_text_with_cbit_test ...passed 00:03:06.370 Test: pdu_payload_read_test ...[2024-04-16 20:40:57.281950] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4632:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:06.370 passed 00:03:06.370 Test: data_out_pdu_sequence_test ...passed 00:03:06.370 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:06.370 00:03:06.370 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.370 suites 1 1 n/a 0 0 00:03:06.370 tests 24 24 24 0 0 00:03:06.370 asserts 150253 150253 150253 0 n/a 00:03:06.370 00:03:06.370 Elapsed time = 0.000 seconds 00:03:06.370 20:40:57 -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:06.370 00:03:06.370 00:03:06.370 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.370 http://cunit.sourceforge.net/ 00:03:06.370 00:03:06.370 00:03:06.370 Suite: init_grp_suite 00:03:06.370 Test: create_initiator_group_success_case ...passed 00:03:06.370 Test: find_initiator_group_success_case ...passed 00:03:06.370 Test: register_initiator_group_twice_case ...passed 00:03:06.370 Test: add_initiator_name_success_case ...passed 00:03:06.370 Test: add_initiator_name_fail_case ...[2024-04-16 20:40:57.297218] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:06.370 passed 00:03:06.370 Test: delete_all_initiator_names_success_case ...passed 00:03:06.370 Test: add_netmask_success_case ...passed 00:03:06.370 Test: add_netmask_fail_case ...passed 00:03:06.370 Test: delete_all_netmasks_success_case ...passed 00:03:06.370 Test: initiator_name_overwrite_all_to_any_case ...[2024-04-16 20:40:57.297616] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:06.370 passed 00:03:06.370 Test: netmask_overwrite_all_to_any_case ...passed 00:03:06.370 Test: add_delete_initiator_names_case ...passed 00:03:06.370 Test: add_duplicated_initiator_names_case ...passed 00:03:06.370 Test: delete_nonexisting_initiator_names_case ...passed 00:03:06.370 Test: add_delete_netmasks_case ...passed 00:03:06.370 Test: add_duplicated_netmasks_case ...passed 00:03:06.370 Test: delete_nonexisting_netmasks_case ...passed 00:03:06.370 00:03:06.370 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.370 suites 1 1 n/a 0 0 00:03:06.370 tests 17 17 17 0 0 00:03:06.370 asserts 108 108 108 0 n/a 00:03:06.370 00:03:06.370 Elapsed time = 0.000 seconds 00:03:06.370 20:40:57 -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:06.370 00:03:06.370 00:03:06.370 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.370 http://cunit.sourceforge.net/ 00:03:06.370 00:03:06.370 00:03:06.370 Suite: portal_grp_suite 00:03:06.370 Test: portal_create_ipv4_normal_case ...passed 00:03:06.370 Test: portal_create_ipv6_normal_case ...passed 00:03:06.370 Test: portal_create_ipv4_wildcard_case ...passed 00:03:06.370 Test: portal_create_ipv6_wildcard_case ...passed 00:03:06.370 Test: portal_create_twice_case ...[2024-04-16 20:40:57.307632] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: 
portal (192.168.2.0, 3260) already exists 00:03:06.370 passed 00:03:06.370 Test: portal_grp_register_unregister_case ...passed 00:03:06.370 Test: portal_grp_register_twice_case ...passed 00:03:06.370 Test: portal_grp_add_delete_case ...passed 00:03:06.370 Test: portal_grp_add_delete_twice_case ...passed 00:03:06.370 00:03:06.370 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.370 suites 1 1 n/a 0 0 00:03:06.370 tests 9 9 9 0 0 00:03:06.370 asserts 44 44 44 0 n/a 00:03:06.370 00:03:06.370 Elapsed time = 0.000 seconds 00:03:06.370 00:03:06.370 real 0m0.069s 00:03:06.370 user 0m0.042s 00:03:06.370 sys 0m0.017s 00:03:06.370 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.370 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.370 ************************************ 00:03:06.370 END TEST unittest_iscsi 00:03:06.370 ************************************ 00:03:06.370 20:40:57 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:03:06.371 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.371 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.371 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.371 ************************************ 00:03:06.371 START TEST unittest_json 00:03:06.371 ************************************ 00:03:06.371 20:40:57 -- common/autotest_common.sh@1104 -- # unittest_json 00:03:06.371 20:40:57 -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:06.371 00:03:06.371 00:03:06.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.371 http://cunit.sourceforge.net/ 00:03:06.371 00:03:06.371 00:03:06.371 Suite: json 00:03:06.371 Test: test_parse_literal ...passed 00:03:06.371 Test: test_parse_string_simple ...passed 00:03:06.371 Test: test_parse_string_control_chars ...passed 00:03:06.371 Test: test_parse_string_utf8 ...passed 00:03:06.371 Test: test_parse_string_escapes_twochar ...passed 00:03:06.371 Test: test_parse_string_escapes_unicode ...passed 00:03:06.371 Test: test_parse_number ...passed 00:03:06.371 Test: test_parse_array ...passed 00:03:06.371 Test: test_parse_object ...passed 00:03:06.371 Test: test_parse_nesting ...passed 00:03:06.371 Test: test_parse_comment ...passed 00:03:06.371 00:03:06.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.371 suites 1 1 n/a 0 0 00:03:06.371 tests 11 11 11 0 0 00:03:06.371 asserts 1516 1516 1516 0 n/a 00:03:06.371 00:03:06.371 Elapsed time = 0.000 seconds 00:03:06.371 20:40:57 -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:06.371 00:03:06.371 00:03:06.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.371 http://cunit.sourceforge.net/ 00:03:06.371 00:03:06.371 00:03:06.371 Suite: json 00:03:06.371 Test: test_strequal ...passed 00:03:06.371 Test: test_num_to_uint16 ...passed 00:03:06.371 Test: test_num_to_int32 ...passed 00:03:06.371 Test: test_num_to_uint64 ...passed 00:03:06.371 Test: test_decode_object ...passed 00:03:06.371 Test: test_decode_array ...passed 00:03:06.371 Test: test_decode_bool ...passed 00:03:06.371 Test: test_decode_uint16 ...passed 00:03:06.371 Test: test_decode_int32 ...passed 00:03:06.371 Test: test_decode_uint32 ...passed 00:03:06.371 Test: test_decode_uint64 ...passed 00:03:06.371 Test: test_decode_string ...passed 00:03:06.371 Test: test_decode_uuid ...passed 00:03:06.371 Test: test_find ...passed 00:03:06.371 Test: 
test_find_array ...passed 00:03:06.371 Test: test_iterating ...passed 00:03:06.371 Test: test_free_object ...passed 00:03:06.371 00:03:06.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.371 suites 1 1 n/a 0 0 00:03:06.371 tests 17 17 17 0 0 00:03:06.371 asserts 236 236 236 0 n/a 00:03:06.371 00:03:06.371 Elapsed time = 0.000 seconds 00:03:06.371 20:40:57 -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:06.371 00:03:06.371 00:03:06.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.371 http://cunit.sourceforge.net/ 00:03:06.371 00:03:06.371 00:03:06.371 Suite: json 00:03:06.371 Test: test_write_literal ...passed 00:03:06.371 Test: test_write_string_simple ...passed 00:03:06.371 Test: test_write_string_escapes ...passed 00:03:06.371 Test: test_write_string_utf16le ...passed 00:03:06.371 Test: test_write_number_int32 ...passed 00:03:06.371 Test: test_write_number_uint32 ...passed 00:03:06.371 Test: test_write_number_uint128 ...passed 00:03:06.371 Test: test_write_string_number_uint128 ...passed 00:03:06.371 Test: test_write_number_int64 ...passed 00:03:06.371 Test: test_write_number_uint64 ...passed 00:03:06.371 Test: test_write_number_double ...passed 00:03:06.371 Test: test_write_uuid ...passed 00:03:06.371 Test: test_write_array ...passed 00:03:06.371 Test: test_write_object ...passed 00:03:06.371 Test: test_write_nesting ...passed 00:03:06.371 Test: test_write_val ...passed 00:03:06.371 00:03:06.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.371 suites 1 1 n/a 0 0 00:03:06.371 tests 16 16 16 0 0 00:03:06.371 asserts 918 918 918 0 n/a 00:03:06.371 00:03:06.371 Elapsed time = 0.000 seconds 00:03:06.371 20:40:57 -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:06.371 00:03:06.371 00:03:06.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.371 http://cunit.sourceforge.net/ 00:03:06.371 00:03:06.371 00:03:06.371 Suite: jsonrpc 00:03:06.371 Test: test_parse_request ...passed 00:03:06.371 Test: test_parse_request_streaming ...passed 00:03:06.371 00:03:06.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.371 suites 1 1 n/a 0 0 00:03:06.371 tests 2 2 2 0 0 00:03:06.371 asserts 289 289 289 0 n/a 00:03:06.371 00:03:06.371 Elapsed time = 0.000 seconds 00:03:06.371 00:03:06.371 real 0m0.031s 00:03:06.371 user 0m0.013s 00:03:06.371 sys 0m0.017s 00:03:06.371 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.371 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.371 ************************************ 00:03:06.371 END TEST unittest_json 00:03:06.371 ************************************ 00:03:06.371 20:40:57 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:03:06.371 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.371 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.371 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.371 ************************************ 00:03:06.371 START TEST unittest_rpc 00:03:06.371 ************************************ 00:03:06.371 20:40:57 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:03:06.371 20:40:57 -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:06.371 00:03:06.371 00:03:06.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.371 http://cunit.sourceforge.net/ 00:03:06.371 
00:03:06.371 00:03:06.371 Suite: rpc 00:03:06.371 Test: test_jsonrpc_handler ...passed 00:03:06.371 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:06.371 Test: test_rpc_get_methods ...[2024-04-16 20:40:57.440075] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:06.371 passed 00:03:06.371 Test: test_rpc_spdk_get_version ...passed 00:03:06.371 Test: test_spdk_rpc_listen_close ...passed 00:03:06.371 00:03:06.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.371 suites 1 1 n/a 0 0 00:03:06.371 tests 5 5 5 0 0 00:03:06.371 asserts 20 20 20 0 n/a 00:03:06.371 00:03:06.371 Elapsed time = 0.000 seconds 00:03:06.371 00:03:06.371 real 0m0.009s 00:03:06.371 user 0m0.007s 00:03:06.371 sys 0m0.001s 00:03:06.371 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.371 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.371 ************************************ 00:03:06.371 END TEST unittest_rpc 00:03:06.371 ************************************ 00:03:06.371 20:40:57 -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:06.371 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.371 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.371 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.631 ************************************ 00:03:06.631 START TEST unittest_notify 00:03:06.631 ************************************ 00:03:06.631 20:40:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:06.631 00:03:06.631 00:03:06.631 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.631 http://cunit.sourceforge.net/ 00:03:06.631 00:03:06.631 00:03:06.631 Suite: app_suite 00:03:06.631 Test: notify ...passed 00:03:06.631 00:03:06.631 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.632 suites 1 1 n/a 0 0 00:03:06.632 tests 1 1 1 0 0 00:03:06.632 asserts 13 13 13 0 n/a 00:03:06.632 00:03:06.632 Elapsed time = 0.000 seconds 00:03:06.632 00:03:06.632 real 0m0.009s 00:03:06.632 user 0m0.001s 00:03:06.632 sys 0m0.008s 00:03:06.632 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.632 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.632 ************************************ 00:03:06.632 END TEST unittest_notify 00:03:06.632 ************************************ 00:03:06.632 20:40:57 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:03:06.632 20:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.632 20:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.632 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:06.632 ************************************ 00:03:06.632 START TEST unittest_nvme 00:03:06.632 ************************************ 00:03:06.632 20:40:57 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:03:06.632 20:40:57 -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:06.632 00:03:06.632 00:03:06.632 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.632 http://cunit.sourceforge.net/ 00:03:06.632 00:03:06.632 00:03:06.632 Suite: nvme 00:03:06.632 Test: test_opc_data_transfer ...passed 00:03:06.632 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:06.632 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 
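Every *_ut binary in this log prints the same CUnit banner followed by Suite:/Test: lines and a Run Summary table. For orientation, here is a minimal sketch of how such a binary is assembled with CUnit's basic interface; the suite and test names are illustrative, not taken from SPDK:

#include <CUnit/Basic.h>

static void
example_test(void)
{
        /* Each CU_ASSERT* feeds the "asserts" column of the Run Summary. */
        CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
                return CU_get_error();
        }
        suite = CU_add_suite("example_suite", NULL, NULL); /* prints as "Suite: ..." */
        CU_add_test(suite, "example_test", example_test);  /* prints as "Test: ... passed" */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();                              /* emits the Run Summary table */
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return (int)num_failures;
}

The exit code is the failure count, which is what lets the surrounding run_test wrapper treat a non-zero status as a failed test group and print the END TEST marker otherwise.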
00:03:06.632 Test: test_trid_parse_and_compare ...[2024-04-16 20:40:57.563785] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:06.632 [2024-04-16 20:40:57.564312] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:06.632 [2024-04-16 20:40:57.564368] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1180:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:06.632 [2024-04-16 20:40:57.564427] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:06.632 [2024-04-16 20:40:57.564463] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:03:06.632 [2024-04-16 20:40:57.564495] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:06.632 passed 00:03:06.632 Test: test_trid_trtype_str ...passed 00:03:06.632 Test: test_trid_adrfam_str ...passed 00:03:06.632 Test: test_nvme_ctrlr_probe ...[2024-04-16 20:40:57.564792] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:06.632 passed 00:03:06.632 Test: test_spdk_nvme_probe ...[2024-04-16 20:40:57.564861] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:06.632 [2024-04-16 20:40:57.564896] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:06.632 [2024-04-16 20:40:57.564936] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:06.632 [2024-04-16 20:40:57.564969] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:06.632 passed 00:03:06.632 Test: test_spdk_nvme_connect ...[2024-04-16 20:40:57.565031] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:06.632 [2024-04-16 20:40:57.565193] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:06.632 [2024-04-16 20:40:57.565243] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:03:06.632 passed 00:03:06.632 Test: test_nvme_ctrlr_probe_internal ...[2024-04-16 20:40:57.565306] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:06.632 passed 00:03:06.632 Test: test_nvme_init_controllers ...[2024-04-16 20:40:57.565341] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:06.632 passed 00:03:06.632 Test: test_nvme_driver_init ...[2024-04-16 20:40:57.565384] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:06.632 [2024-04-16 20:40:57.565435] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:06.632 [2024-04-16 20:40:57.565470] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:06.632 passed 00:03:06.632 Test: test_spdk_nvme_detach ...passed 00:03:06.632 Test: 
test_nvme_completion_poll_cb ...passed 00:03:06.632 Test: test_nvme_user_copy_cmd_complete ...[2024-04-16 20:40:57.676209] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:06.632 passed 00:03:06.632 Test: test_nvme_allocate_request_null ...passed 00:03:06.632 Test: test_nvme_allocate_request ...passed 00:03:06.632 Test: test_nvme_free_request ...passed 00:03:06.632 Test: test_nvme_allocate_request_user_copy ...passed 00:03:06.632 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:06.632 Test: test_nvme_request_check_timeout ...passed 00:03:06.632 Test: test_nvme_wait_for_completion ...passed 00:03:06.632 Test: test_spdk_nvme_parse_func ...passed 00:03:06.632 Test: test_spdk_nvme_detach_async ...passed 00:03:06.632 Test: test_nvme_parse_addr ...[2024-04-16 20:40:57.676620] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:06.632 passed 00:03:06.632 00:03:06.632 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.632 suites 1 1 n/a 0 0 00:03:06.632 tests 25 25 25 0 0 00:03:06.632 asserts 326 326 326 0 n/a 00:03:06.632 00:03:06.632 Elapsed time = 0.008 seconds 00:03:06.632 20:40:57 -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:06.632 00:03:06.632 00:03:06.632 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.632 http://cunit.sourceforge.net/ 00:03:06.632 00:03:06.632 00:03:06.632 Suite: nvme_ctrlr 00:03:06.632 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-16 20:40:57.687271] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-16 20:40:57.688982] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-16 20:40:57.690187] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-16 20:40:57.691378] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-16 20:40:57.692574] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 [2024-04-16 20:40:57.693733] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-16 20:40:57.694927] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-16 20:40:57.696099] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-16 20:40:57.698417] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 [2024-04-16 20:40:57.700701] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-16 20:40:57.701887] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:06.632 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-16 20:40:57.704196] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 [2024-04-16 20:40:57.705343] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-16 20:40:57.707607] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:06.632 Test: test_nvme_ctrlr_init_delay ...[2024-04-16 20:40:57.709923] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_alloc_io_qpair_rr_1 ...[2024-04-16 20:40:57.711139] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 [2024-04-16 20:40:57.711194] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:06.632 [2024-04-16 20:40:57.711222] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:06.632 [2024-04-16 20:40:57.711248] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:06.632 passed 00:03:06.632 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-04-16 20:40:57.711269] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:06.632 passed 00:03:06.632 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:06.632 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-16 20:40:57.711376] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 passed 00:03:06.632 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-16 20:40:57.711418] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.632 [2024-04-16 20:40:57.711446] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:06.632 passed 00:03:06.632 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-16 20:40:57.711489] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 
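The test_trid_parse_and_compare failures logged earlier in this suite come from feeding deliberately malformed strings into spdk_nvme_transport_id_parse(), which expects space-separated key:value pairs. A minimal sketch with one well-formed and one malformed ID; the PCI address is illustrative:

#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
        struct spdk_nvme_transport_id trid = {0};

        /* Well-formed: space-separated key:value pairs. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:05:00.0") == 0) {
                printf("parsed traddr=%s\n", trid.traddr);
        }

        /* Malformed: "trtype" has no ':' separator, so parsing fails with the
         * "Key without ':' or '=' separator" error seen in the log above. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype PCIe") != 0) {
                printf("malformed trid rejected as expected\n");
        }
        return 0;
}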
00:03:06.632 [2024-04-16 20:40:57.711512] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:06.632 [2024-04-16 20:40:57.711534] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:06.633 passed 00:03:06.633 Test: test_nvme_ctrlr_fail ...[2024-04-16 20:40:57.711554] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:06.633 passed 00:03:06.633 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-04-16 20:40:57.711587] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:03:06.633 passed 00:03:06.633 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:06.633 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:06.633 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-16 20:40:57.711733] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:06.893 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:06.893 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:06.893 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-16 20:40:57.756204] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-16 20:40:57.762737] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-16 20:40:57.763843] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 [2024-04-16 20:40:57.763858] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:06.893 passed 00:03:06.893 Test: test_alloc_io_qpair_fail ...passed 00:03:06.893 Test: test_nvme_ctrlr_add_remove_process ...[2024-04-16 20:40:57.764961] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 [2024-04-16 20:40:57.764977] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:06.893 Test: test_nvme_ctrlr_set_state ...passed 00:03:06.893 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-16 20:40:57.764994] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
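The test_spdk_nvme_ctrlr_update_firmware errors above ("invalid size", image download and commit failures) correspond to the stages of spdk_nvme_ctrlr_update_firmware(). A sketch of the call under its size constraint; obtaining ctrlr (for example via spdk_nvme_connect()) is elided, and the slot number and commit action are illustrative:

#include "spdk/nvme.h"

static int
update_fw(struct spdk_nvme_ctrlr *ctrlr, void *image, uint32_t size)
{
        struct spdk_nvme_status status;

        /* size must be non-zero and dword-aligned, or the call fails up
         * front with the "invalid size!" error seen above; otherwise it
         * downloads the image and then issues the firmware commit, the
         * two later failure points exercised by the unit test. */
        return spdk_nvme_ctrlr_update_firmware(ctrlr, image, size, 1 /* slot */,
                                               SPDK_NVME_FW_COMMIT_REPLACE_AND_ENABLE_IMG,
                                               &status);
}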
00:03:06.893 [2024-04-16 20:40:57.765001] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-16 20:40:57.768462] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-16 20:40:57.776105] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_reset ...[2024-04-16 20:40:57.777258] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_aer_callback ...[2024-04-16 20:40:57.777313] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-16 20:40:57.778434] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:06.893 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:06.893 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-16 20:40:57.779682] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:06.893 Test: test_nvme_ctrlr_ana_resize ...[2024-04-16 20:40:57.780820] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:06.893 Test: test_nvme_transport_ctrlr_ready ...[2024-04-16 20:40:57.781968] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:06.893 [2024-04-16 20:40:57.781991] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:03:06.893 passed 00:03:06.893 Test: test_nvme_ctrlr_disable ...[2024-04-16 20:40:57.782005] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:06.893 passed 00:03:06.893 00:03:06.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.893 suites 1 1 n/a 0 0 00:03:06.893 tests 43 43 43 0 0 00:03:06.893 asserts 10418 10418 10418 0 n/a 00:03:06.893 00:03:06.893 Elapsed time = 0.055 seconds 00:03:06.893 20:40:57 -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:06.893 00:03:06.893 00:03:06.893 CUnit - A 
unit testing framework for C - Version 2.1-3 00:03:06.893 http://cunit.sourceforge.net/ 00:03:06.893 00:03:06.893 00:03:06.893 Suite: nvme_ctrlr_cmd 00:03:06.893 Test: test_get_log_pages ...passed 00:03:06.893 Test: test_set_feature_cmd ...passed 00:03:06.893 Test: test_set_feature_ns_cmd ...passed 00:03:06.893 Test: test_get_feature_cmd ...passed 00:03:06.893 Test: test_get_feature_ns_cmd ...passed 00:03:06.893 Test: test_abort_cmd ...passed 00:03:06.893 Test: test_set_host_id_cmds ...[2024-04-16 20:40:57.793752] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:06.893 passed 00:03:06.893 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:06.893 Test: test_io_raw_cmd ...passed 00:03:06.893 Test: test_io_raw_cmd_with_md ...passed 00:03:06.893 Test: test_namespace_attach ...passed 00:03:06.893 Test: test_namespace_detach ...passed 00:03:06.893 Test: test_namespace_create ...passed 00:03:06.893 Test: test_namespace_delete ...passed 00:03:06.893 Test: test_doorbell_buffer_config ...passed 00:03:06.893 Test: test_format_nvme ...passed 00:03:06.893 Test: test_fw_commit ...passed 00:03:06.893 Test: test_fw_image_download ...passed 00:03:06.893 Test: test_sanitize ...passed 00:03:06.893 Test: test_directive ...passed 00:03:06.893 Test: test_nvme_request_add_abort ...passed 00:03:06.893 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:06.893 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:06.893 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:06.893 00:03:06.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.893 suites 1 1 n/a 0 0 00:03:06.893 tests 24 24 24 0 0 00:03:06.893 asserts 198 198 198 0 n/a 00:03:06.893 00:03:06.893 Elapsed time = 0.000 seconds 00:03:06.893 20:40:57 -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:06.893 00:03:06.893 00:03:06.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.893 http://cunit.sourceforge.net/ 00:03:06.893 00:03:06.893 00:03:06.893 Suite: nvme_ctrlr_cmd 00:03:06.893 Test: test_geometry_cmd ...passed 00:03:06.894 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:06.894 00:03:06.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.894 suites 1 1 n/a 0 0 00:03:06.894 tests 2 2 2 0 0 00:03:06.894 asserts 7 7 7 0 n/a 00:03:06.894 00:03:06.894 Elapsed time = 0.000 seconds 00:03:06.894 20:40:57 -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:06.894 00:03:06.894 00:03:06.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.894 http://cunit.sourceforge.net/ 00:03:06.894 00:03:06.894 00:03:06.894 Suite: nvme 00:03:06.894 Test: test_nvme_ns_construct ...passed 00:03:06.894 Test: test_nvme_ns_uuid ...passed 00:03:06.894 Test: test_nvme_ns_csi ...passed 00:03:06.894 Test: test_nvme_ns_data ...passed 00:03:06.894 Test: test_nvme_ns_set_identify_data ...passed 00:03:06.894 Test: test_spdk_nvme_ns_get_values ...passed 00:03:06.894 Test: test_spdk_nvme_ns_is_active ...passed 00:03:06.894 Test: spdk_nvme_ns_supports ...passed 00:03:06.894 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:06.894 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:06.894 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:06.894 Test: test_nvme_ns_find_id_desc ...passed 00:03:06.894 00:03:06.894 Run Summary: Type Total Ran Passed Failed 
Inactive 00:03:06.894 suites 1 1 n/a 0 0 00:03:06.894 tests 12 12 12 0 0 00:03:06.894 asserts 83 83 83 0 n/a 00:03:06.894 00:03:06.894 Elapsed time = 0.000 seconds 00:03:06.894 20:40:57 -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:06.894 00:03:06.894 00:03:06.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.894 http://cunit.sourceforge.net/ 00:03:06.894 00:03:06.894 00:03:06.894 Suite: nvme_ns_cmd 00:03:06.894 Test: split_test ...passed 00:03:06.894 Test: split_test2 ...passed 00:03:06.894 Test: split_test3 ...passed 00:03:06.894 Test: split_test4 ...passed 00:03:06.894 Test: test_nvme_ns_cmd_flush ...passed 00:03:06.894 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:06.894 Test: test_nvme_ns_cmd_copy ...passed 00:03:06.894 Test: test_io_flags ...[2024-04-16 20:40:57.820102] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:06.894 passed 00:03:06.894 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:06.894 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:06.894 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:06.894 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:06.894 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:06.894 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:06.894 Test: test_cmd_child_request ...passed 00:03:06.894 Test: test_nvme_ns_cmd_readv ...passed 00:03:06.894 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:06.894 Test: test_nvme_ns_cmd_writev ...passed 00:03:06.894 Test: test_nvme_ns_cmd_write_with_md ...[2024-04-16 20:40:57.820509] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 288:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:06.894 passed 00:03:06.894 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:06.894 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:06.894 Test: test_nvme_ns_cmd_comparev ...passed 00:03:06.894 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:06.894 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:06.894 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:06.894 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:06.894 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:06.894 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:06.894 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-16 20:40:57.820661] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:06.894 [2024-04-16 20:40:57.820685] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:06.894 passed 00:03:06.894 Test: test_nvme_ns_cmd_verify ...passed 00:03:06.894 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:06.894 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:06.894 00:03:06.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.894 suites 1 1 n/a 0 0 00:03:06.894 tests 32 32 32 0 0 00:03:06.894 asserts 550 550 550 0 n/a 00:03:06.894 00:03:06.894 Elapsed time = 0.000 seconds 00:03:06.894 20:40:57 -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:06.894 00:03:06.894 00:03:06.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.894 http://cunit.sourceforge.net/ 00:03:06.894 00:03:06.894 00:03:06.894 
Suite: nvme_ns_cmd 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:06.894 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:06.894 00:03:06.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.894 suites 1 1 n/a 0 0 00:03:06.894 tests 12 12 12 0 0 00:03:06.894 asserts 123 123 123 0 n/a 00:03:06.894 00:03:06.894 Elapsed time = 0.000 seconds 00:03:06.894 20:40:57 -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:06.894 00:03:06.894 00:03:06.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.894 http://cunit.sourceforge.net/ 00:03:06.894 00:03:06.894 00:03:06.894 Suite: nvme_qpair 00:03:06.894 Test: test3 ...passed 00:03:06.894 Test: test_ctrlr_failed ...passed 00:03:06.894 Test: struct_packing ...passed 00:03:06.894 Test: test_nvme_qpair_process_completions ...[2024-04-16 20:40:57.837206] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:06.894 [2024-04-16 20:40:57.837408] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:06.894 [2024-04-16 20:40:57.837486] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:06.894 passed 00:03:06.894 Test: test_nvme_completion_is_retry ...passed 00:03:06.894 Test: test_get_status_string ...passed 00:03:06.894 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:06.894 Test: test_nvme_qpair_submit_request ...passed 00:03:06.894 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:06.894 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:06.894 Test: test_nvme_qpair_init_deinit ...[2024-04-16 20:40:57.837505] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:06.894 passed 00:03:06.894 Test: test_nvme_get_sgl_print_info ...passed 00:03:06.894 00:03:06.894 [2024-04-16 20:40:57.837548] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:06.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.894 suites 1 1 n/a 0 0 00:03:06.894 tests 12 12 12 0 0 00:03:06.894 asserts 154 154 154 0 n/a 00:03:06.894 00:03:06.894 Elapsed time = 0.000 seconds 00:03:06.894 20:40:57 -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:06.894 00:03:06.894 00:03:06.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.894 
http://cunit.sourceforge.net/ 00:03:06.894 00:03:06.894 00:03:06.894 Suite: nvme_pcie 00:03:06.894 Test: test_prp_list_append ...[2024-04-16 20:40:57.845815] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:06.894 [2024-04-16 20:40:57.846298] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:06.894 [2024-04-16 20:40:57.846357] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:06.894 [2024-04-16 20:40:57.846533] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:06.894 [2024-04-16 20:40:57.846637] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:06.894 passed 00:03:06.894 Test: test_nvme_pcie_hotplug_monitor ...passed 00:03:06.894 Test: test_shadow_doorbell_update ...passed 00:03:06.894 Test: test_build_contig_hw_sgl_request ...passed 00:03:06.894 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:06.894 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:06.894 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:06.894 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-04-16 20:40:57.846920] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:06.894 passed 00:03:06.894 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:06.894 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:06.894 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:06.894 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-04-16 20:40:57.847006] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
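The PRP failures above follow from the NVMe rule that PRP1 may begin mid-page while every later entry must be page-aligned, so a transfer needs roughly ceil((virt_addr % page_size + len) / page_size) entries. A back-of-the-envelope sketch of that accounting (spec arithmetic only, not SPDK's internal code):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

static uint32_t
prp_entries_needed(uint64_t virt_addr, uint32_t len)
{
        /* PRP1 may start mid-page; every later entry must be page-aligned. */
        uint64_t offset = virt_addr & (PAGE_SIZE - 1);

        return (uint32_t)((offset + len + PAGE_SIZE - 1) / PAGE_SIZE);
}

int
main(void)
{
        /* 0x100001, as in the test above, is not even dword-aligned, so it
         * is rejected before any PRP entries are counted at all. */
        printf("%u entries\n", prp_entries_needed(0x100000, 12288)); /* 3 */
        printf("%u entries\n", prp_entries_needed(0x100800, 8192));  /* 3: straddles pages */
        return 0;
}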
00:03:06.894 passed 00:03:06.894 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-04-16 20:40:57.847056] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:06.894 passed 00:03:06.895 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-04-16 20:40:57.847104] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:06.895 [2024-04-16 20:40:57.847146] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:06.895 passed 00:03:06.895 00:03:06.895 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.895 suites 1 1 n/a 0 0 00:03:06.895 tests 14 14 14 0 0 00:03:06.895 asserts 235 235 235 0 n/a 00:03:06.895 00:03:06.895 Elapsed time = 0.000 seconds 00:03:06.895 20:40:57 -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:06.895 00:03:06.895 00:03:06.895 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.895 http://cunit.sourceforge.net/ 00:03:06.895 00:03:06.895 00:03:06.895 Suite: nvme_ns_cmd 00:03:06.895 Test: nvme_poll_group_create_test ...passed 00:03:06.895 Test: nvme_poll_group_add_remove_test ...passed 00:03:06.895 Test: nvme_poll_group_process_completions ...passed 00:03:06.895 Test: nvme_poll_group_destroy_test ...passed 00:03:06.895 Test: nvme_poll_group_get_free_stats ...passed 00:03:06.895 00:03:06.895 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.895 suites 1 1 n/a 0 0 00:03:06.895 tests 5 5 5 0 0 00:03:06.895 asserts 75 75 75 0 n/a 00:03:06.895 00:03:06.895 Elapsed time = 0.000 seconds 00:03:06.895 20:40:57 -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:06.895 00:03:06.895 00:03:06.895 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.895 http://cunit.sourceforge.net/ 00:03:06.895 00:03:06.895 00:03:06.895 Suite: nvme_quirks 00:03:06.895 Test: test_nvme_quirks_striping ...passed 00:03:06.895 00:03:06.895 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.895 suites 1 1 n/a 0 0 00:03:06.895 tests 1 1 1 0 0 00:03:06.895 asserts 5 5 5 0 n/a 00:03:06.895 00:03:06.895 Elapsed time = 0.000 seconds 00:03:06.895 20:40:57 -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:06.895 00:03:06.895 00:03:06.895 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.895 http://cunit.sourceforge.net/ 00:03:06.895 00:03:06.895 00:03:06.895 Suite: nvme_tcp 00:03:06.895 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:06.895 Test: test_nvme_tcp_build_iovs ...passed 00:03:06.895 Test: test_nvme_tcp_build_sgl_request ...[2024-04-16 20:40:57.870829] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8210371f0, and the iovcnt=16, remaining_size=28672 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:06.895 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:06.895 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:06.895 Test: test_nvme_tcp_req_get ...passed 00:03:06.895 Test: test_nvme_tcp_req_init ...passed 00:03:06.895 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:06.895 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:06.895 Test: test_nvme_tcp_qpair_set_recv_state 
...[2024-04-16 20:40:57.871538] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821038d60 is same with the state(6) to be set 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_alloc_reqs ...passed 00:03:06.895 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:03:06.895 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-16 20:40:57.871638] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210380b0 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871697] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x821038658 00:03:06.895 [2024-04-16 20:40:57.871735] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:06.895 [2024-04-16 20:40:57.871767] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871799] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:06.895 [2024-04-16 20:40:57.871830] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871864] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:06.895 [2024-04-16 20:40:57.871896] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871928] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871960] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.871993] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.872026] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.872057] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210384e8 is same with the state(5) to be set 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-16 20:40:57.872147] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:06.895 [2024-04-16 20:40:57.872182] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:06.895 [2024-04-16 20:40:57.997311] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:06.895 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:03:06.895 Test: test_nvme_tcp_icresp_handle ...[2024-04-16 20:40:57.997531] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x821038a90): PDU Sequence Error 00:03:06.895 [2024-04-16 20:40:57.997587] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:06.895 [2024-04-16 20:40:57.997627] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1516:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:06.895 [2024-04-16 20:40:57.997664] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210380b0 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.997701] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:06.895 [2024-04-16 20:40:57.997735] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210380b0 is same with the state(5) to be set 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_pdu_payload_handle ...[2024-04-16 20:40:57.997772] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210380b0 is same with the state(0) to be set 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-04-16 20:40:57.997817] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x821038a90): PDU Sequence Error 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-04-16 20:40:57.997875] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x821037350 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-16 20:40:57.997955] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x821036ad8, errno=0, rc=0 00:03:06.895 [2024-04-16 20:40:57.997995] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821036ad8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.998048] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821036ad8 is same with the state(5) to be set 00:03:06.895 [2024-04-16 20:40:57.998198] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x821036ad8 (0): No error: 0 00:03:06.895 passed 00:03:06.895 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-16 20:40:57.998267] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x821036ad8 (0): No error: 0 00:03:07.155 passed 00:03:07.155 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:07.155 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:07.155 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-16 20:40:58.058691] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:07.155 [2024-04-16 20:40:58.058761] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:07.155 [2024-04-16 20:40:58.058809] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:07.155 [2024-04-16 20:40:58.058818] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:07.155 [2024-04-16 20:40:58.058857] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:07.155 [2024-04-16 20:40:58.058866] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:07.155 passed 00:03:07.155 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:07.155 00:03:07.155 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.155 suites 1 1 n/a 0 0 00:03:07.155 tests 27 27 27 0 0 00:03:07.155 asserts 624 624 624 0 n/a 00:03:07.155 00:03:07.155 Elapsed time = 0.070 seconds 00:03:07.155 [2024-04-16 20:40:58.058888] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:07.155 [2024-04-16 20:40:58.058897] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:07.155 [2024-04-16 20:40:58.058927] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2290:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82af57180 with addr=192.168.1.78, port=23 00:03:07.155 [2024-04-16 20:40:58.058937] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:07.155 [2024-04-16 20:40:58.058957] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82af57300, and the iovcnt=1, remaining_size=1024 00:03:07.155 [2024-04-16 20:40:58.058967] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:07.155 20:40:58 -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:07.155 00:03:07.155 00:03:07.155 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.155 http://cunit.sourceforge.net/ 00:03:07.155 00:03:07.155 00:03:07.155 Suite: nvme_transport 00:03:07.155 Test: test_nvme_get_transport ...passed 00:03:07.155 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:07.155 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:07.155 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:07.155 Test: test_ctrlr_get_memory_domains ...passed 00:03:07.155 00:03:07.155 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.155 suites 1 1 n/a 0 0 00:03:07.155 tests 5 5 5 0 0 00:03:07.155 asserts 28 28 28 0 n/a 00:03:07.155 00:03:07.155 Elapsed time = 0.000 seconds 00:03:07.155 20:40:58 -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:07.155 00:03:07.155 00:03:07.155 CUnit - A unit testing 
framework for C - Version 2.1-3 00:03:07.155 http://cunit.sourceforge.net/ 00:03:07.155 00:03:07.155 00:03:07.155 Suite: nvme_io_msg 00:03:07.155 Test: test_nvme_io_msg_send ...passed 00:03:07.155 Test: test_nvme_io_msg_process ...passed 00:03:07.155 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:07.155 00:03:07.155 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.155 suites 1 1 n/a 0 0 00:03:07.155 tests 3 3 3 0 0 00:03:07.155 asserts 56 56 56 0 n/a 00:03:07.155 00:03:07.155 Elapsed time = 0.000 seconds 00:03:07.155 20:40:58 -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:07.155 00:03:07.155 00:03:07.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.156 http://cunit.sourceforge.net/ 00:03:07.156 00:03:07.156 00:03:07.156 Suite: nvme_pcie_common 00:03:07.156 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-16 20:40:58.077523] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:07.156 passed 00:03:07.156 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:07.156 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:07.156 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-16 20:40:58.077756] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:07.156 [2024-04-16 20:40:58.077769] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:07.156 passed 00:03:07.156 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-04-16 20:40:58.077779] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:07.156 passed 00:03:07.156 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-16 20:40:58.077878] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:07.156 [2024-04-16 20:40:58.077889] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:07.156 passed 00:03:07.156 00:03:07.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.156 suites 1 1 n/a 0 0 00:03:07.156 tests 6 6 6 0 0 00:03:07.156 asserts 148 148 148 0 n/a 00:03:07.156 00:03:07.156 Elapsed time = 0.000 seconds 00:03:07.156 20:40:58 -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:07.156 00:03:07.156 00:03:07.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.156 http://cunit.sourceforge.net/ 00:03:07.156 00:03:07.156 00:03:07.156 Suite: nvme_fabric 00:03:07.156 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:07.156 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:07.156 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:07.156 Test: test_nvme_fabric_discover_probe ...passed 00:03:07.156 Test: test_nvme_fabric_qpair_connect ...[2024-04-16 20:40:58.085680] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 605:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:07.156 passed 00:03:07.156 00:03:07.156 Run Summary: Type Total 
Ran Passed Failed Inactive 00:03:07.156 suites 1 1 n/a 0 0 00:03:07.156 tests 5 5 5 0 0 00:03:07.156 asserts 60 60 60 0 n/a 00:03:07.156 00:03:07.156 Elapsed time = 0.000 seconds 00:03:07.156 20:40:58 -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:07.156 00:03:07.156 00:03:07.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.156 http://cunit.sourceforge.net/ 00:03:07.156 00:03:07.156 00:03:07.156 Suite: nvme_opal 00:03:07.156 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:07.156 Test: test_opal_add_short_atom_header ...[2024-04-16 20:40:58.093681] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:07.156 passed 00:03:07.156 00:03:07.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.156 suites 1 1 n/a 0 0 00:03:07.156 tests 2 2 2 0 0 00:03:07.156 asserts 22 22 22 0 n/a 00:03:07.156 00:03:07.156 Elapsed time = 0.000 seconds 00:03:07.156 00:03:07.156 real 0m0.538s 00:03:07.156 user 0m0.091s 00:03:07.156 sys 0m0.170s 00:03:07.156 20:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.156 20:40:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.156 ************************************ 00:03:07.156 END TEST unittest_nvme 00:03:07.156 ************************************ 00:03:07.156 20:40:58 -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:07.156 20:40:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:07.156 20:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:07.156 20:40:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.156 ************************************ 00:03:07.156 START TEST unittest_log 00:03:07.156 ************************************ 00:03:07.156 20:40:58 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:07.156 00:03:07.156 00:03:07.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.156 http://cunit.sourceforge.net/ 00:03:07.156 00:03:07.156 00:03:07.156 Suite: log 00:03:07.156 Test: log_test ...[2024-04-16 20:40:58.138772] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:03:07.156 passed 00:03:07.156 Test: deprecation ...[2024-04-16 20:40:58.139234] log_ut.c: 55:log_test: *DEBUG*: log test 00:03:07.156 log dump test: 00:03:07.156 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:07.156 spdk dump test: 00:03:07.156 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:07.156 spdk dump test: 00:03:07.156 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:07.156 00000010 65 20 63 68 61 72 73 e chars 00:03:08.092 passed 00:03:08.092 00:03:08.092 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.092 suites 1 1 n/a 0 0 00:03:08.092 tests 2 2 2 0 0 00:03:08.092 asserts 73 73 73 0 n/a 00:03:08.092 00:03:08.092 Elapsed time = 0.000 seconds 00:03:08.092 00:03:08.092 real 0m1.073s 00:03:08.092 user 0m0.001s 00:03:08.092 sys 0m0.008s 00:03:08.092 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.092 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.092 ************************************ 00:03:08.092 END TEST unittest_log 00:03:08.092 ************************************ 00:03:08.353 20:40:59 -- unit/unittest.sh@248 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 
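Note on the log_ut dump rows a few entries above: each row is an eight-digit hex offset, up to sixteen byte values, then the printable ASCII for that row, which is why the 23-byte "spdk dump 16 more chars" payload wraps onto a second row at offset 00000010 ("16 mor" / "e chars"). The following is a minimal sketch of a formatter that produces rows of that shape; it is illustrative only, not SPDK's actual dump helper, and the hex_dump name and layout details are assumptions.

    #include <ctype.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative dump formatter matching the row shape seen in log_ut:
     * offset, up to 16 hex bytes, then the printable ASCII for the row.
     * A sketch only -- not SPDK's implementation. */
    static void hex_dump(const void *buf, size_t len)
    {
        const unsigned char *p = buf;

        for (size_t off = 0; off < len; off += 16) {
            size_t n = len - off < 16 ? len - off : 16;

            printf("%08zx ", off);
            for (size_t i = 0; i < n; i++)
                printf("%02x ", p[off + i]);
            for (size_t i = 0; i < n; i++)
                putchar(isprint(p[off + i]) ? p[off + i] : '.');
            putchar('\n');
        }
    }

    int main(void)
    {
        /* Reproduces the two-row wrap shown above for a 23-byte buffer. */
        hex_dump("spdk dump 16 more chars", 23);
        return 0;
    }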
00:03:08.353 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.353 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.353 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.353 ************************************ 00:03:08.353 START TEST unittest_lvol 00:03:08.353 ************************************ 00:03:08.353 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:08.353 00:03:08.353 00:03:08.353 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.353 http://cunit.sourceforge.net/ 00:03:08.353 00:03:08.353 00:03:08.353 Suite: lvol 00:03:08.353 Test: lvs_init_unload_success ...[2024-04-16 20:40:59.264212] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:08.353 passed 00:03:08.353 Test: lvs_init_destroy_success ...[2024-04-16 20:40:59.264682] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:08.353 passed 00:03:08.353 Test: lvs_init_opts_success ...passed 00:03:08.353 Test: lvs_unload_lvs_is_null_fail ...[2024-04-16 20:40:59.264764] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:08.353 passed 00:03:08.353 Test: lvs_names ...[2024-04-16 20:40:59.264808] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:08.353 [2024-04-16 20:40:59.264841] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:03:08.353 [2024-04-16 20:40:59.264897] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:08.353 passed 00:03:08.353 Test: lvol_create_destroy_success ...passed 00:03:08.353 Test: lvol_create_fail ...[2024-04-16 20:40:59.265066] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:08.353 [2024-04-16 20:40:59.265112] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:08.353 passed 00:03:08.353 Test: lvol_destroy_fail ...[2024-04-16 20:40:59.265227] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:08.353 passed 00:03:08.353 Test: lvol_close ...[2024-04-16 20:40:59.265294] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:08.353 [2024-04-16 20:40:59.265328] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:08.353 passed 00:03:08.353 Test: lvol_resize ...passed 00:03:08.353 Test: lvol_set_read_only ...passed 00:03:08.353 Test: test_lvs_load ...[2024-04-16 20:40:59.265485] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:08.353 [2024-04-16 20:40:59.265524] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:08.353 passed 00:03:08.353 Test: lvols_load ...[2024-04-16 20:40:59.265583] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:08.353 [2024-04-16 20:40:59.265658] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:08.353 passed 
00:03:08.353 Test: lvol_open ...passed 00:03:08.353 Test: lvol_snapshot ...passed 00:03:08.353 Test: lvol_snapshot_fail ...[2024-04-16 20:40:59.265871] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:08.353 passed 00:03:08.353 Test: lvol_clone ...passed 00:03:08.353 Test: lvol_clone_fail ...[2024-04-16 20:40:59.265987] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:08.353 passed 00:03:08.353 Test: lvol_iter_clones ...passed 00:03:08.353 Test: lvol_refcnt ...[2024-04-16 20:40:59.266127] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol a1c45ac0-fc31-11ee-80f8-ef3e42bb1492 because it is still open 00:03:08.353 passed 00:03:08.353 Test: lvol_names ...[2024-04-16 20:40:59.266186] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:08.353 [2024-04-16 20:40:59.266224] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:08.353 passed 00:03:08.353 Test: lvol_create_thin_provisioned ...[2024-04-16 20:40:59.266285] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:08.353 passed 00:03:08.353 Test: lvol_rename ...[2024-04-16 20:40:59.266374] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:08.353 [2024-04-16 20:40:59.266407] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:08.353 passed 00:03:08.353 Test: lvs_rename ...passed 00:03:08.353 Test: lvol_inflate ...[2024-04-16 20:40:59.266456] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:08.353 passed 00:03:08.353 Test: lvol_decouple_parent ...[2024-04-16 20:40:59.266507] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:08.353 [2024-04-16 20:40:59.266549] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:08.353 passed 00:03:08.353 Test: lvol_get_xattr ...passed 00:03:08.353 Test: lvol_esnap_reload ...passed 00:03:08.353 Test: lvol_esnap_create_bad_args ...[2024-04-16 20:40:59.266645] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:08.353 [2024-04-16 20:40:59.266668] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:03:08.353 [2024-04-16 20:40:59.266692] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:08.353 passed 00:03:08.353 Test: lvol_esnap_create_delete ...[2024-04-16 20:40:59.266722] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:08.354 [2024-04-16 20:40:59.266757] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:08.354 passed 00:03:08.354 Test: lvol_esnap_load_esnaps ...passed 00:03:08.354 Test: lvol_esnap_missing ...[2024-04-16 20:40:59.266841] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:08.354 [2024-04-16 20:40:59.266894] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:08.354 [2024-04-16 20:40:59.266916] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:08.354 passed 00:03:08.354 Test: lvol_esnap_hotplug ... 00:03:08.354 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:08.354 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:08.354 [2024-04-16 20:40:59.267051] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol a1c47ecf-fc31-11ee-80f8-ef3e42bb1492: failed to create esnap bs_dev: error -12 00:03:08.354 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:08.354 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:08.354 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:08.354 [2024-04-16 20:40:59.267146] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol a1c48243-fc31-11ee-80f8-ef3e42bb1492: failed to create esnap bs_dev: error -12 00:03:08.354 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:08.354 [2024-04-16 20:40:59.267199] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol a1c4849d-fc31-11ee-80f8-ef3e42bb1492: failed to create esnap bs_dev: error -12 00:03:08.354 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:08.354 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:08.354 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:08.354 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:08.354 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:08.354 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:08.354 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:08.354 passed 00:03:08.354 Test: lvol_get_by ...passed 00:03:08.354 00:03:08.354 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.354 suites 1 1 n/a 0 0 00:03:08.354 tests 34 34 34 0 0 00:03:08.354 asserts 1439 1439 1439 0 n/a 00:03:08.354 00:03:08.354 Elapsed time = 0.008 seconds 00:03:08.354 00:03:08.354 real 0m0.015s 00:03:08.354 user 0m0.014s 00:03:08.354 sys 0m0.013s 
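Each *_ut binary in this run is a CUnit basic-mode runner: the "Run Summary" tables (suites/tests/asserts) are printed by CUnit itself, and the real/user/sys lines are shell timing around the binary taken by the run_test wrapper. Below is a hedged skeleton of how such a test binary is assembled; the suite and test names are illustrative stand-ins, not SPDK's sources.

    #include <CUnit/Basic.h>

    /* Skeleton of the CUnit scaffolding behind each *_ut binary above. */
    static void lvs_init_unload_success(void)
    {
        /* A real SPDK test asserts on specific error paths, e.g. that
         * unloading an lvol store with open lvols is rejected. */
        CU_ASSERT(1 == 1);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        suite = CU_add_suite("lvol", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "lvs_init_unload_success",
                        lvs_init_unload_success) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* emits the Run Summary table seen above */

        unsigned int failures = CU_get_number_of_failures();

        CU_cleanup_registry();
        return failures != 0;
    }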
00:03:08.354 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.354 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.354 ************************************ 00:03:08.354 END TEST unittest_lvol 00:03:08.354 ************************************ 00:03:08.354 20:40:59 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:08.354 20:40:59 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:08.354 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.354 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.354 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.354 ************************************ 00:03:08.354 START TEST unittest_nvme_rdma 00:03:08.354 ************************************ 00:03:08.354 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:08.354 00:03:08.354 00:03:08.354 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.354 http://cunit.sourceforge.net/ 00:03:08.354 00:03:08.354 00:03:08.354 Suite: nvme_rdma 00:03:08.354 Test: test_nvme_rdma_build_sgl_request ...[2024-04-16 20:40:59.329577] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:08.354 [2024-04-16 20:40:59.329968] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1629:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:08.354 Test: test_nvme_rdma_build_contig_request ...[2024-04-16 20:40:59.330018] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1685:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:08.354 Test: test_nvme_rdma_create_reqs ...[2024-04-16 20:40:59.330070] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1566:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:08.354 [2024-04-16 20:40:59.330108] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_create_rsps ...[2024-04-16 20:40:59.330218] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-16 20:40:59.330266] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_poller_create ...[2024-04-16 20:40:59.330296] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-04-16 20:40:59.330364] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_ctrlr_construct ...passed 00:03:08.354 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:08.354 Test: test_nvme_rdma_req_init ...passed 00:03:08.354 Test: test_nvme_rdma_validate_cm_event ...[2024-04-16 20:40:59.330514] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_qpair_init ...[2024-04-16 20:40:59.330538] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:08.354 Test: test_nvme_rdma_memory_domain ...[2024-04-16 20:40:59.330626] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:03:08.354 passed 00:03:08.354 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:08.354 Test: test_rdma_get_memory_translation ...[2024-04-16 20:40:59.330685] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:08.354 [2024-04-16 20:40:59.330714] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:08.354 passed 00:03:08.354 Test: test_get_rdma_qpair_from_wc ...passed 00:03:08.354 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:08.354 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-16 20:40:59.330807] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:08.354 [2024-04-16 20:40:59.330839] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:08.354 passed 00:03:08.354 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-16 20:40:59.330897] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:08.354 [2024-04-16 20:40:59.330928] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:08.354 [2024-04-16 20:40:59.330951] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820ae1e30 on poll group 0x82d9e8000 00:03:08.354 [2024-04-16 20:40:59.330973] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:03:08.354 [2024-04-16 20:40:59.330993] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:08.354 [2024-04-16 20:40:59.331013] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820ae1e30 on poll group 0x82d9e8000 00:03:08.354 [2024-04-16 20:40:59.331105] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:08.354 passed 00:03:08.354 00:03:08.354 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.354 suites 1 1 n/a 0 0 00:03:08.354 tests 22 22 22 0 0 00:03:08.354 asserts 412 412 412 0 n/a 00:03:08.354 00:03:08.354 Elapsed time = 0.000 seconds 00:03:08.354 00:03:08.354 real 0m0.011s 00:03:08.354 user 0m0.011s 00:03:08.354 sys 0m0.001s 00:03:08.354 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.354 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.354 ************************************ 00:03:08.354 END TEST unittest_nvme_rdma 00:03:08.354 ************************************ 00:03:08.354 20:40:59 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:08.354 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.354 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.354 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.354 ************************************ 00:03:08.354 START TEST unittest_nvmf_transport 00:03:08.354 ************************************ 00:03:08.354 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:08.354 00:03:08.354 00:03:08.354 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.354 http://cunit.sourceforge.net/ 00:03:08.354 00:03:08.354 00:03:08.354 Suite: nvmf 00:03:08.354 Test: test_spdk_nvmf_transport_create ...[2024-04-16 20:40:59.387492] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:03:08.354 [2024-04-16 20:40:59.387831] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:08.354 passed 00:03:08.354 Test: test_nvmf_transport_poll_group_create ...passed 00:03:08.355 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-16 20:40:59.387868] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 272:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:08.355 [2024-04-16 20:40:59.387920] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 255:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:08.355 [2024-04-16 20:40:59.387965] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:03:08.355 [2024-04-16 20:40:59.387983] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:08.355 [2024-04-16 20:40:59.387998] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:08.355 passed 00:03:08.355 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:08.355 00:03:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.355 suites 1 1 n/a 0 0 00:03:08.355 tests 4 4 4 0 0 00:03:08.355 asserts 49 49 49 0 n/a 00:03:08.355 00:03:08.355 Elapsed time = 0.000 seconds 00:03:08.355 00:03:08.355 real 0m0.009s 00:03:08.355 user 0m0.000s 00:03:08.355 sys 0m0.008s 00:03:08.355 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.355 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.355 ************************************ 00:03:08.355 END TEST unittest_nvmf_transport 00:03:08.355 ************************************ 00:03:08.355 20:40:59 -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:08.355 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.355 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.355 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.355 ************************************ 00:03:08.355 START TEST unittest_rdma 00:03:08.355 ************************************ 00:03:08.355 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:08.355 00:03:08.355 00:03:08.355 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.355 http://cunit.sourceforge.net/ 00:03:08.355 00:03:08.355 00:03:08.355 Suite: rdma_common 00:03:08.355 Test: test_spdk_rdma_pd ...[2024-04-16 20:40:59.437121] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:08.355 [2024-04-16 20:40:59.437525] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:08.355 passed 00:03:08.355 00:03:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.355 suites 1 1 n/a 0 0 00:03:08.355 tests 1 1 1 0 0 00:03:08.355 asserts 31 31 31 0 n/a 00:03:08.355 00:03:08.355 Elapsed time = 0.000 seconds 00:03:08.355 00:03:08.355 real 0m0.009s 00:03:08.355 user 0m0.000s 00:03:08.355 sys 0m0.013s 00:03:08.355 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.355 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.355 ************************************ 00:03:08.355 END TEST unittest_rdma 00:03:08.355 ************************************ 00:03:08.619 20:40:59 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:08.619 20:40:59 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:03:08.619 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.619 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.619 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.619 ************************************ 00:03:08.619 START TEST unittest_nvmf 00:03:08.619 ************************************ 00:03:08.619 20:40:59 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:03:08.619 20:40:59 -- unit/unittest.sh@106 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:08.619 00:03:08.619 00:03:08.619 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.619 http://cunit.sourceforge.net/ 00:03:08.619 00:03:08.619 00:03:08.619 Suite: nvmf 00:03:08.619 Test: test_get_log_page ...[2024-04-16 20:40:59.501997] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:08.619 passed 00:03:08.619 Test: test_process_fabrics_cmd ...passed 00:03:08.619 Test: test_connect ...[2024-04-16 20:40:59.502639] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:08.619 [2024-04-16 20:40:59.502732] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:08.619 [2024-04-16 20:40:59.502800] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:08.619 [2024-04-16 20:40:59.502840] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:08.619 [2024-04-16 20:40:59.502878] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:08.619 [2024-04-16 20:40:59.502917] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 787:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:08.619 [2024-04-16 20:40:59.502954] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 793:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:08.619 [2024-04-16 20:40:59.502990] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:03:08.619 [2024-04-16 20:40:59.503037] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:08.619 [2024-04-16 20:40:59.503080] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:08.619 [2024-04-16 20:40:59.503148] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:08.619 [2024-04-16 20:40:59.503207] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 600:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:08.619 [2024-04-16 20:40:59.503247] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 607:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:08.619 [2024-04-16 20:40:59.503290] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 624:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:08.619 [2024-04-16 20:40:59.503340] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:03:08.619 passed 00:03:08.619 Test: test_get_ns_id_desc_list ...[2024-04-16 20:40:59.503393] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group 0x0) 00:03:08.619 passed 00:03:08.619 Test: test_identify_ns ...[2024-04-16 20:40:59.503524] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:08.619 [2024-04-16 20:40:59.503615] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:08.619 [2024-04-16 20:40:59.503683] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:08.619 passed 00:03:08.619 Test: test_identify_ns_iocs_specific ...[2024-04-16 20:40:59.503748] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:08.619 [2024-04-16 20:40:59.503863] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:08.619 passed 00:03:08.619 Test: test_reservation_write_exclusive ...passed 00:03:08.619 Test: test_reservation_exclusive_access ...passed 00:03:08.619 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:08.619 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:08.619 Test: test_reservation_notification_log_page ...passed 00:03:08.619 Test: test_get_dif_ctx ...passed 00:03:08.619 Test: test_set_get_features ...[2024-04-16 20:40:59.504103] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:08.619 [2024-04-16 20:40:59.504134] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:08.619 [2024-04-16 20:40:59.504160] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:08.619 [2024-04-16 20:40:59.504188] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:08.619 passed 00:03:08.619 Test: test_identify_ctrlr ...passed 
00:03:08.619 Test: test_identify_ctrlr_iocs_specific ...passed 00:03:08.619 Test: test_custom_admin_cmd ...passed 00:03:08.619 Test: test_fused_compare_and_write ...[2024-04-16 20:40:59.504391] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:08.619 [2024-04-16 20:40:59.504422] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:08.619 passed 00:03:08.619 Test: test_multi_async_event_reqs ...passed 00:03:08.619 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:08.619 Test: test_get_ana_log_page_multi_ns_per_anagrp ...[2024-04-16 20:40:59.504452] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:08.619 passed 00:03:08.619 Test: test_multi_async_events ...passed 00:03:08.619 Test: test_rae ...passed 00:03:08.619 Test: test_nvmf_ctrlr_create_destruct ...passed 00:03:08.619 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:08.619 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-16 20:40:59.504627] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:03:08.619 passed 00:03:08.619 Test: test_zcopy_read ...passed 00:03:08.619 Test: test_zcopy_write ...passed 00:03:08.619 Test: test_nvmf_property_set ...passed 00:03:08.619 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-16 20:40:59.504703] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:08.619 passed 00:03:08.619 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-16 20:40:59.504732] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:08.619 [2024-04-16 20:40:59.504764] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:08.619 [2024-04-16 20:40:59.504791] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:08.619 passed[2024-04-16 20:40:59.504819] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:08.619 00:03:08.619 00:03:08.619 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.619 suites 1 1 n/a 0 0 00:03:08.619 tests 30 30 30 0 0 00:03:08.619 asserts 885 885 885 0 n/a 00:03:08.619 00:03:08.619 Elapsed time = 0.008 seconds 00:03:08.619 20:40:59 -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:08.619 00:03:08.619 00:03:08.619 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.619 http://cunit.sourceforge.net/ 00:03:08.619 00:03:08.619 00:03:08.619 Suite: nvmf 00:03:08.619 Test: test_get_rw_params ...passed 00:03:08.619 Test: test_lba_in_range ...passed 00:03:08.619 Test: test_get_dif_ctx ...passed 00:03:08.619 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:08.619 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-16 20:40:59.516971] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:08.619 [2024-04-16 20:40:59.517370] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:08.619 [2024-04-16 20:40:59.517416] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 451:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:08.619 passed 00:03:08.619 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-16 20:40:59.517446] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:08.619 passed 00:03:08.619 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-16 20:40:59.517467] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 954:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:08.619 [2024-04-16 20:40:59.517495] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:08.619 [2024-04-16 20:40:59.517516] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 397:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:08.619 [2024-04-16 20:40:59.517538] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:08.619 passed 00:03:08.619 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:08.620 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-04-16 20:40:59.517557] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:08.620 passed 00:03:08.620 00:03:08.620 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.620 suites 1 1 n/a 0 0 00:03:08.620 tests 9 9 9 0 0 00:03:08.620 asserts 157 157 157 0 n/a 00:03:08.620 00:03:08.620 Elapsed time = 0.000 seconds 00:03:08.620 20:40:59 -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:08.620 00:03:08.620 00:03:08.620 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.620 http://cunit.sourceforge.net/ 00:03:08.620 00:03:08.620 00:03:08.620 Suite: nvmf 00:03:08.620 Test: test_discovery_log ...passed 00:03:08.620 Test: test_discovery_log_with_filters ...passed 00:03:08.620 00:03:08.620 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.620 suites 1 1 n/a 0 0 00:03:08.620 tests 2 2 2 0 0 00:03:08.620 asserts 238 238 238 0 n/a 00:03:08.620 00:03:08.620 Elapsed time = 0.000 seconds 00:03:08.620 20:40:59 -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:08.620 00:03:08.620 00:03:08.620 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.620 http://cunit.sourceforge.net/ 00:03:08.620 00:03:08.620 00:03:08.620 Suite: nvmf 00:03:08.620 Test: nvmf_test_create_subsystem ...[2024-04-16 20:40:59.531442] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:08.620 [2024-04-16 20:40:59.531621] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 
00:03:08.620 [2024-04-16 20:40:59.531637] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:08.620 [2024-04-16 20:40:59.531646] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:08.620 [2024-04-16 20:40:59.531654] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:08.620 [2024-04-16 20:40:59.531663] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:03:08.620 [2024-04-16 20:40:59.531680] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:08.620 [2024-04-16 20:40:59.531702] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:08.620 [2024-04-16 20:40:59.531713] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:08.620 passed 00:03:08.620 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-16 20:40:59.531722] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:08.620 [2024-04-16 20:40:59.531730] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:08.620 [2024-04-16 20:40:59.531769] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:08.620 passed 00:03:08.620 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:08.620 Test: test_reservation_register ...[2024-04-16 20:40:59.531778] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:08.620 [2024-04-16 20:40:59.531813] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 [2024-04-16 20:40:59.531826] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:08.620 passed 00:03:08.620 Test: test_reservation_register_with_ptpl ...passed 00:03:08.620 Test: test_reservation_acquire_preempt_1 ...passed 00:03:08.620 Test: test_reservation_acquire_release_with_ptpl ...[2024-04-16 20:40:59.531979] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 passed 
00:03:08.620 Test: test_reservation_release ...[2024-04-16 20:40:59.532086] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 passed 00:03:08.620 Test: test_reservation_unregister_notification ...passed 00:03:08.620 Test: test_reservation_release_notification ...[2024-04-16 20:40:59.532102] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 passed 00:03:08.620 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:08.620 Test: test_reservation_clear_notification ...[2024-04-16 20:40:59.532115] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 [2024-04-16 20:40:59.532129] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 [2024-04-16 20:40:59.532142] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 passed 00:03:08.620 Test: test_reservation_preempt_notification ...[2024-04-16 20:40:59.532166] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:08.620 passed 00:03:08.620 Test: test_spdk_nvmf_ns_event ...passed 00:03:08.620 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:08.620 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:08.620 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-16 20:40:59.532248] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 261:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_ns_reservation_report ...[2024-04-16 20:40:59.532266] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:03:08.620 [2024-04-16 20:40:59.532283] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3147:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_nqn_is_valid ...[2024-04-16 20:40:59.532311] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_ns_reservation_restore ...[2024-04-16 20:40:59.532322] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a1ecf893-fc31-11ee-80f8-ef3e42bb149": uuid is not the correct length 00:03:08.620 [2024-04-16 20:40:59.532334] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 
00:03:08.620 [2024-04-16 20:40:59.532365] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_subsystem_state_change ...passed 00:03:08.620 Test: test_nvmf_reservation_custom_ops ...passed 00:03:08.620 00:03:08.620 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.620 suites 1 1 n/a 0 0 00:03:08.620 tests 22 22 22 0 0 00:03:08.620 asserts 405 405 405 0 n/a 00:03:08.620 00:03:08.620 Elapsed time = 0.000 seconds 00:03:08.620 20:40:59 -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:08.620 00:03:08.620 00:03:08.620 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.620 http://cunit.sourceforge.net/ 00:03:08.620 00:03:08.620 00:03:08.620 Suite: nvmf 00:03:08.620 Test: test_nvmf_tcp_create ...passed 00:03:08.620 Test: test_nvmf_tcp_destroy ...[2024-04-16 20:40:59.548484] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:08.620 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:08.620 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:08.620 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:08.620 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:08.620 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:03:08.620 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:08.620 Test: test_nvmf_tcp_icreq_handle ...[2024-04-16 20:40:59.562506] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.620 [2024-04-16 20:40:59.562537] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.620 [2024-04-16 20:40:59.562548] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.620 [2024-04-16 20:40:59.562558] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.620 [2024-04-16 20:40:59.562566] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.620 [2024-04-16 20:40:59.562596] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:08.620 [2024-04-16 20:40:59.562606] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.620 [2024-04-16 20:40:59.562615] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02358 is same with the state(5) to be set 00:03:08.620 passed 00:03:08.620 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:08.620 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:08.620 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-16 20:40:59.562624] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:08.621 [2024-04-16 20:40:59.562632] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02358 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562640] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562648] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02358 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562658] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562665] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02358 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562682] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2485:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:08.621 [2024-04-16 20:40:59.562691] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562699] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02358 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562710] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820f01bd0 00:03:08.621 [2024-04-16 20:40:59.562719] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562727] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562736] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820f02440 00:03:08.621 [2024-04-16 20:40:59.562745] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562753] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562772] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:08.621 [2024-04-16 20:40:59.562780] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562788] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562797] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:08.621 [2024-04-16 20:40:59.562806] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 passed 00:03:08.621 Test: 
test_nvmf_tcp_tls_add_remove_credentials ...[2024-04-16 20:40:59.562814] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562828] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562836] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562845] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562853] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562862] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562870] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562879] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562887] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562896] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562904] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 [2024-04-16 20:40:59.562913] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:08.621 [2024-04-16 20:40:59.562921] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f02440 is same with the state(5) to be set 00:03:08.621 passed 00:03:08.621 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:03:08.621 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-16 20:40:59.568337] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:08.621 [2024-04-16 20:40:59.568362] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:03:08.621 passed 00:03:08.621 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-16 20:40:59.568477] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:08.621 [2024-04-16 20:40:59.568487] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 
00:03:08.621 passed 00:03:08.621 00:03:08.621 [2024-04-16 20:40:59.568535] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:08.621 [2024-04-16 20:40:59.568542] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:08.621 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.621 suites 1 1 n/a 0 0 00:03:08.621 tests 17 17 17 0 0 00:03:08.621 asserts 222 222 222 0 n/a 00:03:08.621 00:03:08.621 Elapsed time = 0.031 seconds 00:03:08.621 20:40:59 -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:08.621 00:03:08.621 00:03:08.621 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.621 http://cunit.sourceforge.net/ 00:03:08.621 00:03:08.621 00:03:08.621 Suite: nvmf 00:03:08.621 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:08.621 00:03:08.621 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.621 suites 1 1 n/a 0 0 00:03:08.621 tests 1 1 1 0 0 00:03:08.621 asserts 17 17 17 0 n/a 00:03:08.621 00:03:08.621 Elapsed time = 0.000 seconds 00:03:08.621 00:03:08.621 real 0m0.085s 00:03:08.621 user 0m0.050s 00:03:08.621 sys 0m0.034s 00:03:08.621 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.621 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.621 ************************************ 00:03:08.621 END TEST unittest_nvmf 00:03:08.621 ************************************ 00:03:08.621 20:40:59 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:08.621 20:40:59 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:08.621 20:40:59 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:08.621 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.621 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.621 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.621 ************************************ 00:03:08.621 START TEST unittest_nvmf_rdma 00:03:08.621 ************************************ 00:03:08.621 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:08.621 00:03:08.621 00:03:08.621 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.621 http://cunit.sourceforge.net/ 00:03:08.621 00:03:08.621 00:03:08.621 Suite: nvmf 00:03:08.621 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-16 20:40:59.623961] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1917:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:08.621 [2024-04-16 20:40:59.624342] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:08.621 [2024-04-16 20:40:59.624387] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:08.621 passed 00:03:08.621 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:08.621 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:08.621 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:08.621 Test: 
test_nvmf_rdma_opts_init ...passed 00:03:08.621 Test: test_nvmf_rdma_request_free_data ...passed 00:03:08.621 Test: test_nvmf_rdma_update_ibv_state ...[2024-04-16 20:40:59.624788] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:03:08.621 passed 00:03:08.621 Test: test_nvmf_rdma_resources_create ...[2024-04-16 20:40:59.624821] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:03:08.621 passed 00:03:08.621 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:08.621 Test: test_nvmf_rdma_resize_cq ...[2024-04-16 20:40:59.625956] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1007:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:08.622 Using CQ of insufficient size may lead to CQ overrun 00:03:08.622 [2024-04-16 20:40:59.625986] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1012:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:08.622 passed[2024-04-16 20:40:59.626068] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:08.622 00:03:08.622 00:03:08.622 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.622 suites 1 1 n/a 0 0 00:03:08.622 tests 10 10 10 0 0 00:03:08.622 asserts 584 584 584 0 n/a 00:03:08.622 00:03:08.622 Elapsed time = 0.008 seconds 00:03:08.622 00:03:08.622 real 0m0.011s 00:03:08.622 user 0m0.000s 00:03:08.622 sys 0m0.015s 00:03:08.622 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.622 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.622 ************************************ 00:03:08.622 END TEST unittest_nvmf_rdma 00:03:08.622 ************************************ 00:03:08.622 20:40:59 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:08.622 20:40:59 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:03:08.622 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.622 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.622 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.622 ************************************ 00:03:08.622 START TEST unittest_scsi 00:03:08.622 ************************************ 00:03:08.622 20:40:59 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:03:08.622 20:40:59 -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:08.622 00:03:08.622 00:03:08.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.622 http://cunit.sourceforge.net/ 00:03:08.622 00:03:08.622 00:03:08.622 Suite: dev_suite 00:03:08.622 Test: dev_destruct_null_dev ...passed 00:03:08.622 Test: dev_destruct_zero_luns ...passed 00:03:08.622 Test: dev_destruct_null_lun ...passed 00:03:08.622 Test: dev_destruct_success ...passed 00:03:08.622 Test: dev_construct_num_luns_zero ...[2024-04-16 20:40:59.672449] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:08.622 passed 00:03:08.622 Test: dev_construct_no_lun_zero ...[2024-04-16 20:40:59.672882] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:08.622 passed 00:03:08.622 
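The dev_construct_* cases in the dev_suite below probe argument validation in spdk_scsi_dev_construct_ext: zero LUNs, a missing LUN 0, and an over-long device name. A simplified sketch of that kind of check, assuming a hypothetical dev_construct_args_valid helper and the 255-character limit quoted in the error output — this is not SPDK's actual implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_DEV_NAME_LEN 255   /* the "maximum allowed length 255" quoted below */

    static bool dev_construct_args_valid(const char *name, size_t num_luns)
    {
        if (num_luns == 0) {
            fprintf(stderr, "device %s: no LUNs specified\n", name);
            return false;
        }
        if (strlen(name) > MAX_DEV_NAME_LEN) {
            fprintf(stderr, "device %s: name longer than maximum allowed length %d\n",
                    name, MAX_DEV_NAME_LEN);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* mirrors dev_construct_num_luns_zero and dev_construct_name_too_long */
        printf("%d\n", dev_construct_args_valid("Name", 0));  /* 0: rejected */
        printf("%d\n", dev_construct_args_valid("Name", 8));  /* 1: accepted */
        return 0;
    }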
Test: dev_construct_null_lun ...passed 00:03:08.622 Test: dev_construct_name_too_long ...passed 00:03:08.622 Test: dev_construct_success ...[2024-04-16 20:40:59.672917] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:08.622 [2024-04-16 20:40:59.672946] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:08.622 passed 00:03:08.622 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:08.622 Test: dev_queue_mgmt_task_success ...passed 00:03:08.622 Test: dev_queue_task_success ...passed 00:03:08.622 Test: dev_stop_success ...passed 00:03:08.622 Test: dev_add_port_max_ports ...[2024-04-16 20:40:59.673044] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:08.622 passed 00:03:08.622 Test: dev_add_port_construct_failure1 ...[2024-04-16 20:40:59.673085] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:08.622 passed 00:03:08.622 Test: dev_add_port_construct_failure2 ...passed 00:03:08.622 Test: dev_add_port_success1 ...passed[2024-04-16 20:40:59.673129] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:08.622 00:03:08.622 Test: dev_add_port_success2 ...passed 00:03:08.622 Test: dev_add_port_success3 ...passed 00:03:08.622 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:08.622 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:08.622 Test: dev_find_port_by_id_success ...passed 00:03:08.622 Test: dev_add_lun_bdev_not_found ...passed 00:03:08.622 Test: dev_add_lun_no_free_lun_id ...passed 00:03:08.622 Test: dev_add_lun_success1 ...passed 00:03:08.622 Test: dev_add_lun_success2 ...passed 00:03:08.622 Test: dev_check_pending_tasks ...[2024-04-16 20:40:59.673565] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:08.622 passed 00:03:08.622 Test: dev_iterate_luns ...passed 00:03:08.622 Test: dev_find_free_lun ...passed 00:03:08.622 00:03:08.622 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.622 suites 1 1 n/a 0 0 00:03:08.622 tests 29 29 29 0 0 00:03:08.622 asserts 97 97 97 0 n/a 00:03:08.622 00:03:08.622 Elapsed time = 0.000 seconds 00:03:08.622 20:40:59 -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:08.622 00:03:08.622 00:03:08.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.622 http://cunit.sourceforge.net/ 00:03:08.622 00:03:08.622 00:03:08.622 Suite: lun_suite 00:03:08.622 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-16 20:40:59.681593] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:08.622 passed 00:03:08.622 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:03:08.622 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:08.622 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:08.622 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:08.622 Test: 
lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:08.622 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:08.622 Test: lun_append_task_null_lun_not_supported ...passed 00:03:08.622 Test: lun_execute_scsi_task_pending ...passed 00:03:08.622 Test: lun_execute_scsi_task_complete ...passed 00:03:08.622 Test: lun_execute_scsi_task_resize ...passed 00:03:08.622 Test: lun_destruct_success ...passed 00:03:08.622 Test: lun_construct_null_ctx ...passed 00:03:08.622 Test: lun_construct_success ...passed 00:03:08.622 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:08.622 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:08.622 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:08.622 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:08.622 00:03:08.622 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.622 suites 1 1 n/a 0 0 00:03:08.622 tests 18 18 18 0 0 00:03:08.622 asserts 153 153 153 0 n/a 00:03:08.622 00:03:08.622 Elapsed time = 0.000 seconds 00:03:08.622 [2024-04-16 20:40:59.681778] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:08.622 [2024-04-16 20:40:59.681812] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:08.622 [2024-04-16 20:40:59.681842] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:08.622 20:40:59 -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:08.622 00:03:08.622 00:03:08.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.622 http://cunit.sourceforge.net/ 00:03:08.622 00:03:08.622 00:03:08.622 Suite: scsi_suite 00:03:08.622 Test: scsi_init ...passed 00:03:08.622 00:03:08.622 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.622 suites 1 1 n/a 0 0 00:03:08.622 tests 1 1 1 0 0 00:03:08.622 asserts 1 1 1 0 n/a 00:03:08.622 00:03:08.622 Elapsed time = 0.000 seconds 00:03:08.622 20:40:59 -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:08.622 00:03:08.622 00:03:08.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.622 http://cunit.sourceforge.net/ 00:03:08.622 00:03:08.622 00:03:08.622 Suite: translation_suite 00:03:08.622 Test: mode_select_6_test ...passed 00:03:08.622 Test: mode_select_6_test2 ...passed 00:03:08.622 Test: mode_sense_6_test ...passed 00:03:08.622 Test: mode_sense_10_test ...passed 00:03:08.622 Test: inquiry_evpd_test ...passed 00:03:08.622 Test: inquiry_standard_test ...passed 00:03:08.622 Test: inquiry_overflow_test ...passed 00:03:08.622 Test: task_complete_test ...passed 00:03:08.622 Test: lba_range_test ...passed 00:03:08.622 Test: xfer_len_test ...[2024-04-16 20:40:59.701780] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:08.622 passed 00:03:08.622 Test: xfer_test ...passed 00:03:08.622 Test: scsi_name_padding_test ...passed 00:03:08.622 Test: get_dif_ctx_test ...passed 00:03:08.622 Test: unmap_split_test ...passed 00:03:08.622 00:03:08.622 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.622 suites 1 1 n/a 0 0 00:03:08.622 tests 14 14 14 0 0 00:03:08.622 asserts 1200 1200 1200 0 n/a 00:03:08.622 00:03:08.622 Elapsed time = 0.000 seconds 00:03:08.622 20:40:59 -- unit/unittest.sh@119 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:08.622 00:03:08.622 00:03:08.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.622 http://cunit.sourceforge.net/ 00:03:08.622 00:03:08.622 00:03:08.622 Suite: reservation_suite 00:03:08.622 Test: test_reservation_register ...[2024-04-16 20:40:59.712286] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.622 passed 00:03:08.622 Test: test_reservation_reserve ...[2024-04-16 20:40:59.712737] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.623 [2024-04-16 20:40:59.712773] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:08.623 passed 00:03:08.623 Test: test_reservation_preempt_non_all_regs ...[2024-04-16 20:40:59.712833] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:08.623 [2024-04-16 20:40:59.712868] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.623 [2024-04-16 20:40:59.712891] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:08.623 passed 00:03:08.623 Test: test_reservation_preempt_all_regs ...[2024-04-16 20:40:59.712935] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.623 passed 00:03:08.623 Test: test_reservation_cmds_conflict ...[2024-04-16 20:40:59.712988] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.623 [2024-04-16 20:40:59.713039] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:08.623 [2024-04-16 20:40:59.713063] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:08.623 [2024-04-16 20:40:59.713100] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:08.623 [2024-04-16 20:40:59.713127] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:08.623 passed 00:03:08.623 Test: test_scsi2_reserve_release ...[2024-04-16 20:40:59.713147] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:08.623 passed 00:03:08.623 Test: test_pr_with_scsi2_reserve_release ...[2024-04-16 20:40:59.713200] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:08.623 passed 00:03:08.623 00:03:08.623 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.623 suites 1 1 n/a 0 0 00:03:08.623 tests 7 7 7 0 0 00:03:08.623 asserts 257 257 257 0 n/a 00:03:08.623 00:03:08.623 Elapsed time = 0.000 seconds 00:03:08.623 00:03:08.623 real 0m0.048s 00:03:08.623 user 0m0.027s 00:03:08.623 sys 0m0.024s 00:03:08.623 20:40:59 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.623 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.623 ************************************ 00:03:08.623 END TEST unittest_scsi 00:03:08.623 ************************************ 00:03:08.893 20:40:59 -- unit/unittest.sh@276 -- # uname -s 00:03:08.893 20:40:59 -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 00:03:08.893 20:40:59 -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:08.893 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.893 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.893 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.893 ************************************ 00:03:08.893 START TEST unittest_thread 00:03:08.893 ************************************ 00:03:08.893 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:08.893 00:03:08.893 00:03:08.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.893 http://cunit.sourceforge.net/ 00:03:08.893 00:03:08.893 00:03:08.893 Suite: io_channel 00:03:08.893 Test: thread_alloc ...passed 00:03:08.893 Test: thread_send_msg ...passed 00:03:08.893 Test: thread_poller ...passed 00:03:08.893 Test: poller_pause ...passed 00:03:08.893 Test: thread_for_each ...passed 00:03:08.893 Test: for_each_channel_remove ...passed 00:03:08.893 Test: for_each_channel_unreg ...[2024-04-16 20:40:59.767231] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x8204446f4 already registered (old:0x82bcde000 new:0x82bcde180) 00:03:08.893 passed 00:03:08.893 Test: thread_name ...passed 00:03:08.893 Test: channel ...[2024-04-16 20:40:59.767941] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x226918 00:03:08.893 passed 00:03:08.893 Test: channel_destroy_races ...passed 00:03:08.893 Test: thread_exit_test ...[2024-04-16 20:40:59.768571] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 630:thread_exit: *ERROR*: thread 0x82bca3a80 got timeout, and move it to the exited state forcefully 00:03:08.893 passed 00:03:08.893 Test: thread_update_stats_test ...passed 00:03:08.893 Test: nested_channel ...passed 00:03:08.893 Test: device_unregister_and_thread_exit_race ...passed 00:03:08.893 Test: cache_closest_timed_poller ...passed 00:03:08.893 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:08.893 Test: io_device_lookup ...passed 00:03:08.893 Test: spdk_spin ...[2024-04-16 20:40:59.769948] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:08.893 [2024-04-16 20:40:59.769969] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x8204446f0 00:03:08.893 [2024-04-16 20:40:59.769982] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:08.893 [2024-04-16 20:40:59.770157] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:08.893 [2024-04-16 20:40:59.770177] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x8204446f0 
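The spdk_spin case below walks the error paths of SPDK's owner-checked spinlocks: locking from a non-SPDK thread, relocking a lock already held, and unlocking or destroying from the wrong thread. A minimal sketch of the owner-tracking idea, assuming plain pthreads and a hypothetical owned_spin type — real SPDK spinlocks differ, and this simplified version is not race-free:

    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>

    struct owned_spin {
        pthread_mutex_t mtx;
        pthread_t owner;    /* thread that currently holds the lock */
        bool held;
    };

    static void owned_spin_lock(struct owned_spin *s)
    {
        /* "Deadlock detected" below: relocking a lock this thread already holds */
        assert(!(s->held && pthread_equal(s->owner, pthread_self())));
        pthread_mutex_lock(&s->mtx);
        s->owner = pthread_self();
        s->held = true;
    }

    static void owned_spin_unlock(struct owned_spin *s)
    {
        /* "Unlock on wrong SPDK thread" below: only the recorded owner may unlock */
        assert(s->held && pthread_equal(s->owner, pthread_self()));
        s->held = false;
        pthread_mutex_unlock(&s->mtx);
    }

    int main(void)
    {
        struct owned_spin s = { .mtx = PTHREAD_MUTEX_INITIALIZER, .held = false };
        owned_spin_lock(&s);
        owned_spin_unlock(&s);
        return 0;
    }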
00:03:08.893 [2024-04-16 20:40:59.770189] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:08.893 [2024-04-16 20:40:59.770201] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x8204446f0 00:03:08.893 [2024-04-16 20:40:59.770212] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:08.893 [2024-04-16 20:40:59.770223] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x8204446f0 00:03:08.893 [2024-04-16 20:40:59.770236] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:08.893 [2024-04-16 20:40:59.770247] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x8204446f0 00:03:08.893 passed 00:03:08.893 Test: for_each_channel_and_thread_exit_race ...passed 00:03:08.893 Test: for_each_thread_and_thread_exit_race ...passed 00:03:08.893 00:03:08.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.893 suites 1 1 n/a 0 0 00:03:08.893 tests 20 20 20 0 0 00:03:08.893 asserts 409 409 409 0 n/a 00:03:08.893 00:03:08.893 Elapsed time = 0.008 seconds 00:03:08.893 00:03:08.893 real 0m0.012s 00:03:08.893 user 0m0.011s 00:03:08.893 sys 0m0.008s 00:03:08.893 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.893 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.893 ************************************ 00:03:08.893 END TEST unittest_thread 00:03:08.893 ************************************ 00:03:08.893 20:40:59 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:08.893 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.893 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.893 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.893 ************************************ 00:03:08.893 START TEST unittest_iobuf 00:03:08.893 ************************************ 00:03:08.893 20:40:59 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:08.893 00:03:08.893 00:03:08.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.893 http://cunit.sourceforge.net/ 00:03:08.893 00:03:08.893 00:03:08.893 Suite: io_channel 00:03:08.893 Test: iobuf ...passed 00:03:08.893 Test: iobuf_cache ...[2024-04-16 20:40:59.820303] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:08.893 [2024-04-16 20:40:59.820479] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:08.893 [2024-04-16 20:40:59.820504] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 316:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. 
You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:08.893 [2024-04-16 20:40:59.820514] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 318:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:08.893 [2024-04-16 20:40:59.820524] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:08.893 [2024-04-16 20:40:59.820532] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:08.893 passed 00:03:08.893 00:03:08.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.893 suites 1 1 n/a 0 0 00:03:08.893 tests 2 2 2 0 0 00:03:08.893 asserts 107 107 107 0 n/a 00:03:08.893 00:03:08.893 Elapsed time = 0.000 seconds 00:03:08.893 00:03:08.893 real 0m0.005s 00:03:08.893 user 0m0.004s 00:03:08.893 sys 0m0.004s 00:03:08.893 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.893 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.893 ************************************ 00:03:08.893 END TEST unittest_iobuf 00:03:08.893 ************************************ 00:03:08.893 20:40:59 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:03:08.893 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.893 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.893 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:08.893 ************************************ 00:03:08.893 START TEST unittest_util 00:03:08.893 ************************************ 00:03:08.893 20:40:59 -- common/autotest_common.sh@1104 -- # unittest_util 00:03:08.893 20:40:59 -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:08.893 00:03:08.893 00:03:08.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.893 http://cunit.sourceforge.net/ 00:03:08.893 00:03:08.893 00:03:08.893 Suite: base64 00:03:08.893 Test: test_base64_get_encoded_strlen ...passed 00:03:08.893 Test: test_base64_get_decoded_len ...passed 00:03:08.893 Test: test_base64_encode ...passed 00:03:08.893 Test: test_base64_decode ...passed 00:03:08.893 Test: test_base64_urlsafe_encode ...passed 00:03:08.893 Test: test_base64_urlsafe_decode ...passed 00:03:08.893 00:03:08.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.893 suites 1 1 n/a 0 0 00:03:08.893 tests 6 6 6 0 0 00:03:08.893 asserts 112 112 112 0 n/a 00:03:08.893 00:03:08.893 Elapsed time = 0.000 seconds 00:03:08.893 20:40:59 -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:08.893 00:03:08.893 00:03:08.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.893 http://cunit.sourceforge.net/ 00:03:08.893 00:03:08.893 00:03:08.893 Suite: bit_array 00:03:08.893 Test: test_1bit ...passed 00:03:08.893 Test: test_64bit ...passed 00:03:08.893 Test: test_find ...passed 00:03:08.893 Test: test_resize ...passed 00:03:08.893 Test: test_errors ...passed 00:03:08.893 Test: test_count ...passed 00:03:08.893 Test: test_mask_store_load ...passed 00:03:08.893 Test: test_mask_clear ...passed 00:03:08.893 00:03:08.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.893 suites 1 1 n/a 0 0 00:03:08.893 tests 8 8 8 0 0 00:03:08.893 asserts 5075 5075 5075 0 
n/a 00:03:08.893 00:03:08.893 Elapsed time = 0.000 seconds 00:03:08.893 20:40:59 -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:08.893 00:03:08.893 00:03:08.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.893 http://cunit.sourceforge.net/ 00:03:08.893 00:03:08.893 00:03:08.893 Suite: cpuset 00:03:08.893 Test: test_cpuset ...passed 00:03:08.893 Test: test_cpuset_parse ...[2024-04-16 20:40:59.891651] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:08.893 [2024-04-16 20:40:59.892041] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:08.893 [2024-04-16 20:40:59.892093] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:08.893 [2024-04-16 20:40:59.892118] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:08.893 [2024-04-16 20:40:59.892140] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:08.893 [2024-04-16 20:40:59.892160] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:08.893 [2024-04-16 20:40:59.892182] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:08.894 [2024-04-16 20:40:59.892203] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:08.894 passed 00:03:08.894 Test: test_cpuset_fmt ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 3 3 3 0 0 00:03:08.894 asserts 65 65 65 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: crc16 00:03:08.894 Test: test_crc16_t10dif ...passed 00:03:08.894 Test: test_crc16_t10dif_seed ...passed 00:03:08.894 Test: test_crc16_t10dif_copy ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 3 3 3 0 0 00:03:08.894 asserts 5 5 5 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: crc32_ieee 00:03:08.894 Test: test_crc32_ieee ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 1 1 1 0 0 00:03:08.894 asserts 1 1 1 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - 
Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: crc32c 00:03:08.894 Test: test_crc32c ...passed 00:03:08.894 Test: test_crc32c_nvme ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 2 2 2 0 0 00:03:08.894 asserts 16 16 16 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: crc64 00:03:08.894 Test: test_crc64_nvme ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 1 1 1 0 0 00:03:08.894 asserts 4 4 4 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: string 00:03:08.894 Test: test_parse_ip_addr ...passed 00:03:08.894 Test: test_str_chomp ...passed 00:03:08.894 Test: test_parse_capacity ...passed 00:03:08.894 Test: test_sprintf_append_realloc ...passed 00:03:08.894 Test: test_strtol ...passed 00:03:08.894 Test: test_strtoll ...passed 00:03:08.894 Test: test_strarray ...passed 00:03:08.894 Test: test_strcpy_replace ...passed 00:03:08.894 00:03:08.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.894 suites 1 1 n/a 0 0 00:03:08.894 tests 8 8 8 0 0 00:03:08.894 asserts 161 161 161 0 n/a 00:03:08.894 00:03:08.894 Elapsed time = 0.000 seconds 00:03:08.894 20:40:59 -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:08.894 00:03:08.894 00:03:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.894 http://cunit.sourceforge.net/ 00:03:08.894 00:03:08.894 00:03:08.894 Suite: dif 00:03:08.894 Test: dif_generate_and_verify_test ...[2024-04-16 20:40:59.939459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:08.894 [2024-04-16 20:40:59.939745] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:08.894 [2024-04-16 20:40:59.939810] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:08.894 [2024-04-16 20:40:59.939862] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:08.894 [2024-04-16 20:40:59.939912] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:08.894 [2024-04-16 20:40:59.939962] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:08.894 passed 00:03:08.894 Test: dif_disable_check_test ...[2024-04-16 20:40:59.940143] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
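The dif suite starting here verifies T10 protection information: an 8-byte tuple of guard CRC, application tag, and reference tag carried with each block, which is what the "Failed to compare Guard/App Tag/Ref Tag" messages report. A self-contained sketch of the conventional layout and guard check, assuming the standard T10-DIF CRC16 polynomial 0x8BB7 and a hypothetical dif_verify — not SPDK's spdk_dif_verify:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct t10_dif {            /* conventional 8-byte protection information tuple */
        uint16_t guard;         /* CRC16 over the data block */
        uint16_t app_tag;       /* application-defined */
        uint32_t ref_tag;       /* typically the low 32 bits of the LBA */
    };

    /* bitwise CRC16 with the T10-DIF polynomial 0x8BB7, initial value 0 */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* hypothetical verifier mirroring the shape of the errors in this log;
     * a full implementation would also compare the app tag */
    static int dif_verify(const uint8_t *block, size_t len,
                          const struct t10_dif *pi, uint32_t lba)
    {
        uint16_t guard = crc16_t10dif(block, len);
        if (guard != pi->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%x, Expected=%x, Actual=%x\n",
                    (unsigned)lba, (unsigned)pi->guard, (unsigned)guard);
            return -1;
        }
        if (lba != pi->ref_tag) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%x, Expected=%x, Actual=%x\n",
                    (unsigned)lba, (unsigned)pi->ref_tag, (unsigned)lba);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t block[512];
        memset(block, 0xab, sizeof(block));
        struct t10_dif pi = { .guard = crc16_t10dif(block, sizeof(block)),
                              .app_tag = 0x88, .ref_tag = 0x58 };
        printf("wrong LBA: %d\n", dif_verify(block, sizeof(block), &pi, 0x23));
        printf("right LBA: %d\n", dif_verify(block, sizeof(block), &pi, 0x58));
        return 0;
    }

The later dif_sec_* cases exercise variants of this check across metadata sizes, block sizes, and split I/O vectors, which is why the same comparison errors repeat with different Expected/Actual values.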
00:03:08.894 [2024-04-16 20:40:59.940206] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:08.894 [2024-04-16 20:40:59.940256] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:08.894 passed 00:03:08.894 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-16 20:40:59.940445] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:08.894 [2024-04-16 20:40:59.940499] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:08.894 [2024-04-16 20:40:59.940551] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:08.894 [2024-04-16 20:40:59.940603] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:08.894 [2024-04-16 20:40:59.940654] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940704] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940765] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940816] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940866] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940915] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:08.894 [2024-04-16 20:40:59.940965] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:08.894 passed 00:03:08.894 Test: dif_apptag_mask_test ...[2024-04-16 20:40:59.941018] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:08.894 [2024-04-16 20:40:59.941069] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:08.894 passed 00:03:08.894 Test: dif_sec_512_md_0_error_test ...passed 00:03:08.894 Test: dif_sec_4096_md_0_error_test ...passed 00:03:08.894 Test: dif_sec_4100_md_128_error_test ...[2024-04-16 20:40:59.941102] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:08.894 [2024-04-16 20:40:59.941115] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:08.894 [2024-04-16 20:40:59.941125] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:03:08.894 [2024-04-16 20:40:59.941136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:08.894 passed 00:03:08.894 Test: dif_guard_seed_test ...passed 00:03:08.894 Test: dif_guard_value_test ...[2024-04-16 20:40:59.941146] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:08.894 passed 00:03:08.894 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:08.894 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:08.894 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-16 20:40:59.947538] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.894 [2024-04-16 20:40:59.947819] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.894 [2024-04-16 20:40:59.948092] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.894 [2024-04-16 20:40:59.948365] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.894 [2024-04-16 20:40:59.948639] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.894 [2024-04-16 20:40:59.948911] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.894 [2024-04-16 20:40:59.949184] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.894 [2024-04-16 20:40:59.949327] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.894 [2024-04-16 20:40:59.949471] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.894 [2024-04-16 20:40:59.949740] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.894 [2024-04-16 20:40:59.950010] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.950287] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.950556] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.950841] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.951111] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.895 [2024-04-16 20:40:59.951253] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.895 [2024-04-16 20:40:59.951397] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.895 [2024-04-16 20:40:59.951666] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.895 [2024-04-16 20:40:59.951936] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.952204] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.952474] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.952742] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.953010] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.895 passed 00:03:08.895 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-16 20:40:59.953154] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.895 [2024-04-16 20:40:59.953185] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.895 [2024-04-16 20:40:59.953221] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.895 [2024-04-16 20:40:59.953257] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953293] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953328] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.953364] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.953400] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.895 [2024-04-16 20:40:59.953423] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.895 [2024-04-16 20:40:59.953447] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.895 [2024-04-16 20:40:59.953483] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.895 [2024-04-16 20:40:59.953518] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953553] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953589] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.953624] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.953659] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.895 [2024-04-16 20:40:59.953683] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.895 [2024-04-16 20:40:59.953706] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.895 [2024-04-16 20:40:59.953741] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.895 [2024-04-16 20:40:59.953777] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953812] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.953847] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.953883] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 passed 00:03:08.895 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-16 20:40:59.953919] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.895 [2024-04-16 20:40:59.953942] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.895 [2024-04-16 20:40:59.953967] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.895 [2024-04-16 20:40:59.954002] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.895 [2024-04-16 20:40:59.954036] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954072] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954107] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.954143] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.954178] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.895 [2024-04-16 20:40:59.954202] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.895 [2024-04-16 20:40:59.954225] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.895 [2024-04-16 20:40:59.954260] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.895 [2024-04-16 20:40:59.954296] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954331] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954366] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.954402] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.954437] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.895 [2024-04-16 20:40:59.954460] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.895 [2024-04-16 20:40:59.954484] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.895 [2024-04-16 20:40:59.954519] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.895 [2024-04-16 20:40:59.954554] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954590] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954625] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.954660] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.954696] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.895 [2024-04-16 20:40:59.954718] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.895 passed 00:03:08.895 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-16 20:40:59.954749] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.895 [2024-04-16 20:40:59.954785] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.895 [2024-04-16 20:40:59.954820] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954855] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.954891] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.954926] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.895 [2024-04-16 20:40:59.954961] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.895 [2024-04-16 20:40:59.954984] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.895 [2024-04-16 20:40:59.955018] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.895 [2024-04-16 20:40:59.955053] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.895 [2024-04-16 20:40:59.955089] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.955125] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.955160] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed 
to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.955195] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.955231] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.895 [2024-04-16 20:40:59.955254] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.895 [2024-04-16 20:40:59.955276] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.895 [2024-04-16 20:40:59.955313] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.895 [2024-04-16 20:40:59.955363] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.955398] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.895 [2024-04-16 20:40:59.955434] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.955470] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.895 [2024-04-16 20:40:59.955502] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.895 [2024-04-16 20:40:59.955525] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.895 passed 00:03:08.895 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-16 20:40:59.955551] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.896 [2024-04-16 20:40:59.955586] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.896 [2024-04-16 20:40:59.955631] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.955660] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.955689] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.955718] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.955746] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.896 passed 00:03:08.896 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-16 20:40:59.955765] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.896 [2024-04-16 20:40:59.955786] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.896 [2024-04-16 20:40:59.955815] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.896 [2024-04-16 20:40:59.955844] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.955873] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.955902] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.955931] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.955960] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.896 [2024-04-16 20:40:59.955989] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.896 [2024-04-16 20:40:59.956007] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.896 [2024-04-16 20:40:59.956034] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.896 [2024-04-16 20:40:59.956061] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956088] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956114] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956141] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956168] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.896 passed 00:03:08.896 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-16 20:40:59.956186] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.896 [2024-04-16 20:40:59.956205] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.896 [2024-04-16 20:40:59.956232] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:08.896 [2024-04-16 20:40:59.956260] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956285] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956312] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.956339] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.956366] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.896 [2024-04-16 20:40:59.956383] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8505 00:03:08.896 passed 00:03:08.896 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-16 20:40:59.956402] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.896 [2024-04-16 20:40:59.956429] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:03:08.896 [2024-04-16 20:40:59.956456] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956483] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956511] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956537] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956564] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.896 [2024-04-16 20:40:59.956582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b1aaeb1e 00:03:08.896 [2024-04-16 20:40:59.956600] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.896 [2024-04-16 20:40:59.956627] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:03:08.896 [2024-04-16 20:40:59.956654] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956681] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.956708] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956735] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.956762] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.896 [2024-04-16 20:40:59.956780] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=28731ef0212e7de 00:03:08.896 passed 00:03:08.896 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:08.896 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:08.896 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:08.896 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-16 20:40:59.960706] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.896 [2024-04-16 20:40:59.960838] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:08.896 [2024-04-16 20:40:59.960956] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.961074] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.961192] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.961309] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.961430] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.896 [2024-04-16 20:40:59.961548] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5632 00:03:08.896 [2024-04-16 20:40:59.961665] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.896 [2024-04-16 20:40:59.961783] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:03:08.896 [2024-04-16 20:40:59.961901] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.962017] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.962145] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.962263] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.962381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.896 [2024-04-16 20:40:59.962498] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=635bedd2 00:03:08.896 [2024-04-16 20:40:59.962617] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.896 [2024-04-16 20:40:59.962742] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:03:08.896 [2024-04-16 20:40:59.962860] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.962978] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.963096] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.963215] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.896 [2024-04-16 20:40:59.963334] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.896 [2024-04-16 20:40:59.963454] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b2e011d91cff751a 00:03:08.896 passed 00:03:08.896 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-16 20:40:59.963491] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.896 [2024-04-16 20:40:59.963522] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:08.896 [2024-04-16 20:40:59.963552] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.963581] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.896 [2024-04-16 20:40:59.963611] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.896 [2024-04-16 20:40:59.963641] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 
00:03:08.896 [2024-04-16 20:40:59.963671] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.897 [2024-04-16 20:40:59.963700] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5632 00:03:08.897 [2024-04-16 20:40:59.963730] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.897 [2024-04-16 20:40:59.963759] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:03:08.897 [2024-04-16 20:40:59.963789] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.963819] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.963849] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.963878] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.963908] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.897 [2024-04-16 20:40:59.963938] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=635bedd2 00:03:08.897 [2024-04-16 20:40:59.963968] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.897 [2024-04-16 20:40:59.963998] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:03:08.897 [2024-04-16 20:40:59.964028] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.964057] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.964087] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.964117] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.964146] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.897 passed 00:03:08.897 Test: dix_sec_512_md_0_error ...passed 00:03:08.897 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-04-16 20:40:59.964176] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b2e011d91cff751a 00:03:08.897 [2024-04-16 20:40:59.964183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:08.897 passed 00:03:08.897 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:08.897 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:08.897 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:08.897 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:08.897 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:08.897 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:08.897 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:08.897 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:08.897 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-16 20:40:59.968016] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.897 [2024-04-16 20:40:59.968145] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:08.897 [2024-04-16 20:40:59.968266] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.968387] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.968508] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.897 [2024-04-16 20:40:59.968629] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.897 [2024-04-16 20:40:59.968747] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.897 [2024-04-16 20:40:59.968866] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5632 00:03:08.897 [2024-04-16 20:40:59.968982] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.897 [2024-04-16 20:40:59.969100] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:03:08.897 [2024-04-16 20:40:59.969217] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.969334] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.969451] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.969567] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.969683] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.897 [2024-04-16 20:40:59.969801] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=635bedd2 00:03:08.897 [2024-04-16 20:40:59.969918] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.897 [2024-04-16 20:40:59.970037] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:03:08.897 [2024-04-16 20:40:59.970155] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.970273] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.970391] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.970510] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.970628] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.897 passed 00:03:08.897 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-16 20:40:59.970759] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b2e011d91cff751a 00:03:08.897 [2024-04-16 20:40:59.970797] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:08.897 [2024-04-16 20:40:59.970827] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:08.897 [2024-04-16 20:40:59.970858] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.970890] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.970923] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.897 [2024-04-16 20:40:59.970955] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:08.897 [2024-04-16 20:40:59.970988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6352 00:03:08.897 [2024-04-16 20:40:59.971022] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5632 00:03:08.897 [2024-04-16 20:40:59.971055] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:03:08.897 [2024-04-16 20:40:59.971087] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:03:08.897 [2024-04-16 20:40:59.971120] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.971152] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.971185] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.971217] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.971249] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=aa2d8a5d 00:03:08.897 [2024-04-16 20:40:59.971282] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=635bedd2 00:03:08.897 [2024-04-16 20:40:59.971316] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:03:08.897 [2024-04-16 20:40:59.971349] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:03:08.897 [2024-04-16 20:40:59.971382] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.971415] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:08.897 [2024-04-16 20:40:59.971447] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.971480] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:03:08.897 [2024-04-16 20:40:59.971513] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4ed78872741b8eea 00:03:08.897 [2024-04-16 20:40:59.971546] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b2e011d91cff751a 00:03:08.897 passed 00:03:08.897 Test: set_md_interleave_iovs_test ...passed 00:03:08.897 Test: set_md_interleave_iovs_split_test ...passed 00:03:08.897 Test: dif_generate_stream_pi_16_test ...passed 00:03:08.897 Test: dif_generate_stream_test ...passed 00:03:08.897 Test: set_md_interleave_iovs_alignment_test ...passed 00:03:08.897 Test: dif_generate_split_test ...[2024-04-16 20:40:59.972179] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
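[editor's note] The dif_*_inject_1_2_4_8_* cases above corrupt single bits of the on-disk protection information on purpose and expect _dif_verify to catch them, which is why each burst of "Failed to compare Guard/App Tag/Ref Tag" errors is immediately followed by "passed" (compare Expected=f94c with Actual=fd4c: they differ by exactly one bit, 0x0400). Below is a minimal illustrative sketch of a comparison that produces messages of this shape. The struct layout and helper names are assumptions for illustration only, not SPDK's actual lib/util/dif.c, and the real suite also exercises 32- and 64-bit guard formats (visible in the wider Expected= values above).

    /* Illustrative sketch only -- NOT SPDK's lib/util/dif.c. Field widths follow
     * the classic 8-byte T10 PI tuple; real DIF seeds the ref tag from the LBA. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pi_tuple {               /* hypothetical layout for illustration */
            uint16_t guard;         /* CRC over the data block */
            uint16_t app_tag;       /* application-defined tag */
            uint32_t ref_tag;       /* reference tag, typically LBA-derived */
    };

    static bool
    pi_verify(uint64_t lba, const struct pi_tuple *exp, const struct pi_tuple *act)
    {
            bool ok = true;

            if (exp->guard != act->guard) {
                    fprintf(stderr, "Failed to compare Guard: LBA=%" PRIx64
                            ", Expected=%x, Actual=%x\n", lba, exp->guard, act->guard);
                    ok = false;
            }
            if (exp->app_tag != act->app_tag) {
                    fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIx64
                            ", Expected=%x, Actual=%x\n", lba, exp->app_tag, act->app_tag);
                    ok = false;
            }
            if (exp->ref_tag != act->ref_tag) {
                    fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIx64
                            ", Expected=%x, Actual=%x\n", lba, exp->ref_tag, act->ref_tag);
                    ok = false;
            }
            return ok;
    }

Note that the App Tag lines show the same single-bit pattern (Expected=88 vs Actual=488, a 0x400 flip), consistent with the inject_1_2_4_8 naming.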
00:03:08.897 passed 00:03:08.897 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:08.897 Test: dif_verify_split_test ...passed 00:03:08.897 Test: dif_verify_stream_multi_segments_test ...passed 00:03:08.898 Test: update_crc32c_pi_16_test ...passed 00:03:08.898 Test: update_crc32c_test ...passed 00:03:08.898 Test: dif_update_crc32c_split_test ...passed 00:03:08.898 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:08.898 Test: get_range_with_md_test ...passed 00:03:08.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:08.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:08.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:08.898 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:08.898 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:08.898 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:08.898 Test: dif_generate_and_verify_unmap_test ...passed 00:03:08.898 00:03:08.898 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.898 suites 1 1 n/a 0 0 00:03:08.898 tests 79 79 79 0 0 00:03:08.898 asserts 3584 3584 3584 0 n/a 00:03:08.898 00:03:08.898 Elapsed time = 0.039 seconds 00:03:08.898 20:40:59 -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:08.898 00:03:08.898 00:03:08.898 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.898 http://cunit.sourceforge.net/ 00:03:08.898 00:03:08.898 00:03:08.898 Suite: iov 00:03:08.898 Test: test_single_iov ...passed 00:03:08.898 Test: test_simple_iov ...passed 00:03:08.898 Test: test_complex_iov ...passed 00:03:08.898 Test: test_iovs_to_buf ...passed 00:03:08.898 Test: test_buf_to_iovs ...passed 00:03:08.898 Test: test_memset ...passed 00:03:08.898 Test: test_iov_one ...passed 00:03:08.898 Test: test_iov_xfer ...passed 00:03:08.898 00:03:08.898 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.898 suites 1 1 n/a 0 0 00:03:08.898 tests 8 8 8 0 0 00:03:08.898 asserts 156 156 156 0 n/a 00:03:08.898 00:03:08.898 Elapsed time = 0.000 seconds 00:03:08.898 20:40:59 -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:08.898 00:03:08.898 00:03:08.898 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.898 http://cunit.sourceforge.net/ 00:03:08.898 00:03:08.898 00:03:08.898 Suite: math 00:03:08.898 Test: test_serial_number_arithmetic ...passed 00:03:08.898 Suite: erase 00:03:08.898 Test: test_memset_s ...passed 00:03:08.898 00:03:08.898 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.898 suites 2 2 n/a 0 0 00:03:08.898 tests 2 2 2 0 0 00:03:08.898 asserts 18 18 18 0 n/a 00:03:08.898 00:03:08.898 Elapsed time = 0.000 seconds 00:03:08.898 20:40:59 -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:08.898 00:03:08.898 00:03:08.898 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.898 http://cunit.sourceforge.net/ 00:03:08.898 00:03:08.898 00:03:08.898 Suite: pipe 00:03:08.898 Test: test_create_destroy ...passed 00:03:08.898 Test: test_write_get_buffer ...passed 00:03:08.898 Test: test_write_advance ...passed 00:03:08.898 Test: test_read_get_buffer ...passed 00:03:08.898 Test: test_read_advance ...passed 00:03:08.898 Test: test_data ...passed 00:03:08.898 00:03:08.898 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.898 
suites 1 1 n/a 0 0 00:03:08.898 tests 6 6 6 0 0 00:03:08.898 asserts 250 250 250 0 n/a 00:03:08.898 00:03:08.898 Elapsed time = 0.000 seconds 00:03:08.898 20:41:00 -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:08.898 00:03:08.898 00:03:08.898 CUnit - A unit testing framework for C - Version 2.1-3 00:03:08.898 http://cunit.sourceforge.net/ 00:03:08.898 00:03:08.898 00:03:08.898 Suite: xor 00:03:08.898 Test: test_xor_gen ...passed 00:03:08.898 00:03:08.898 Run Summary: Type Total Ran Passed Failed Inactive 00:03:08.898 suites 1 1 n/a 0 0 00:03:08.898 tests 1 1 1 0 0 00:03:08.898 asserts 17 17 17 0 n/a 00:03:08.898 00:03:08.898 Elapsed time = 0.000 seconds 00:03:08.898 00:03:08.898 real 0m0.145s 00:03:08.898 user 0m0.073s 00:03:08.898 sys 0m0.073s 00:03:08.898 20:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.898 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:08.898 ************************************ 00:03:08.898 END TEST unittest_util 00:03:08.898 ************************************ 00:03:09.158 20:41:00 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:09.158 20:41:00 -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:09.158 20:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.158 20:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.158 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.158 ************************************ 00:03:09.158 START TEST unittest_dma 00:03:09.158 ************************************ 00:03:09.158 20:41:00 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:09.158 00:03:09.158 00:03:09.158 CUnit - A unit testing framework for C - Version 2.1-3 00:03:09.158 http://cunit.sourceforge.net/ 00:03:09.158 00:03:09.158 00:03:09.158 Suite: dma_suite 00:03:09.158 Test: test_dma ...[2024-04-16 20:41:00.062431] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:09.158 passed 00:03:09.158 00:03:09.158 Run Summary: Type Total Ran Passed Failed Inactive 00:03:09.158 suites 1 1 n/a 0 0 00:03:09.158 tests 1 1 1 0 0 00:03:09.158 asserts 50 50 50 0 n/a 00:03:09.158 00:03:09.158 Elapsed time = 0.000 seconds 00:03:09.158 00:03:09.158 real 0m0.008s 00:03:09.158 user 0m0.008s 00:03:09.158 sys 0m0.001s 00:03:09.158 20:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.158 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.158 ************************************ 00:03:09.158 END TEST unittest_dma 00:03:09.158 ************************************ 00:03:09.158 20:41:00 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:03:09.158 20:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.158 20:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.158 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.158 ************************************ 00:03:09.158 START TEST unittest_init 00:03:09.158 ************************************ 00:03:09.158 20:41:00 -- common/autotest_common.sh@1104 -- # unittest_init 00:03:09.158 20:41:00 -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:09.158 00:03:09.158 00:03:09.158 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:09.158 http://cunit.sourceforge.net/ 00:03:09.158 00:03:09.158 00:03:09.158 Suite: subsystem_suite 00:03:09.158 Test: subsystem_sort_test_depends_on_single ...passed 00:03:09.158 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:09.158 Test: subsystem_sort_test_missing_dependency ...[2024-04-16 20:41:00.112437] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:09.158 passed 00:03:09.158 00:03:09.158 [2024-04-16 20:41:00.112609] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:09.158 Run Summary: Type Total Ran Passed Failed Inactive 00:03:09.158 suites 1 1 n/a 0 0 00:03:09.159 tests 3 3 3 0 0 00:03:09.159 asserts 20 20 20 0 n/a 00:03:09.159 00:03:09.159 Elapsed time = 0.000 seconds 00:03:09.159 00:03:09.159 real 0m0.005s 00:03:09.159 user 0m0.004s 00:03:09.159 sys 0m0.004s 00:03:09.159 20:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.159 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.159 ************************************ 00:03:09.159 END TEST unittest_init 00:03:09.159 ************************************ 00:03:09.159 20:41:00 -- unit/unittest.sh@289 -- # '[' no = yes ']' 00:03:09.159 20:41:00 -- unit/unittest.sh@302 -- # set +x 00:03:09.159 00:03:09.159 00:03:09.159 ===================== 00:03:09.159 All unit tests passed 00:03:09.159 ===================== 00:03:09.159 WARN: lcov not installed or SPDK built without coverage! 00:03:09.159 WARN: neither valgrind nor ASAN is enabled! 00:03:09.159 00:03:09.159 00:03:09.159 00:03:09.159 real 0m13.787s 00:03:09.159 user 0m10.739s 00:03:09.159 sys 0m1.885s 00:03:09.159 20:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.159 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.159 ************************************ 00:03:09.159 END TEST unittest 00:03:09.159 ************************************ 00:03:09.159 20:41:00 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:09.159 20:41:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:09.159 20:41:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:09.159 20:41:00 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:09.159 20:41:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:09.159 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.159 20:41:00 -- spdk/autotest.sh@175 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:09.159 20:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.159 20:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.159 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.159 ************************************ 00:03:09.159 START TEST env 00:03:09.159 ************************************ 00:03:09.159 20:41:00 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:09.418 * Looking for test storage... 
00:03:09.418 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:03:09.418 20:41:00 -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:09.418 20:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.418 20:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.418 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.418 ************************************ 00:03:09.418 START TEST env_memory 00:03:09.418 ************************************ 00:03:09.418 20:41:00 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:09.418 00:03:09.418 00:03:09.418 CUnit - A unit testing framework for C - Version 2.1-3 00:03:09.418 http://cunit.sourceforge.net/ 00:03:09.418 00:03:09.418 00:03:09.418 Suite: memory 00:03:09.418 Test: alloc and free memory map ...[2024-04-16 20:41:00.419674] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:09.418 passed 00:03:09.418 Test: mem map translation ...[2024-04-16 20:41:00.426382] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:09.418 [2024-04-16 20:41:00.426424] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:09.418 [2024-04-16 20:41:00.426439] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:09.418 [2024-04-16 20:41:00.426462] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:09.418 passed 00:03:09.418 Test: mem map registration ...[2024-04-16 20:41:00.433700] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:09.418 [2024-04-16 20:41:00.433729] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:09.418 passed 00:03:09.418 Test: mem map adjacent registrations ...passed 00:03:09.418 00:03:09.418 Run Summary: Type Total Ran Passed Failed Inactive 00:03:09.418 suites 1 1 n/a 0 0 00:03:09.418 tests 4 4 4 0 0 00:03:09.418 asserts 152 152 152 0 n/a 00:03:09.418 00:03:09.418 Elapsed time = 0.023 seconds 00:03:09.418 00:03:09.418 real 0m0.039s 00:03:09.418 user 0m0.016s 00:03:09.418 sys 0m0.023s 00:03:09.418 20:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.418 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.418 ************************************ 00:03:09.418 END TEST env_memory 00:03:09.418 ************************************ 00:03:09.418 20:41:00 -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:09.418 20:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.418 20:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.418 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.418 ************************************ 00:03:09.418 START TEST env_vtophys 00:03:09.418 ************************************ 00:03:09.418 20:41:00 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:09.418 EAL: lib.eal log level changed from notice to debug 00:03:09.418 EAL: Sysctl reports 10 cpus 00:03:09.418 EAL: Detected lcore 0 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 1 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 2 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 3 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 4 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 5 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 6 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 7 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 8 as core 0 on socket 0 00:03:09.418 EAL: Detected lcore 9 as core 0 on socket 0 00:03:09.418 EAL: Maximum logical cores by configuration: 128 00:03:09.418 EAL: Detected CPU lcores: 10 00:03:09.418 EAL: Detected NUMA nodes: 1 00:03:09.418 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:09.418 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:09.418 EAL: Checking presence of .so 'librte_eal.so' 00:03:09.418 EAL: Detected static linkage of DPDK 00:03:09.418 EAL: No shared files mode enabled, IPC will be disabled 00:03:09.418 EAL: PCI scan found 10 devices 00:03:09.418 EAL: Specific IOVA mode is not requested, autodetecting 00:03:09.418 EAL: Selecting IOVA mode according to bus requests 00:03:09.418 EAL: Bus pci wants IOVA as 'PA' 00:03:09.418 EAL: Selected IOVA mode 'PA' 00:03:09.418 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:09.418 EAL: Ask a virtual area of 0x2e000 bytes 00:03:09.418 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000fc9000) not respected! 00:03:09.418 EAL: This may cause issues with mapping memory into secondary processes 00:03:09.418 EAL: Virtual area found at 0x1000fc9000 (size = 0x2e000) 00:03:09.418 EAL: Setting up physically contiguous memory... 00:03:09.418 EAL: Ask a virtual area of 0x1000 bytes 00:03:09.418 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1001ae8000) not respected! 00:03:09.418 EAL: This may cause issues with mapping memory into secondary processes 00:03:09.418 EAL: Virtual area found at 0x1001ae8000 (size = 0x1000) 00:03:09.418 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:09.418 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:09.418 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:09.418 EAL: This may cause issues with mapping memory into secondary processes 00:03:09.418 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:09.418 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:09.677 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x230000000, len 268435456 00:03:09.677 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x240000000, len 268435456 00:03:09.677 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x250000000, len 268435456 00:03:09.677 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x260000000, len 268435456 00:03:09.677 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x270000000, len 268435456 00:03:09.935 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x280000000, len 268435456 00:03:09.935 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x290000000, len 268435456 00:03:09.935 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2a0000000, len 268435456 00:03:09.935 EAL: No shared files mode enabled, IPC is disabled 00:03:09.935 EAL: Added 2048M to heap on socket 0 00:03:09.935 EAL: TSC is not safe to use in SMP mode 00:03:09.935 EAL: TSC is not invariant 00:03:09.935 EAL: TSC frequency is ~2294601 KHz 00:03:09.935 EAL: Main lcore 0 is ready (tid=82c256000;cpuset=[0]) 00:03:09.935 EAL: PCI scan found 10 devices 00:03:09.935 EAL: Registering mem event callbacks not supported 00:03:09.935 00:03:09.935 00:03:09.935 CUnit - A unit testing framework for C - Version 2.1-3 00:03:09.935 http://cunit.sourceforge.net/ 00:03:09.935 00:03:09.935 00:03:09.935 Suite: components_suite 00:03:09.935 Test: vtophys_malloc_test ...passed 00:03:10.194 Test: vtophys_spdk_malloc_test ...passed 00:03:10.194 00:03:10.194 Run Summary: Type Total Ran Passed Failed Inactive 00:03:10.194 suites 1 1 n/a 0 0 00:03:10.194 tests 2 2 2 0 0 00:03:10.194 asserts 497 497 497 0 n/a 00:03:10.194 00:03:10.194 Elapsed time = 0.312 seconds 00:03:10.194 00:03:10.194 real 0m0.811s 00:03:10.194 user 0m0.313s 00:03:10.194 sys 0m0.493s 00:03:10.194 20:41:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.194 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.194 ************************************ 00:03:10.194 END TEST env_vtophys 00:03:10.194 ************************************ 00:03:10.453 20:41:01 -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:10.453 20:41:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.453 20:41:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.453 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.453 ************************************ 00:03:10.453 START TEST env_pci 00:03:10.453 ************************************ 00:03:10.453 20:41:01 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:10.453 00:03:10.453 00:03:10.453 CUnit - A unit testing framework for C - Version 2.1-3 00:03:10.453 http://cunit.sourceforge.net/ 00:03:10.453 00:03:10.453 00:03:10.453 Suite: pci 00:03:10.453 Test: pci_hook ...passed 00:03:10.453 00:03:10.453 Run Summary: Type Total Ran Passed Failed Inactive 00:03:10.453 suites 1 1 n/a 0 0 00:03:10.453 tests 1 1 1 0 0 00:03:10.453 asserts 25 25 25 0 n/a 00:03:10.453 00:03:10.453 Elapsed time = 0.008 seconds 00:03:10.453 EAL: Cannot find device (10000:00:01.0) 00:03:10.453 EAL: Failed to attach device on primary process 00:03:10.453 00:03:10.453 real 0m0.012s 00:03:10.453 user 0m0.009s 00:03:10.453 sys 
0m0.012s 00:03:10.453 20:41:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.453 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.453 ************************************ 00:03:10.453 END TEST env_pci 00:03:10.453 ************************************ 00:03:10.453 20:41:01 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:10.453 20:41:01 -- env/env.sh@15 -- # uname 00:03:10.453 20:41:01 -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:10.453 20:41:01 -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:10.453 20:41:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:10.453 20:41:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.453 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.453 ************************************ 00:03:10.453 START TEST env_dpdk_post_init 00:03:10.453 ************************************ 00:03:10.453 20:41:01 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:10.453 EAL: Sysctl reports 10 cpus 00:03:10.453 EAL: Detected CPU lcores: 10 00:03:10.453 EAL: Detected NUMA nodes: 1 00:03:10.453 EAL: Detected static linkage of DPDK 00:03:10.453 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:10.453 EAL: Selected IOVA mode 'PA' 00:03:10.453 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:10.453 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x230000000, len 268435456 00:03:10.453 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x240000000, len 268435456 00:03:10.713 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x250000000, len 268435456 00:03:10.713 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x260000000, len 268435456 00:03:10.713 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x270000000, len 268435456 00:03:10.713 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x280000000, len 268435456 00:03:10.713 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x290000000, len 268435456 00:03:10.972 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2a0000000, len 268435456 00:03:10.972 EAL: TSC is not safe to use in SMP mode 00:03:10.972 EAL: TSC is not invariant 00:03:10.973 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:10.973 [2024-04-16 20:41:01.880027] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:10.973 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:10.973 Starting DPDK initialization... 00:03:10.973 Starting SPDK post initialization... 00:03:10.973 SPDK NVMe probe 00:03:10.973 Attaching to 0000:00:06.0 00:03:10.973 Attached to 0000:00:06.0 00:03:10.973 Cleaning up... 
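[editor's note] The env_dpdk_post_init run above walks the canonical init-then-probe sequence ("Starting DPDK initialization... SPDK NVMe probe... Attaching to 0000:00:06.0... Attached to 0000:00:06.0... Cleaning up..."). A hedged sketch of that sequence follows; the spdk_env_init and spdk_nvme_probe signatures are assumed from this SPDK vintage and should be checked against include/spdk/env.h and include/spdk/nvme.h in your tree.

    /* Minimal sketch of the init-then-probe flow exercised by
     * env_dpdk_post_init; signatures assumed for this SPDK vintage. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true;            /* accept every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);          /* defaults: core mask, etc. */
            opts.name = "env_dpdk_post_init_demo";  /* hypothetical app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;                   /* EAL/contigmem setup failed */
            }
            /* NULL trid: enumerate local PCIe controllers, as in the
             * "Probe PCI driver: spdk_nvme (1b36:0010)" line above. */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    return 1;
            }
            return 0;               /* real test detaches here: "Cleaning up..." */
    }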
00:03:10.973 00:03:10.973 real 0m0.509s 00:03:10.973 user 0m0.029s 00:03:10.973 sys 0m0.477s 00:03:10.973 20:41:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.973 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.973 ************************************ 00:03:10.973 END TEST env_dpdk_post_init 00:03:10.973 ************************************ 00:03:10.973 20:41:01 -- env/env.sh@26 -- # uname 00:03:10.973 20:41:01 -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:10.973 00:03:10.973 real 0m1.774s 00:03:10.973 user 0m0.565s 00:03:10.973 sys 0m1.268s 00:03:10.973 20:41:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.973 20:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.973 ************************************ 00:03:10.973 END TEST env 00:03:10.973 ************************************ 00:03:10.973 20:41:02 -- spdk/autotest.sh@176 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:10.973 20:41:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.973 20:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.973 20:41:02 -- common/autotest_common.sh@10 -- # set +x 00:03:10.973 ************************************ 00:03:10.973 START TEST rpc 00:03:10.973 ************************************ 00:03:10.973 20:41:02 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:11.232 * Looking for test storage... 00:03:11.232 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:11.232 20:41:02 -- rpc/rpc.sh@65 -- # spdk_pid=45214 00:03:11.232 20:41:02 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:11.232 20:41:02 -- rpc/rpc.sh@67 -- # waitforlisten 45214 00:03:11.232 20:41:02 -- common/autotest_common.sh@819 -- # '[' -z 45214 ']' 00:03:11.232 20:41:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:11.232 20:41:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:11.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:11.232 20:41:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:11.232 20:41:02 -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:11.232 20:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:11.232 20:41:02 -- common/autotest_common.sh@10 -- # set +x 00:03:11.232 [2024-04-16 20:41:02.222542] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:11.232 [2024-04-16 20:41:02.222935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:11.801 EAL: TSC is not safe to use in SMP mode 00:03:11.801 EAL: TSC is not invariant 00:03:11.801 [2024-04-16 20:41:02.683153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:11.801 [2024-04-16 20:41:02.776682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:11.801 [2024-04-16 20:41:02.776780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:11.801 [2024-04-16 20:41:02.776790] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45214' to capture a snapshot of events at runtime. 
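[editor's note] The rpc_integrity test that follows creates Malloc0 over JSON-RPC, layers Passthru0 on top ("Match on Malloc0", "bdev claimed"), and dumps both with bdev_get_bdevs; note "claimed": true with "claim_type": "exclusive_write" on Malloc0 in the JSON below. Here is a hedged sketch of the claim step inside a passthru-style vbdev module; the module struct contents and the exact claim variant are assumptions, so treat include/spdk/bdev_module.h in your SPDK version as authoritative.

    /* Hedged sketch of the claim step behind the "bdev claimed" notices
     * below; claim API variant assumed, verify against bdev_module.h. */
    #include "spdk/stdinc.h"
    #include "spdk/bdev_module.h"

    static struct spdk_bdev_module passthru_demo_if = {
            .name = "passthru_demo",    /* hypothetical module name */
    };

    static void
    base_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                  void *event_ctx)
    {
            /* react to resize/remove events from the base bdev */
    }

    static int
    claim_base(const char *base_name)
    {
            struct spdk_bdev_desc *desc;
            int rc;

            /* open the base bdev read-write; on success the log would show
             * "Match on <base_name>" followed by "base bdev opened" */
            rc = spdk_bdev_open_ext(base_name, true, base_event_cb, NULL, &desc);
            if (rc != 0) {
                    return rc;
            }
            /* after this, bdev_get_bdevs reports "claimed": true and
             * "claim_type": "exclusive_write" for the base bdev */
            rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(desc),
                                             desc, &passthru_demo_if);
            if (rc != 0) {
                    spdk_bdev_close(desc);
            }
            return rc;
    }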
00:03:11.801 [2024-04-16 20:41:02.776809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:12.060 20:41:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:12.060 20:41:03 -- common/autotest_common.sh@852 -- # return 0 00:03:12.060 20:41:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:12.060 20:41:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:12.060 20:41:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:12.060 20:41:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:12.060 20:41:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.060 20:41:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.060 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 ************************************ 00:03:12.320 START TEST rpc_integrity 00:03:12.320 ************************************ 00:03:12.320 20:41:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:12.320 20:41:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:12.320 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.320 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.320 20:41:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:12.320 20:41:03 -- rpc/rpc.sh@13 -- # jq length 00:03:12.320 20:41:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:12.320 20:41:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:12.320 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.320 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.320 20:41:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:12.320 20:41:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:12.320 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.320 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.320 20:41:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:12.320 { 00:03:12.320 "name": "Malloc0", 00:03:12.320 "aliases": [ 00:03:12.320 "a4208fdb-fc31-11ee-80f8-ef3e42bb1492" 00:03:12.320 ], 00:03:12.320 "product_name": "Malloc disk", 00:03:12.320 "block_size": 512, 00:03:12.320 "num_blocks": 16384, 00:03:12.320 "uuid": "a4208fdb-fc31-11ee-80f8-ef3e42bb1492", 00:03:12.320 "assigned_rate_limits": { 00:03:12.320 "rw_ios_per_sec": 0, 00:03:12.320 "rw_mbytes_per_sec": 0, 00:03:12.320 "r_mbytes_per_sec": 0, 00:03:12.320 "w_mbytes_per_sec": 0 00:03:12.320 }, 00:03:12.320 "claimed": false, 00:03:12.320 "zoned": false, 00:03:12.320 "supported_io_types": { 00:03:12.320 "read": true, 00:03:12.320 "write": true, 00:03:12.320 "unmap": true, 00:03:12.320 "write_zeroes": true, 00:03:12.320 "flush": true, 00:03:12.320 "reset": true, 00:03:12.320 "compare": false, 00:03:12.320 "compare_and_write": false, 00:03:12.320 "abort": true, 00:03:12.320 "nvme_admin": false, 00:03:12.320 "nvme_io": false 00:03:12.320 }, 00:03:12.320 "memory_domains": [ 00:03:12.320 { 00:03:12.320 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:03:12.320 "dma_device_type": 2 00:03:12.320 } 00:03:12.320 ], 00:03:12.320 "driver_specific": {} 00:03:12.320 } 00:03:12.320 ]' 00:03:12.320 20:41:03 -- rpc/rpc.sh@17 -- # jq length 00:03:12.320 20:41:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:12.320 20:41:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:12.320 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.320 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 [2024-04-16 20:41:03.269831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:12.320 [2024-04-16 20:41:03.269888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:12.320 [2024-04-16 20:41:03.270481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d04f780 00:03:12.320 [2024-04-16 20:41:03.270527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:12.320 [2024-04-16 20:41:03.271294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:12.320 [2024-04-16 20:41:03.271329] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:12.320 Passthru0 00:03:12.320 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.320 20:41:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:12.320 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.320 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.320 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.320 20:41:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:12.320 { 00:03:12.320 "name": "Malloc0", 00:03:12.320 "aliases": [ 00:03:12.320 "a4208fdb-fc31-11ee-80f8-ef3e42bb1492" 00:03:12.320 ], 00:03:12.320 "product_name": "Malloc disk", 00:03:12.320 "block_size": 512, 00:03:12.320 "num_blocks": 16384, 00:03:12.320 "uuid": "a4208fdb-fc31-11ee-80f8-ef3e42bb1492", 00:03:12.320 "assigned_rate_limits": { 00:03:12.320 "rw_ios_per_sec": 0, 00:03:12.320 "rw_mbytes_per_sec": 0, 00:03:12.320 "r_mbytes_per_sec": 0, 00:03:12.320 "w_mbytes_per_sec": 0 00:03:12.320 }, 00:03:12.320 "claimed": true, 00:03:12.320 "claim_type": "exclusive_write", 00:03:12.320 "zoned": false, 00:03:12.320 "supported_io_types": { 00:03:12.320 "read": true, 00:03:12.320 "write": true, 00:03:12.320 "unmap": true, 00:03:12.320 "write_zeroes": true, 00:03:12.320 "flush": true, 00:03:12.320 "reset": true, 00:03:12.320 "compare": false, 00:03:12.320 "compare_and_write": false, 00:03:12.320 "abort": true, 00:03:12.320 "nvme_admin": false, 00:03:12.320 "nvme_io": false 00:03:12.320 }, 00:03:12.320 "memory_domains": [ 00:03:12.320 { 00:03:12.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.320 "dma_device_type": 2 00:03:12.320 } 00:03:12.320 ], 00:03:12.320 "driver_specific": {} 00:03:12.320 }, 00:03:12.320 { 00:03:12.320 "name": "Passthru0", 00:03:12.320 "aliases": [ 00:03:12.320 "94820424-b9f3-c059-87ee-ffab3be54ea0" 00:03:12.320 ], 00:03:12.320 "product_name": "passthru", 00:03:12.320 "block_size": 512, 00:03:12.320 "num_blocks": 16384, 00:03:12.320 "uuid": "94820424-b9f3-c059-87ee-ffab3be54ea0", 00:03:12.320 "assigned_rate_limits": { 00:03:12.320 "rw_ios_per_sec": 0, 00:03:12.320 "rw_mbytes_per_sec": 0, 00:03:12.320 "r_mbytes_per_sec": 0, 00:03:12.320 "w_mbytes_per_sec": 0 00:03:12.320 }, 00:03:12.320 "claimed": false, 00:03:12.320 "zoned": false, 00:03:12.320 "supported_io_types": { 00:03:12.320 "read": true, 00:03:12.320 "write": true, 
00:03:12.320 "unmap": true, 00:03:12.320 "write_zeroes": true, 00:03:12.320 "flush": true, 00:03:12.320 "reset": true, 00:03:12.320 "compare": false, 00:03:12.320 "compare_and_write": false, 00:03:12.320 "abort": true, 00:03:12.320 "nvme_admin": false, 00:03:12.321 "nvme_io": false 00:03:12.321 }, 00:03:12.321 "memory_domains": [ 00:03:12.321 { 00:03:12.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.321 "dma_device_type": 2 00:03:12.321 } 00:03:12.321 ], 00:03:12.321 "driver_specific": { 00:03:12.321 "passthru": { 00:03:12.321 "name": "Passthru0", 00:03:12.321 "base_bdev_name": "Malloc0" 00:03:12.321 } 00:03:12.321 } 00:03:12.321 } 00:03:12.321 ]' 00:03:12.321 20:41:03 -- rpc/rpc.sh@21 -- # jq length 00:03:12.321 20:41:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:12.321 20:41:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:12.321 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.321 20:41:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:12.321 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.321 20:41:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:12.321 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.321 20:41:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:12.321 20:41:03 -- rpc/rpc.sh@26 -- # jq length 00:03:12.321 20:41:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:12.321 00:03:12.321 real 0m0.172s 00:03:12.321 user 0m0.054s 00:03:12.321 sys 0m0.048s 00:03:12.321 20:41:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 ************************************ 00:03:12.321 END TEST rpc_integrity 00:03:12.321 ************************************ 00:03:12.321 20:41:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:12.321 20:41:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.321 20:41:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 ************************************ 00:03:12.321 START TEST rpc_plugins 00:03:12.321 ************************************ 00:03:12.321 20:41:03 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:03:12.321 20:41:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:12.321 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.321 20:41:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:12.321 20:41:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:12.321 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.321 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.321 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.321 20:41:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:12.321 { 00:03:12.321 "name": "Malloc1", 00:03:12.321 "aliases": [ 00:03:12.321 "a43ddb5c-fc31-11ee-80f8-ef3e42bb1492" 00:03:12.321 ], 00:03:12.321 "product_name": 
"Malloc disk", 00:03:12.321 "block_size": 4096, 00:03:12.321 "num_blocks": 256, 00:03:12.321 "uuid": "a43ddb5c-fc31-11ee-80f8-ef3e42bb1492", 00:03:12.321 "assigned_rate_limits": { 00:03:12.321 "rw_ios_per_sec": 0, 00:03:12.321 "rw_mbytes_per_sec": 0, 00:03:12.321 "r_mbytes_per_sec": 0, 00:03:12.321 "w_mbytes_per_sec": 0 00:03:12.321 }, 00:03:12.321 "claimed": false, 00:03:12.321 "zoned": false, 00:03:12.321 "supported_io_types": { 00:03:12.321 "read": true, 00:03:12.321 "write": true, 00:03:12.321 "unmap": true, 00:03:12.321 "write_zeroes": true, 00:03:12.321 "flush": true, 00:03:12.321 "reset": true, 00:03:12.321 "compare": false, 00:03:12.321 "compare_and_write": false, 00:03:12.321 "abort": true, 00:03:12.321 "nvme_admin": false, 00:03:12.321 "nvme_io": false 00:03:12.321 }, 00:03:12.321 "memory_domains": [ 00:03:12.321 { 00:03:12.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.321 "dma_device_type": 2 00:03:12.321 } 00:03:12.321 ], 00:03:12.321 "driver_specific": {} 00:03:12.321 } 00:03:12.321 ]' 00:03:12.581 20:41:03 -- rpc/rpc.sh@32 -- # jq length 00:03:12.581 20:41:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:12.581 20:41:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:12.581 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.581 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.581 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.581 20:41:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:12.581 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.581 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.581 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.581 20:41:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:12.581 20:41:03 -- rpc/rpc.sh@36 -- # jq length 00:03:12.581 20:41:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:12.581 00:03:12.581 real 0m0.077s 00:03:12.581 user 0m0.020s 00:03:12.581 sys 0m0.022s 00:03:12.581 20:41:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.581 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.581 ************************************ 00:03:12.581 END TEST rpc_plugins 00:03:12.581 ************************************ 00:03:12.581 20:41:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:12.581 20:41:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.581 20:41:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.581 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.581 ************************************ 00:03:12.581 START TEST rpc_trace_cmd_test 00:03:12.581 ************************************ 00:03:12.581 20:41:03 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:03:12.581 20:41:03 -- rpc/rpc.sh@40 -- # local info 00:03:12.581 20:41:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:12.581 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.581 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.581 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.581 20:41:03 -- rpc/rpc.sh@42 -- # info='{ 00:03:12.581 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45214", 00:03:12.581 "tpoint_group_mask": "0x8", 00:03:12.581 "iscsi_conn": { 00:03:12.581 "mask": "0x2", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "scsi": { 00:03:12.581 "mask": "0x4", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "bdev": { 00:03:12.581 "mask": "0x8", 
00:03:12.581 "tpoint_mask": "0xffffffffffffffff" 00:03:12.581 }, 00:03:12.581 "nvmf_rdma": { 00:03:12.581 "mask": "0x10", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "nvmf_tcp": { 00:03:12.581 "mask": "0x20", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "blobfs": { 00:03:12.581 "mask": "0x80", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "dsa": { 00:03:12.581 "mask": "0x200", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "thread": { 00:03:12.581 "mask": "0x400", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "nvme_pcie": { 00:03:12.581 "mask": "0x800", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "iaa": { 00:03:12.581 "mask": "0x1000", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "nvme_tcp": { 00:03:12.581 "mask": "0x2000", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 }, 00:03:12.581 "bdev_nvme": { 00:03:12.581 "mask": "0x4000", 00:03:12.581 "tpoint_mask": "0x0" 00:03:12.581 } 00:03:12.581 }' 00:03:12.581 20:41:03 -- rpc/rpc.sh@43 -- # jq length 00:03:12.581 20:41:03 -- rpc/rpc.sh@43 -- # '[' 14 -gt 2 ']' 00:03:12.581 20:41:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:12.581 20:41:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:12.582 20:41:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:12.582 20:41:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:12.582 20:41:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:12.582 20:41:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:12.582 20:41:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:12.582 20:41:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:12.582 00:03:12.582 real 0m0.062s 00:03:12.582 user 0m0.018s 00:03:12.582 sys 0m0.040s 00:03:12.582 20:41:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 ************************************ 00:03:12.582 END TEST rpc_trace_cmd_test 00:03:12.582 ************************************ 00:03:12.582 20:41:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:12.582 20:41:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:12.582 20:41:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:12.582 20:41:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.582 20:41:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 ************************************ 00:03:12.582 START TEST rpc_daemon_integrity 00:03:12.582 ************************************ 00:03:12.582 20:41:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:12.582 20:41:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:12.582 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.582 20:41:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:12.582 20:41:03 -- rpc/rpc.sh@13 -- # jq length 00:03:12.582 20:41:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:12.582 20:41:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:12.582 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.582 20:41:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:12.582 20:41:03 -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:03:12.582 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.582 20:41:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:12.582 { 00:03:12.582 "name": "Malloc2", 00:03:12.582 "aliases": [ 00:03:12.582 "a463169a-fc31-11ee-80f8-ef3e42bb1492" 00:03:12.582 ], 00:03:12.582 "product_name": "Malloc disk", 00:03:12.582 "block_size": 512, 00:03:12.582 "num_blocks": 16384, 00:03:12.582 "uuid": "a463169a-fc31-11ee-80f8-ef3e42bb1492", 00:03:12.582 "assigned_rate_limits": { 00:03:12.582 "rw_ios_per_sec": 0, 00:03:12.582 "rw_mbytes_per_sec": 0, 00:03:12.582 "r_mbytes_per_sec": 0, 00:03:12.582 "w_mbytes_per_sec": 0 00:03:12.582 }, 00:03:12.582 "claimed": false, 00:03:12.582 "zoned": false, 00:03:12.582 "supported_io_types": { 00:03:12.582 "read": true, 00:03:12.582 "write": true, 00:03:12.582 "unmap": true, 00:03:12.582 "write_zeroes": true, 00:03:12.582 "flush": true, 00:03:12.582 "reset": true, 00:03:12.582 "compare": false, 00:03:12.582 "compare_and_write": false, 00:03:12.582 "abort": true, 00:03:12.582 "nvme_admin": false, 00:03:12.582 "nvme_io": false 00:03:12.582 }, 00:03:12.582 "memory_domains": [ 00:03:12.582 { 00:03:12.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.582 "dma_device_type": 2 00:03:12.582 } 00:03:12.582 ], 00:03:12.582 "driver_specific": {} 00:03:12.582 } 00:03:12.582 ]' 00:03:12.582 20:41:03 -- rpc/rpc.sh@17 -- # jq length 00:03:12.582 20:41:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:12.582 20:41:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:12.582 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.582 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.582 [2024-04-16 20:41:03.701843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:12.582 [2024-04-16 20:41:03.701890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:12.582 [2024-04-16 20:41:03.701915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d04f780 00:03:12.582 [2024-04-16 20:41:03.701922] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:12.582 [2024-04-16 20:41:03.702478] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:12.582 [2024-04-16 20:41:03.702511] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:12.841 Passthru0 00:03:12.841 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.841 20:41:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:12.841 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.842 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.842 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.842 20:41:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:12.842 { 00:03:12.842 "name": "Malloc2", 00:03:12.842 "aliases": [ 00:03:12.842 "a463169a-fc31-11ee-80f8-ef3e42bb1492" 00:03:12.842 ], 00:03:12.842 "product_name": "Malloc disk", 00:03:12.842 "block_size": 512, 00:03:12.842 "num_blocks": 16384, 00:03:12.842 "uuid": "a463169a-fc31-11ee-80f8-ef3e42bb1492", 00:03:12.842 "assigned_rate_limits": { 00:03:12.842 "rw_ios_per_sec": 0, 00:03:12.842 "rw_mbytes_per_sec": 0, 00:03:12.842 "r_mbytes_per_sec": 0, 00:03:12.842 "w_mbytes_per_sec": 0 00:03:12.842 }, 00:03:12.842 "claimed": true, 00:03:12.842 
"claim_type": "exclusive_write", 00:03:12.842 "zoned": false, 00:03:12.842 "supported_io_types": { 00:03:12.842 "read": true, 00:03:12.842 "write": true, 00:03:12.842 "unmap": true, 00:03:12.842 "write_zeroes": true, 00:03:12.842 "flush": true, 00:03:12.842 "reset": true, 00:03:12.842 "compare": false, 00:03:12.842 "compare_and_write": false, 00:03:12.842 "abort": true, 00:03:12.842 "nvme_admin": false, 00:03:12.842 "nvme_io": false 00:03:12.842 }, 00:03:12.842 "memory_domains": [ 00:03:12.842 { 00:03:12.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.842 "dma_device_type": 2 00:03:12.842 } 00:03:12.842 ], 00:03:12.842 "driver_specific": {} 00:03:12.842 }, 00:03:12.842 { 00:03:12.842 "name": "Passthru0", 00:03:12.842 "aliases": [ 00:03:12.842 "aeb7ae50-303c-2458-8a84-6b1f46b1ad8a" 00:03:12.842 ], 00:03:12.842 "product_name": "passthru", 00:03:12.842 "block_size": 512, 00:03:12.842 "num_blocks": 16384, 00:03:12.842 "uuid": "aeb7ae50-303c-2458-8a84-6b1f46b1ad8a", 00:03:12.842 "assigned_rate_limits": { 00:03:12.842 "rw_ios_per_sec": 0, 00:03:12.842 "rw_mbytes_per_sec": 0, 00:03:12.842 "r_mbytes_per_sec": 0, 00:03:12.842 "w_mbytes_per_sec": 0 00:03:12.842 }, 00:03:12.842 "claimed": false, 00:03:12.842 "zoned": false, 00:03:12.842 "supported_io_types": { 00:03:12.842 "read": true, 00:03:12.842 "write": true, 00:03:12.842 "unmap": true, 00:03:12.842 "write_zeroes": true, 00:03:12.842 "flush": true, 00:03:12.842 "reset": true, 00:03:12.842 "compare": false, 00:03:12.842 "compare_and_write": false, 00:03:12.842 "abort": true, 00:03:12.842 "nvme_admin": false, 00:03:12.842 "nvme_io": false 00:03:12.842 }, 00:03:12.842 "memory_domains": [ 00:03:12.842 { 00:03:12.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:12.842 "dma_device_type": 2 00:03:12.842 } 00:03:12.842 ], 00:03:12.842 "driver_specific": { 00:03:12.842 "passthru": { 00:03:12.842 "name": "Passthru0", 00:03:12.842 "base_bdev_name": "Malloc2" 00:03:12.842 } 00:03:12.842 } 00:03:12.842 } 00:03:12.842 ]' 00:03:12.842 20:41:03 -- rpc/rpc.sh@21 -- # jq length 00:03:12.842 20:41:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:12.842 20:41:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:12.842 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.842 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.842 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.842 20:41:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:12.842 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.842 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.842 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.842 20:41:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:12.842 20:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:12.842 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.842 20:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:12.842 20:41:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:12.842 20:41:03 -- rpc/rpc.sh@26 -- # jq length 00:03:12.842 20:41:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:12.842 00:03:12.842 real 0m0.151s 00:03:12.842 user 0m0.046s 00:03:12.842 sys 0m0.036s 00:03:12.842 20:41:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.842 20:41:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.842 ************************************ 00:03:12.842 END TEST rpc_daemon_integrity 00:03:12.842 ************************************ 00:03:12.842 20:41:03 -- 
rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:12.842 20:41:03 -- rpc/rpc.sh@84 -- # killprocess 45214 00:03:12.842 20:41:03 -- common/autotest_common.sh@926 -- # '[' -z 45214 ']' 00:03:12.842 20:41:03 -- common/autotest_common.sh@930 -- # kill -0 45214 00:03:12.842 20:41:03 -- common/autotest_common.sh@931 -- # uname 00:03:12.842 20:41:03 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:12.842 20:41:03 -- common/autotest_common.sh@934 -- # ps -c -o command 45214 00:03:12.842 20:41:03 -- common/autotest_common.sh@934 -- # tail -1 00:03:12.842 20:41:03 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:12.842 20:41:03 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:12.842 killing process with pid 45214 00:03:12.842 20:41:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45214' 00:03:12.842 20:41:03 -- common/autotest_common.sh@945 -- # kill 45214 00:03:12.842 20:41:03 -- common/autotest_common.sh@950 -- # wait 45214 00:03:13.100 00:03:13.101 real 0m2.018s 00:03:13.101 user 0m2.037s 00:03:13.101 sys 0m0.955s 00:03:13.101 20:41:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.101 20:41:04 -- common/autotest_common.sh@10 -- # set +x 00:03:13.101 ************************************ 00:03:13.101 END TEST rpc 00:03:13.101 ************************************ 00:03:13.101 20:41:04 -- spdk/autotest.sh@177 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:13.101 20:41:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:13.101 20:41:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:13.101 20:41:04 -- common/autotest_common.sh@10 -- # set +x 00:03:13.101 ************************************ 00:03:13.101 START TEST rpc_client 00:03:13.101 ************************************ 00:03:13.101 20:41:04 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:13.358 * Looking for test storage... 
00:03:13.358 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client
00:03:13.358 20:41:04 -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:03:13.358 OK
00:03:13.358 20:41:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:03:13.358
00:03:13.358 real 0m0.206s
00:03:13.358 user 0m0.173s
00:03:13.358 sys 0m0.102s
00:03:13.358 20:41:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.358 20:41:04 -- common/autotest_common.sh@10 -- # set +x
00:03:13.358 ************************************
00:03:13.358 END TEST rpc_client
00:03:13.358 ************************************
00:03:13.358 20:41:04 -- spdk/autotest.sh@178 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:03:13.358 20:41:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:13.358 20:41:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:13.358 20:41:04 -- common/autotest_common.sh@10 -- # set +x
00:03:13.358 ************************************
00:03:13.358 START TEST json_config
00:03:13.358 ************************************
00:03:13.358 20:41:04 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:03:13.616 20:41:04 -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:13.616 20:41:04 -- nvmf/common.sh@7 -- # uname -s
00:03:13.616 20:41:04 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:03:13.616 20:41:04 -- nvmf/common.sh@7 -- # return 0
00:03:13.616 20:41:04 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:03:13.616 20:41:04 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:03:13.616 20:41:04 -- json_config/json_config.sh@30 -- # declare -A app_pid
00:03:13.616 20:41:04 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:03:13.616 20:41:04 -- json_config/json_config.sh@31 -- # declare -A app_socket
00:03:13.616 20:41:04 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:03:13.616 20:41:04 -- json_config/json_config.sh@32 -- # declare -A app_params
00:03:13.616 20:41:04 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:03:13.616 20:41:04 -- json_config/json_config.sh@33 -- # declare -A configs_path
00:03:13.616 20:41:04 -- json_config/json_config.sh@43 -- # last_event_id=0
00:03:13.616 20:41:04 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:03:13.616 INFO: JSON configuration test init
00:03:13.616 20:41:04 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:03:13.616 20:41:04 -- json_config/json_config.sh@420 -- # json_config_test_init
00:03:13.616 20:41:04 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:03:13.616 20:41:04 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:13.616 20:41:04 -- common/autotest_common.sh@10 -- # set +x
00:03:13.616 20:41:04 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:03:13.616 20:41:04 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:13.616 20:41:04 -- common/autotest_common.sh@10 -- # set +x
00:03:13.616 20:41:04 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:03:13.616 20:41:04 -- json_config/json_config.sh@98 -- # local app=target
00:03:13.616 20:41:04 -- json_config/json_config.sh@99 -- # shift
00:03:13.616 20:41:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:03:13.616 20:41:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:03:13.616 20:41:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=45421
00:03:13.616 20:41:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:03:13.616 Waiting for target to run...
00:03:13.616 20:41:04 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:03:13.616 20:41:04 -- json_config/json_config.sh@114 -- # waitforlisten 45421 /var/tmp/spdk_tgt.sock
00:03:13.616 20:41:04 -- common/autotest_common.sh@819 -- # '[' -z 45421 ']'
00:03:13.616 20:41:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:13.616 20:41:04 -- common/autotest_common.sh@824 -- # local max_retries=100
00:03:13.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:13.616 20:41:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:13.616 20:41:04 -- common/autotest_common.sh@828 -- # xtrace_disable
00:03:13.616 20:41:04 -- common/autotest_common.sh@10 -- # set +x
00:03:13.617 [2024-04-16 20:41:04.564211] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:03:13.617 [2024-04-16 20:41:04.564666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:03:13.874 EAL: TSC is not safe to use in SMP mode
00:03:13.874 EAL: TSC is not invariant
00:03:13.874 [2024-04-16 20:41:04.797492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:13.874 [2024-04-16 20:41:04.889726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:03:13.874 [2024-04-16 20:41:04.889840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:14.441 20:41:05 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:03:14.441 20:41:05 -- common/autotest_common.sh@852 -- # return 0
00:03:14.441
00:03:14.441 20:41:05 -- json_config/json_config.sh@115 -- # echo ''
00:03:14.441 20:41:05 -- json_config/json_config.sh@322 -- # create_accel_config
00:03:14.441 20:41:05 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:03:14.441 20:41:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:14.441 20:41:05 -- common/autotest_common.sh@10 -- # set +x
00:03:14.441 20:41:05 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:03:14.441 20:41:05 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:03:14.441 20:41:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:03:14.441 20:41:05 -- common/autotest_common.sh@10 -- # set +x
00:03:14.441 20:41:05 -- json_config/json_config.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:03:14.441 20:41:05 -- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:03:14.441 20:41:05 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:03:14.700 [2024-04-16 20:41:05.807127] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:03:14.958 20:41:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:03:14.958 20:41:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:03:14.958 20:41:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:14.958 20:41:05 -- common/autotest_common.sh@10 -- # set +x
00:03:14.958 20:41:05 -- json_config/json_config.sh@48 -- # local ret=0
00:03:14.958 20:41:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:03:14.958 20:41:05 -- json_config/json_config.sh@49 -- # local enabled_types
00:03:14.958 20:41:05 -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:03:14.958 20:41:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:03:14.958 20:41:05 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:03:15.219 20:41:06 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:03:15.219 20:41:06 -- json_config/json_config.sh@51 -- # local get_types
00:03:15.219 20:41:06 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:03:15.219 20:41:06 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:03:15.219 20:41:06 -- common/autotest_common.sh@718 -- # xtrace_disable
00:03:15.219 20:41:06 -- common/autotest_common.sh@10 -- # set +x
00:03:15.219 20:41:06 -- json_config/json_config.sh@58 -- # return 0
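Note: the notification-type check traced above reduces to two plain JSON-RPC calls against the target socket. A minimal manual sketch, assuming the same socket path and repo layout as this run (the commands and jq filters are copied from the traced test, not an addition to it):

  # List the notification types the running target emits (bdev_register/bdev_unregister here)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'

  # Replay every notification recorded since event id 0 as "type:ctx:id" triples
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'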
00:03:15.219 20:41:06 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]]
00:03:15.219 20:41:06 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config
00:03:15.219 20:41:06 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config
00:03:15.219 20:41:06 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:15.219 20:41:06 -- common/autotest_common.sh@10 -- # set +x
00:03:15.219 20:41:06 -- json_config/json_config.sh@160 -- # expected_notifications=()
00:03:15.219 20:41:06 -- json_config/json_config.sh@160 -- # local expected_notifications
00:03:15.219 20:41:06 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications))
00:03:15.219 20:41:06 -- json_config/json_config.sh@164 -- # get_notifications
00:03:15.219 20:41:06 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:03:15.219 20:41:06 -- json_config/json_config.sh@64 -- # IFS=:
00:03:15.219 20:41:06 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:15.219 20:41:06 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:03:15.219 20:41:06 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:03:15.219 20:41:06 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:03:15.482 20:41:06 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:03:15.482 20:41:06 -- json_config/json_config.sh@64 -- # IFS=:
00:03:15.482 20:41:06 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:15.482 20:41:06 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]]
00:03:15.482 20:41:06 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1
00:03:15.482 20:41:06 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:03:15.482 20:41:06 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:03:15.740 Nvme0n1p0 Nvme0n1p1
00:03:15.740 20:41:06 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3
00:03:15.740 20:41:06 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:03:15.740 [2024-04-16 20:41:06.716073] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:03:15.740 [2024-04-16 20:41:06.716147] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:03:15.740
00:03:15.740 20:41:06 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:03:15.740 20:41:06 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:03:15.998 Malloc3
00:03:15.998 20:41:06 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:03:15.998 20:41:06 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:03:15.998 [2024-04-16 20:41:07.116084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:03:15.998 [2024-04-16 20:41:07.116135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:15.998 [2024-04-16 20:41:07.116162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be0ff00
00:03:15.998 [2024-04-16 20:41:07.116168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:15.998 [2024-04-16 20:41:07.116707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:15.998 [2024-04-16 20:41:07.116736] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:03:15.998 PTBdevFromMalloc3
00:03:16.256 20:41:07 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512
00:03:16.256 20:41:07 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:03:16.256 Null0
00:03:16.256 20:41:07 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:03:16.256 20:41:07 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:03:16.514 Malloc0
00:03:16.514 20:41:07 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:03:16.514 20:41:07 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:03:16.771 Malloc1
00:03:16.771 20:41:07 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:03:16.771 20:41:07 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:03:17.337 102400+0 records in
00:03:17.337 102400+0 records out
00:03:17.337 104857600 bytes transferred in 0.435436 secs (240810817 bytes/sec)
00:03:17.337 20:41:08 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:03:17.337 20:41:08 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:03:17.337 aio_disk
00:03:17.337 20:41:08 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk)
00:03:17.337 20:41:08 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:03:17.337 20:41:08 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:03:17.595 a752c8bf-fc31-11ee-80f8-ef3e42bb1492
00:03:17.595 20:41:08 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:03:17.595 20:41:08 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:03:17.595 20:41:08 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:03:17.853 20:41:08 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:03:17.853 20:41:08 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:03:18.111 20:41:08 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:03:18.111 20:41:08 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:03:18.112 20:41:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:03:18.112 20:41:09 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:03:18.369 20:41:09 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]]
00:03:18.369 20:41:09 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]]
00:03:18.369 20:41:09 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492
00:03:18.370 20:41:09 -- json_config/json_config.sh@70 -- # local events_to_check
00:03:18.370 20:41:09 -- json_config/json_config.sh@71 -- # local recorded_events
00:03:18.370 20:41:09 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:03:18.370 20:41:09 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492
00:03:18.370 20:41:09 -- json_config/json_config.sh@74 -- # sort
00:03:18.370 20:41:09 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort))
00:03:18.370 20:41:09 -- json_config/json_config.sh@75 -- # sort
00:03:18.370 20:41:09 -- json_config/json_config.sh@75 -- # get_notifications
00:03:18.370 20:41:09 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:03:18.370 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.370 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.370 20:41:09 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:03:18.370 20:41:09 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:03:18.370 20:41:09 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@65 -- # echo bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # IFS=:
00:03:18.628 20:41:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:03:18.628 20:41:09 -- json_config/json_config.sh@77 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\7\7\0\1\4\9\0\-\f\c\3\1\-\1\1\e\e\-\8\0\f\8\-\e\f\3\e\4\2\b\b\1\4\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\7\8\e\9\9\b\3\-\f\c\3\1\-\1\1\e\e\-\8\0\f\8\-\e\f\3\e\4\2\b\b\1\4\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\7\a\e\5\7\a\1\-\f\c\3\1\-\1\1\e\e\-\8\0\f\8\-\e\f\3\e\4\2\b\b\1\4\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\7\c\e\b\0\f\9\-\f\c\3\1\-\1\1\e\e\-\8\0\f\8\-\e\f\3\e\4\2\b\b\1\4\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]]
00:03:18.629 20:41:09 -- json_config/json_config.sh@89 -- # cat
00:03:18.629 20:41:09 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492 bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492 bdev_register:aio_disk
00:03:18.629 Expected events matched:
00:03:18.629 bdev_register:Malloc0
00:03:18.629 bdev_register:Malloc0p0
00:03:18.629 bdev_register:Malloc0p1
00:03:18.629 bdev_register:Malloc0p2
00:03:18.629 bdev_register:Malloc1
00:03:18.629 bdev_register:Malloc3
00:03:18.629 bdev_register:Null0
00:03:18.629 bdev_register:Nvme0n1
00:03:18.629 bdev_register:Nvme0n1p0
00:03:18.629 bdev_register:Nvme0n1p1
00:03:18.629 bdev_register:PTBdevFromMalloc3
00:03:18.629 bdev_register:a7701490-fc31-11ee-80f8-ef3e42bb1492
00:03:18.629 bdev_register:a78e99b3-fc31-11ee-80f8-ef3e42bb1492
00:03:18.629 bdev_register:a7ae57a1-fc31-11ee-80f8-ef3e42bb1492
00:03:18.629 bdev_register:a7ceb0f9-fc31-11ee-80f8-ef3e42bb1492
00:03:18.629 bdev_register:aio_disk
00:03:18.629 20:41:09 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config
00:03:18.629 20:41:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:03:18.629 20:41:09 -- common/autotest_common.sh@10 -- # set +x
00:03:18.629 20:41:09 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:03:18.629 20:41:09 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:03:18.629 20:41:09 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]]
00:03:18.629 20:41:09 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:03:18.629 20:41:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:03:18.629 20:41:09 -- common/autotest_common.sh@10 -- # set +x
00:03:18.629 20:41:09 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:03:18.629 20:41:09 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:18.629 20:41:09 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:18.887 MallocBdevForConfigChangeCheck
00:03:18.887 20:41:09 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:03:18.887 20:41:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:03:18.887 20:41:09 -- common/autotest_common.sh@10 -- # set +x
00:03:18.887 20:41:09 -- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:03:18.887 20:41:09 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:19.145 INFO: shutting down applications...
00:03:19.145 20:41:10 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
00:03:19.145 20:41:10 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:03:19.145 20:41:10 -- json_config/json_config.sh@431 -- # json_config_clear target
00:03:19.145 20:41:10 -- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:03:19.145 20:41:10 -- json_config/json_config.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:19.404 [2024-04-16 20:41:10.412233] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:03:19.659 Calling clear_iscsi_subsystem
00:03:19.659 Calling clear_nvmf_subsystem
00:03:19.659 Calling clear_bdev_subsystem
00:03:19.659 Calling clear_accel_subsystem
00:03:19.659 Calling clear_sock_subsystem
00:03:19.659 Calling clear_scheduler_subsystem
00:03:19.659 Calling clear_iobuf_subsystem
00:03:19.659 Calling clear_vmd_subsystem
00:03:19.659 20:41:10 -- json_config/json_config.sh@390 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:03:19.659 20:41:10 -- json_config/json_config.sh@396 -- # count=100
00:03:19.659 20:41:10 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:03:19.659 20:41:10 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:19.659 20:41:10 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:19.659 20:41:10 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:03:19.917 20:41:10 -- json_config/json_config.sh@398 -- # break
00:03:19.917 20:41:10 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:03:19.917 20:41:10 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:03:19.917 20:41:10 -- json_config/json_config.sh@120 -- # local app=target
00:03:19.917 20:41:10 -- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:03:19.917 20:41:10 -- json_config/json_config.sh@124 -- # [[ -n 45421 ]]
00:03:19.917 20:41:10 -- json_config/json_config.sh@127 -- # kill -SIGINT 45421
00:03:19.917 20:41:10 -- json_config/json_config.sh@129 -- # (( i = 0 ))
00:03:19.917 20:41:10 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:03:19.917 20:41:10 -- json_config/json_config.sh@130 -- # kill -0 45421
00:03:19.917 20:41:10 -- json_config/json_config.sh@134 -- # sleep 0.5
00:03:20.484 20:41:11 -- json_config/json_config.sh@129 -- # (( i++ ))
00:03:20.484 20:41:11 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:03:20.484 20:41:11 -- json_config/json_config.sh@130 -- # kill -0 45421
00:03:20.484 20:41:11 -- json_config/json_config.sh@131 -- # app_pid[$app]=
00:03:20.484 20:41:11 -- json_config/json_config.sh@132 -- # break
00:03:20.484 20:41:11 -- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:03:20.484 SPDK target shutdown done
00:03:20.484 20:41:11 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
00:03:20.484 INFO: relaunching applications...
00:03:20.484 20:41:11 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
00:03:20.484 20:41:11 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:03:20.484 20:41:11 -- json_config/json_config.sh@98 -- # local app=target
00:03:20.484 20:41:11 -- json_config/json_config.sh@99 -- # shift
00:03:20.484 20:41:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:03:20.484 20:41:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:03:20.484 20:41:11 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:03:20.484 20:41:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:03:20.484 20:41:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:03:20.484 20:41:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=45579
00:03:20.484 20:41:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:03:20.484 Waiting for target to run...
00:03:20.484 20:41:11 -- json_config/json_config.sh@114 -- # waitforlisten 45579 /var/tmp/spdk_tgt.sock
00:03:20.485 20:41:11 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:03:20.485 20:41:11 -- common/autotest_common.sh@819 -- # '[' -z 45579 ']'
00:03:20.485 20:41:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:20.485 20:41:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:03:20.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:20.485 20:41:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:20.485 20:41:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:03:20.485 20:41:11 -- common/autotest_common.sh@10 -- # set +x
00:03:20.485 [2024-04-16 20:41:11.429643] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:03:20.485 [2024-04-16 20:41:11.430038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:03:20.743 EAL: TSC is not safe to use in SMP mode
00:03:20.743 EAL: TSC is not invariant
00:03:20.743 [2024-04-16 20:41:11.661502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:20.743 [2024-04-16 20:41:11.753370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:03:20.743 [2024-04-16 20:41:11.753462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:21.002 [2024-04-16 20:41:11.883159] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:03:21.002 [2024-04-16 20:41:11.883222] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:03:21.002 [2024-04-16 20:41:11.891146] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:03:21.002 [2024-04-16 20:41:11.891167] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:03:21.002 [2024-04-16 20:41:11.899164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:03:21.002 [2024-04-16 20:41:11.899184] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:03:21.002 [2024-04-16 20:41:11.899191] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:03:21.002 [2024-04-16 20:41:11.907160] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:03:21.002 [2024-04-16 20:41:11.972510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:03:21.002 [2024-04-16 20:41:11.972553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:21.002 [2024-04-16 20:41:11.972568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ae49500
00:03:21.003 [2024-04-16 20:41:11.972574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:21.003 [2024-04-16 20:41:11.972619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:21.003 [2024-04-16 20:41:11.972642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:03:21.271 20:41:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:03:21.271 20:41:12 -- common/autotest_common.sh@852 -- # return 0
00:03:21.271
00:03:21.271 20:41:12 -- json_config/json_config.sh@115 -- # echo ''
00:03:21.271 20:41:12 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]]
00:03:21.271 INFO: Checking if target configuration is the same...
00:03:21.271 20:41:12 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...'
00:03:21.271 20:41:12 -- json_config/json_config.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.dAb3iI /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:03:21.271 + '[' 2 -ne 2 ']'
00:03:21.271 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:03:21.271 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../..
00:03:21.271 + rootdir=/usr/home/vagrant/spdk_repo/spdk
00:03:21.271 +++ basename /tmp//sh-np.dAb3iI
00:03:21.271 ++ mktemp /tmp/sh-np.dAb3iI.XXX
00:03:21.271 + tmp_file_1=/tmp/sh-np.dAb3iI.nfp
00:03:21.271 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:03:21.271 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:03:21.271 + tmp_file_2=/tmp/spdk_tgt_config.json.JwL
00:03:21.271 + ret=0
00:03:21.271 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:03:21.271 20:41:12 -- json_config/json_config.sh@441 -- # tgt_rpc save_config
00:03:21.271 20:41:12 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:21.568 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:03:21.827 + diff -u /tmp/sh-np.dAb3iI.nfp /tmp/spdk_tgt_config.json.JwL
00:03:21.827 INFO: JSON config files are the same
00:03:21.827 + echo 'INFO: JSON config files are the same'
00:03:21.827 + rm /tmp/sh-np.dAb3iI.nfp /tmp/spdk_tgt_config.json.JwL
00:03:21.827 + exit 0
00:03:21.827 20:41:12 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:03:21.827 INFO: changing configuration and checking if this can be detected...
00:03:21.827 20:41:12 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:03:21.827 20:41:12 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:21.827 20:41:12 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:21.827 20:41:12 -- json_config/json_config.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.Oa8RCd /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:03:21.827 + '[' 2 -ne 2 ']'
00:03:21.827 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:03:21.827 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../..
00:03:21.827 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:21.827 +++ basename /tmp//sh-np.Oa8RCd 00:03:21.827 ++ mktemp /tmp/sh-np.Oa8RCd.XXX 00:03:21.827 + tmp_file_1=/tmp/sh-np.Oa8RCd.x9m 00:03:21.827 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:22.086 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:22.086 + tmp_file_2=/tmp/spdk_tgt_config.json.chD 00:03:22.086 + ret=0 00:03:22.086 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:22.086 20:41:12 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:22.086 20:41:12 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:22.346 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:22.346 + diff -u /tmp/sh-np.Oa8RCd.x9m /tmp/spdk_tgt_config.json.chD 00:03:22.346 + ret=1 00:03:22.346 + echo '=== Start of file: /tmp/sh-np.Oa8RCd.x9m ===' 00:03:22.346 + cat /tmp/sh-np.Oa8RCd.x9m 00:03:22.346 + echo '=== End of file: /tmp/sh-np.Oa8RCd.x9m ===' 00:03:22.346 + echo '' 00:03:22.346 + echo '=== Start of file: /tmp/spdk_tgt_config.json.chD ===' 00:03:22.346 + cat /tmp/spdk_tgt_config.json.chD 00:03:22.346 + echo '=== End of file: /tmp/spdk_tgt_config.json.chD ===' 00:03:22.346 + echo '' 00:03:22.346 + rm /tmp/sh-np.Oa8RCd.x9m /tmp/spdk_tgt_config.json.chD 00:03:22.346 + exit 1 00:03:22.346 20:41:13 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:03:22.346 INFO: configuration change detected. 00:03:22.346 20:41:13 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:22.346 20:41:13 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:22.346 20:41:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:22.346 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:03:22.346 20:41:13 -- json_config/json_config.sh@360 -- # local ret=0 00:03:22.346 20:41:13 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:22.346 20:41:13 -- json_config/json_config.sh@370 -- # [[ -n 45579 ]] 00:03:22.346 20:41:13 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:22.346 20:41:13 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:22.346 20:41:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:22.346 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:03:22.346 20:41:13 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:03:22.346 20:41:13 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:03:22.346 20:41:13 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:03:22.604 20:41:13 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:03:22.604 20:41:13 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:03:22.604 20:41:13 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:03:22.604 20:41:13 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:03:22.862 20:41:13 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:03:22.862 20:41:13 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:03:23.120 20:41:14 -- json_config/json_config.sh@246 -- # uname -s 00:03:23.120 20:41:14 -- json_config/json_config.sh@246 -- # [[ FreeBSD = Linux ]] 00:03:23.120 20:41:14 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:23.120 20:41:14 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:23.120 20:41:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:23.120 20:41:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.120 20:41:14 -- json_config/json_config.sh@376 -- # killprocess 45579 00:03:23.120 20:41:14 -- common/autotest_common.sh@926 -- # '[' -z 45579 ']' 00:03:23.120 20:41:14 -- common/autotest_common.sh@930 -- # kill -0 45579 00:03:23.120 20:41:14 -- common/autotest_common.sh@931 -- # uname 00:03:23.120 20:41:14 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:23.120 20:41:14 -- common/autotest_common.sh@934 -- # ps -c -o command 45579 00:03:23.120 20:41:14 -- common/autotest_common.sh@934 -- # tail -1 00:03:23.120 20:41:14 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:23.120 20:41:14 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:23.120 killing process with pid 45579 00:03:23.120 20:41:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45579' 00:03:23.120 20:41:14 -- common/autotest_common.sh@945 -- # kill 45579 00:03:23.120 20:41:14 -- common/autotest_common.sh@950 -- # wait 45579 00:03:23.379 20:41:14 -- json_config/json_config.sh@379 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:23.379 20:41:14 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:23.379 20:41:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:23.379 20:41:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.379 INFO: Success 00:03:23.379 20:41:14 -- json_config/json_config.sh@381 -- # return 0 00:03:23.379 20:41:14 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:23.379 00:03:23.379 real 0m10.014s 00:03:23.379 user 0m15.032s 00:03:23.379 sys 0m1.893s 00:03:23.379 20:41:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.379 20:41:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.379 ************************************ 00:03:23.379 END TEST json_config 00:03:23.379 ************************************ 00:03:23.379 20:41:14 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:23.379 20:41:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.379 20:41:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.379 20:41:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.379 ************************************ 00:03:23.379 START TEST json_config_extra_key 00:03:23.379 ************************************ 00:03:23.379 20:41:14 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:23.638 20:41:14 -- nvmf/common.sh@7 -- # uname -s 00:03:23.638 20:41:14 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:23.638 20:41:14 -- nvmf/common.sh@7 -- # return 0 00:03:23.638 20:41:14 -- 
json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:23.638 INFO: launching applications... 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=45695 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:23.638 Waiting for target to run... 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 45695 /var/tmp/spdk_tgt.sock 00:03:23.638 20:41:14 -- json_config/json_config_extra_key.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:23.638 20:41:14 -- common/autotest_common.sh@819 -- # '[' -z 45695 ']' 00:03:23.638 20:41:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:23.638 20:41:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:23.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:23.638 20:41:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:23.638 20:41:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:23.638 20:41:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.638 [2024-04-16 20:41:14.618221] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
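The waitforlisten step above is a helper from autotest_common.sh whose body is not shown in this log; it blocks until the freshly launched spdk_tgt accepts connections on /var/tmp/spdk_tgt.sock, retrying up to the max_retries=100 default before any RPC is issued. A sketch of the same readiness probe, with the socket path and retry count taken from the log and everything else assumed:

    import socket
    import sys
    import time

    def wait_for_listen(path="/var/tmp/spdk_tgt.sock", retries=100, delay=0.1):
        # Poll the UNIX domain socket until connect() succeeds; a missing or
        # refusing socket just means the target has not bound it yet.
        for _ in range(retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                time.sleep(delay)
            finally:
                s.close()
        return False

    if not wait_for_listen():
        sys.exit("target never started listening on the RPC socket")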
00:03:23.638 [2024-04-16 20:41:14.618471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:23.898 EAL: TSC is not safe to use in SMP mode 00:03:23.898 EAL: TSC is not invariant 00:03:23.898 [2024-04-16 20:41:14.860143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.898 [2024-04-16 20:41:14.951521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:23.898 [2024-04-16 20:41:14.951628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.467 20:41:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:24.467 20:41:15 -- common/autotest_common.sh@852 -- # return 0 00:03:24.467 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:24.467 INFO: shutting down applications... 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 45695 ]] 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 45695 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45695 00:03:24.467 20:41:15 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45695 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@52 -- # break 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:03:25.036 SPDK target shutdown done 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:03:25.036 Success 00:03:25.036 20:41:16 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:03:25.036 00:03:25.036 real 0m1.660s 00:03:25.036 user 0m1.299s 00:03:25.036 sys 0m0.429s 00:03:25.036 20:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.036 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:03:25.036 ************************************ 00:03:25.036 END TEST json_config_extra_key 00:03:25.036 ************************************ 00:03:25.036 20:41:16 -- spdk/autotest.sh@180 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:25.036 20:41:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.036 20:41:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.036 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:03:25.036 ************************************ 00:03:25.036 START TEST alias_rpc 00:03:25.036 ************************************ 00:03:25.036 20:41:16 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:25.294 * Looking for test storage... 00:03:25.294 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:25.294 20:41:16 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:25.294 20:41:16 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=45744 00:03:25.294 20:41:16 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 45744 00:03:25.294 20:41:16 -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:25.294 20:41:16 -- common/autotest_common.sh@819 -- # '[' -z 45744 ']' 00:03:25.294 20:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:25.294 20:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:25.294 20:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:25.294 20:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:25.294 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:03:25.294 [2024-04-16 20:41:16.321453] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:25.294 [2024-04-16 20:41:16.321837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:25.862 EAL: TSC is not safe to use in SMP mode 00:03:25.862 EAL: TSC is not invariant 00:03:25.862 [2024-04-16 20:41:16.753320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.862 [2024-04-16 20:41:16.844670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:25.862 [2024-04-16 20:41:16.844772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.121 20:41:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:26.121 20:41:17 -- common/autotest_common.sh@852 -- # return 0 00:03:26.121 20:41:17 -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:03:26.381 20:41:17 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 45744 00:03:26.381 20:41:17 -- common/autotest_common.sh@926 -- # '[' -z 45744 ']' 00:03:26.381 20:41:17 -- common/autotest_common.sh@930 -- # kill -0 45744 00:03:26.381 20:41:17 -- common/autotest_common.sh@931 -- # uname 00:03:26.381 20:41:17 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:26.381 20:41:17 -- common/autotest_common.sh@934 -- # tail -1 00:03:26.381 20:41:17 -- common/autotest_common.sh@934 -- # ps -c -o command 45744 00:03:26.381 20:41:17 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:26.381 20:41:17 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:26.381 killing process with pid 45744 00:03:26.381 20:41:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45744' 00:03:26.381 20:41:17 -- common/autotest_common.sh@945 -- # kill 45744 00:03:26.381 20:41:17 -- common/autotest_common.sh@950 -- # wait 45744 00:03:26.640 00:03:26.640 real 0m1.542s 00:03:26.640 user 0m1.497s 00:03:26.640 sys 0m0.694s 00:03:26.640 20:41:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.641 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:03:26.641 ************************************ 00:03:26.641 END TEST alias_rpc 00:03:26.641 
************************************ 00:03:26.641 20:41:17 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:03:26.641 20:41:17 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:26.641 20:41:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.641 20:41:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.641 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:03:26.641 ************************************ 00:03:26.641 START TEST spdkcli_tcp 00:03:26.641 ************************************ 00:03:26.641 20:41:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:26.900 * Looking for test storage... 00:03:26.900 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:03:26.900 20:41:17 -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:03:26.900 20:41:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:03:26.900 20:41:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:03:26.900 20:41:17 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:26.900 20:41:17 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:26.900 20:41:17 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:26.900 20:41:17 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:26.901 20:41:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:26.901 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:03:26.901 20:41:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=45800 00:03:26.901 20:41:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 45800 00:03:26.901 20:41:17 -- common/autotest_common.sh@819 -- # '[' -z 45800 ']' 00:03:26.901 20:41:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.901 20:41:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:26.901 20:41:17 -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:26.901 20:41:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.901 20:41:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:26.901 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:03:26.901 [2024-04-16 20:41:17.906608] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:03:26.901 [2024-04-16 20:41:17.906898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:27.469 EAL: TSC is not safe to use in SMP mode 00:03:27.469 EAL: TSC is not invariant 00:03:27.469 [2024-04-16 20:41:18.337358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:27.469 [2024-04-16 20:41:18.429685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:27.469 [2024-04-16 20:41:18.429860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.469 [2024-04-16 20:41:18.429836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:27.729 20:41:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:27.729 20:41:18 -- common/autotest_common.sh@852 -- # return 0 00:03:27.729 20:41:18 -- spdkcli/tcp.sh@31 -- # socat_pid=45804 00:03:27.729 20:41:18 -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:27.729 20:41:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:27.994 [ 00:03:27.994 "spdk_get_version", 00:03:27.994 "rpc_get_methods", 00:03:27.994 "env_dpdk_get_mem_stats", 00:03:27.994 "trace_get_info", 00:03:27.994 "trace_get_tpoint_group_mask", 00:03:27.994 "trace_disable_tpoint_group", 00:03:27.994 "trace_enable_tpoint_group", 00:03:27.994 "trace_clear_tpoint_mask", 00:03:27.994 "trace_set_tpoint_mask", 00:03:27.994 "notify_get_notifications", 00:03:27.994 "notify_get_types", 00:03:27.994 "accel_get_stats", 00:03:27.994 "accel_set_options", 00:03:27.994 "accel_set_driver", 00:03:27.994 "accel_crypto_key_destroy", 00:03:27.994 "accel_crypto_keys_get", 00:03:27.994 "accel_crypto_key_create", 00:03:27.994 "accel_assign_opc", 00:03:27.994 "accel_get_module_info", 00:03:27.994 "accel_get_opc_assignments", 00:03:27.994 "bdev_get_histogram", 00:03:27.994 "bdev_enable_histogram", 00:03:27.994 "bdev_set_qos_limit", 00:03:27.994 "bdev_set_qd_sampling_period", 00:03:27.994 "bdev_get_bdevs", 00:03:27.994 "bdev_reset_iostat", 00:03:27.994 "bdev_get_iostat", 00:03:27.994 "bdev_examine", 00:03:27.994 "bdev_wait_for_examine", 00:03:27.994 "bdev_set_options", 00:03:27.994 "sock_set_default_impl", 00:03:27.994 "sock_impl_set_options", 00:03:27.994 "sock_impl_get_options", 00:03:27.994 "framework_get_pci_devices", 00:03:27.994 "framework_get_config", 00:03:27.994 "framework_get_subsystems", 00:03:27.994 "thread_set_cpumask", 00:03:27.994 "framework_get_scheduler", 00:03:27.994 "framework_set_scheduler", 00:03:27.994 "framework_get_reactors", 00:03:27.994 "thread_get_io_channels", 00:03:27.994 "thread_get_pollers", 00:03:27.994 "thread_get_stats", 00:03:27.994 "framework_monitor_context_switch", 00:03:27.994 "spdk_kill_instance", 00:03:27.994 "log_enable_timestamps", 00:03:27.994 "log_get_flags", 00:03:27.994 "log_clear_flag", 00:03:27.994 "log_set_flag", 00:03:27.994 "log_get_level", 00:03:27.994 "log_set_level", 00:03:27.994 "log_get_print_level", 00:03:27.994 "log_set_print_level", 00:03:27.994 "framework_enable_cpumask_locks", 00:03:27.994 "framework_disable_cpumask_locks", 00:03:27.994 "framework_wait_init", 00:03:27.994 "framework_start_init", 00:03:27.994 "iobuf_get_stats", 00:03:27.994 "iobuf_set_options", 00:03:27.994 "vmd_rescan", 00:03:27.994 "vmd_remove_device", 00:03:27.994 "vmd_enable", 00:03:27.994 "nvmf_subsystem_get_listeners", 00:03:27.994 "nvmf_subsystem_get_qpairs", 
00:03:27.994 "nvmf_subsystem_get_controllers", 00:03:27.994 "nvmf_get_stats", 00:03:27.995 "nvmf_get_transports", 00:03:27.995 "nvmf_create_transport", 00:03:27.995 "nvmf_get_targets", 00:03:27.995 "nvmf_delete_target", 00:03:27.995 "nvmf_create_target", 00:03:27.995 "nvmf_subsystem_allow_any_host", 00:03:27.995 "nvmf_subsystem_remove_host", 00:03:27.995 "nvmf_subsystem_add_host", 00:03:27.995 "nvmf_subsystem_remove_ns", 00:03:27.995 "nvmf_subsystem_add_ns", 00:03:27.995 "nvmf_subsystem_listener_set_ana_state", 00:03:27.995 "nvmf_discovery_get_referrals", 00:03:27.995 "nvmf_discovery_remove_referral", 00:03:27.995 "nvmf_discovery_add_referral", 00:03:27.995 "nvmf_subsystem_remove_listener", 00:03:27.995 "nvmf_subsystem_add_listener", 00:03:27.995 "nvmf_delete_subsystem", 00:03:27.995 "nvmf_create_subsystem", 00:03:27.995 "nvmf_get_subsystems", 00:03:27.995 "nvmf_set_crdt", 00:03:27.995 "nvmf_set_config", 00:03:27.995 "nvmf_set_max_subsystems", 00:03:27.995 "scsi_get_devices", 00:03:27.995 "iscsi_set_options", 00:03:27.995 "iscsi_get_auth_groups", 00:03:27.995 "iscsi_auth_group_remove_secret", 00:03:27.995 "iscsi_auth_group_add_secret", 00:03:27.995 "iscsi_delete_auth_group", 00:03:27.995 "iscsi_create_auth_group", 00:03:27.995 "iscsi_set_discovery_auth", 00:03:27.995 "iscsi_get_options", 00:03:27.995 "iscsi_target_node_request_logout", 00:03:27.995 "iscsi_target_node_set_redirect", 00:03:27.995 "iscsi_target_node_set_auth", 00:03:27.995 "iscsi_target_node_add_lun", 00:03:27.995 "iscsi_get_connections", 00:03:27.995 "iscsi_portal_group_set_auth", 00:03:27.995 "iscsi_start_portal_group", 00:03:27.995 "iscsi_delete_portal_group", 00:03:27.995 "iscsi_create_portal_group", 00:03:27.995 "iscsi_get_portal_groups", 00:03:27.995 "iscsi_delete_target_node", 00:03:27.995 "iscsi_target_node_remove_pg_ig_maps", 00:03:27.995 "iscsi_target_node_add_pg_ig_maps", 00:03:27.995 "iscsi_create_target_node", 00:03:27.995 "iscsi_get_target_nodes", 00:03:27.995 "iscsi_delete_initiator_group", 00:03:27.995 "iscsi_initiator_group_remove_initiators", 00:03:27.995 "iscsi_initiator_group_add_initiators", 00:03:27.995 "iscsi_create_initiator_group", 00:03:27.995 "iscsi_get_initiator_groups", 00:03:27.995 "iaa_scan_accel_module", 00:03:27.995 "dsa_scan_accel_module", 00:03:27.995 "ioat_scan_accel_module", 00:03:27.995 "accel_error_inject_error", 00:03:27.995 "bdev_aio_delete", 00:03:27.995 "bdev_aio_rescan", 00:03:27.995 "bdev_aio_create", 00:03:27.995 "blobfs_create", 00:03:27.995 "blobfs_detect", 00:03:27.995 "blobfs_set_cache_size", 00:03:27.995 "bdev_zone_block_delete", 00:03:27.995 "bdev_zone_block_create", 00:03:27.995 "bdev_delay_delete", 00:03:27.995 "bdev_delay_create", 00:03:27.995 "bdev_delay_update_latency", 00:03:27.995 "bdev_split_delete", 00:03:27.995 "bdev_split_create", 00:03:27.995 "bdev_error_inject_error", 00:03:27.995 "bdev_error_delete", 00:03:27.995 "bdev_error_create", 00:03:27.995 "bdev_raid_set_options", 00:03:27.995 "bdev_raid_remove_base_bdev", 00:03:27.995 "bdev_raid_add_base_bdev", 00:03:27.995 "bdev_raid_delete", 00:03:27.995 "bdev_raid_create", 00:03:27.995 "bdev_raid_get_bdevs", 00:03:27.995 "bdev_lvol_grow_lvstore", 00:03:27.995 "bdev_lvol_get_lvols", 00:03:27.995 "bdev_lvol_get_lvstores", 00:03:27.995 "bdev_lvol_delete", 00:03:27.995 "bdev_lvol_set_read_only", 00:03:27.995 "bdev_lvol_resize", 00:03:27.995 "bdev_lvol_decouple_parent", 00:03:27.995 "bdev_lvol_inflate", 00:03:27.995 "bdev_lvol_rename", 00:03:27.995 "bdev_lvol_clone_bdev", 00:03:27.995 "bdev_lvol_clone", 00:03:27.995 
"bdev_lvol_snapshot", 00:03:27.995 "bdev_lvol_create", 00:03:27.995 "bdev_lvol_delete_lvstore", 00:03:27.995 "bdev_lvol_rename_lvstore", 00:03:27.995 "bdev_lvol_create_lvstore", 00:03:27.995 "bdev_passthru_delete", 00:03:27.995 "bdev_passthru_create", 00:03:27.995 "bdev_nvme_send_cmd", 00:03:27.995 "bdev_nvme_get_path_iostat", 00:03:27.995 "bdev_nvme_get_mdns_discovery_info", 00:03:27.995 "bdev_nvme_stop_mdns_discovery", 00:03:27.995 "bdev_nvme_start_mdns_discovery", 00:03:27.995 "bdev_nvme_set_multipath_policy", 00:03:27.995 "bdev_nvme_set_preferred_path", 00:03:27.995 "bdev_nvme_get_io_paths", 00:03:27.995 "bdev_nvme_remove_error_injection", 00:03:27.995 "bdev_nvme_add_error_injection", 00:03:27.995 "bdev_nvme_get_discovery_info", 00:03:27.995 "bdev_nvme_stop_discovery", 00:03:27.995 "bdev_nvme_start_discovery", 00:03:27.995 "bdev_nvme_get_controller_health_info", 00:03:27.995 "bdev_nvme_disable_controller", 00:03:27.995 "bdev_nvme_enable_controller", 00:03:27.995 "bdev_nvme_reset_controller", 00:03:27.995 "bdev_nvme_get_transport_statistics", 00:03:27.995 "bdev_nvme_apply_firmware", 00:03:27.995 "bdev_nvme_detach_controller", 00:03:27.995 "bdev_nvme_get_controllers", 00:03:27.995 "bdev_nvme_attach_controller", 00:03:27.995 "bdev_nvme_set_hotplug", 00:03:27.995 "bdev_nvme_set_options", 00:03:27.995 "bdev_null_resize", 00:03:27.995 "bdev_null_delete", 00:03:27.995 "bdev_null_create", 00:03:27.995 "bdev_malloc_delete", 00:03:27.995 "bdev_malloc_create" 00:03:27.995 ] 00:03:27.995 20:41:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:27.995 20:41:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:27.995 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 20:41:19 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:27.995 20:41:19 -- spdkcli/tcp.sh@38 -- # killprocess 45800 00:03:27.995 20:41:19 -- common/autotest_common.sh@926 -- # '[' -z 45800 ']' 00:03:27.995 20:41:19 -- common/autotest_common.sh@930 -- # kill -0 45800 00:03:27.995 20:41:19 -- common/autotest_common.sh@931 -- # uname 00:03:27.995 20:41:19 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:27.995 20:41:19 -- common/autotest_common.sh@934 -- # ps -c -o command 45800 00:03:27.995 20:41:19 -- common/autotest_common.sh@934 -- # tail -1 00:03:27.995 20:41:19 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:27.995 20:41:19 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:27.995 killing process with pid 45800 00:03:27.995 20:41:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45800' 00:03:27.995 20:41:19 -- common/autotest_common.sh@945 -- # kill 45800 00:03:27.995 20:41:19 -- common/autotest_common.sh@950 -- # wait 45800 00:03:28.269 00:03:28.269 real 0m1.556s 00:03:28.269 user 0m2.322s 00:03:28.269 sys 0m0.687s 00:03:28.269 20:41:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.269 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:28.269 ************************************ 00:03:28.269 END TEST spdkcli_tcp 00:03:28.269 ************************************ 00:03:28.269 20:41:19 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:28.269 20:41:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:28.269 20:41:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:28.269 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:28.269 ************************************ 
00:03:28.269 START TEST dpdk_mem_utility 00:03:28.269 ************************************ 00:03:28.269 20:41:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:28.528 * Looking for test storage... 00:03:28.528 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:03:28.528 20:41:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:28.528 20:41:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=45870 00:03:28.528 20:41:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 45870 00:03:28.528 20:41:19 -- common/autotest_common.sh@819 -- # '[' -z 45870 ']' 00:03:28.528 20:41:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.528 20:41:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:28.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.528 20:41:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.529 20:41:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:28.529 20:41:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:28.529 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:28.529 [2024-04-16 20:41:19.499199] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:28.529 [2024-04-16 20:41:19.499562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:29.097 EAL: TSC is not safe to use in SMP mode 00:03:29.097 EAL: TSC is not invariant 00:03:29.097 [2024-04-16 20:41:19.930514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.097 [2024-04-16 20:41:20.023176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:29.097 [2024-04-16 20:41:20.023273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.356 20:41:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:29.356 20:41:20 -- common/autotest_common.sh@852 -- # return 0 00:03:29.356 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:29.356 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:29.356 20:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:29.357 20:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.357 { 00:03:29.357 "filename": "/tmp/spdk_mem_dump.txt" 00:03:29.357 } 00:03:29.357 20:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:29.357 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:29.357 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:03:29.357 1 heaps totaling size 2048.000000 MiB 00:03:29.357 size: 2048.000000 MiB heap id: 0 00:03:29.357 end heaps---------- 00:03:29.357 8 mempools totaling size 592.563660 MiB 00:03:29.357 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:03:29.357 size: 153.489014 MiB name: PDU_data_out_Pool 00:03:29.357 size: 84.500549 MiB name: bdev_io_45870 00:03:29.357 size: 51.008362 MiB name: evtpool_45870 00:03:29.357 size: 50.000549 MiB name: msgpool_45870 
00:03:29.357 size: 21.758911 MiB name: PDU_Pool 00:03:29.357 size: 19.508911 MiB name: SCSI_TASK_Pool 00:03:29.357 size: 0.026123 MiB name: Session_Pool 00:03:29.357 end mempools------- 00:03:29.357 6 memzones totaling size 4.142822 MiB 00:03:29.357 size: 1.000366 MiB name: RG_ring_0_45870 00:03:29.357 size: 1.000366 MiB name: RG_ring_1_45870 00:03:29.357 size: 1.000366 MiB name: RG_ring_4_45870 00:03:29.357 size: 1.000366 MiB name: RG_ring_5_45870 00:03:29.357 size: 0.125366 MiB name: RG_ring_2_45870 00:03:29.357 size: 0.015991 MiB name: RG_ring_3_45870 00:03:29.357 end memzones------- 00:03:29.357 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:03:29.616 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:03:29.616 list of free elements. size: 1254.071899 MiB 00:03:29.616 element at address: 0x1060000000 with size: 1254.001099 MiB 00:03:29.616 element at address: 0x10c8000000 with size: 0.070129 MiB 00:03:29.616 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:03:29.616 list of standard malloc elements. size: 197.217957 MiB 00:03:29.616 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:03:29.616 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:03:29.616 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:03:29.616 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:03:29.616 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:03:29.616 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:03:29.616 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:03:29.616 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:03:29.616 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10d9ad7300 with size: 
0.000183 MiB 00:03:29.616 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:03:29.616 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:03:29.616 list of memzone associated elements. size: 596.710144 MiB 00:03:29.616 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:03:29.616 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:03:29.616 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:03:29.617 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:03:29.617 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:03:29.617 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_45870_0 00:03:29.617 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:03:29.617 associated memzone info: size: 48.000000 MiB name: MP_evtpool_45870_0 00:03:29.617 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:03:29.617 associated memzone info: size: 48.000000 MiB name: MP_msgpool_45870_0 00:03:29.617 element at address: 0x10c683d780 with size: 20.250671 MiB 00:03:29.617 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:03:29.617 element at address: 0x10ae700680 with size: 18.000671 MiB 00:03:29.617 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:03:29.617 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:03:29.617 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_45870 00:03:29.617 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:03:29.617 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_45870 00:03:29.617 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:03:29.617 associated memzone info: size: 1.007996 MiB name: MP_evtpool_45870 00:03:29.617 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:03:29.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:29.617 element at address: 0x10c673b640 with size: 1.008118 MiB 00:03:29.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:29.617 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:03:29.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:29.617 element at address: 0x10af980b40 with size: 1.008118 MiB 00:03:29.617 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:29.617 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:03:29.617 associated memzone info: size: 1.000366 MiB name: RG_ring_0_45870 00:03:29.617 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:03:29.617 associated memzone info: size: 1.000366 MiB name: RG_ring_1_45870 00:03:29.617 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:03:29.617 associated memzone info: size: 1.000366 MiB name: RG_ring_4_45870 00:03:29.617 element at address: 0x10ae600480 with size: 1.000488 MiB 00:03:29.617 associated memzone info: size: 1.000366 MiB name: RG_ring_5_45870 00:03:29.617 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:03:29.617 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_45870 00:03:29.617 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:03:29.617 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:29.617 element at address: 0x10af900940 with size: 0.500488 
MiB 00:03:29.617 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:29.617 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:03:29.617 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:29.617 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:03:29.617 associated memzone info: size: 0.125366 MiB name: RG_ring_2_45870 00:03:29.617 element at address: 0x10c8018a80 with size: 0.031738 MiB 00:03:29.617 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:29.617 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:03:29.617 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:29.617 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:03:29.617 associated memzone info: size: 0.015991 MiB name: RG_ring_3_45870 00:03:29.617 element at address: 0x10c8018080 with size: 0.002441 MiB 00:03:29.617 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:29.617 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:03:29.617 associated memzone info: size: 0.000183 MiB name: MP_msgpool_45870 00:03:29.617 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:03:29.617 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_45870 00:03:29.617 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:03:29.617 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:29.617 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:29.617 20:41:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 45870 00:03:29.617 20:41:20 -- common/autotest_common.sh@926 -- # '[' -z 45870 ']' 00:03:29.617 20:41:20 -- common/autotest_common.sh@930 -- # kill -0 45870 00:03:29.617 20:41:20 -- common/autotest_common.sh@931 -- # uname 00:03:29.617 20:41:20 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:29.617 20:41:20 -- common/autotest_common.sh@934 -- # ps -c -o command 45870 00:03:29.617 20:41:20 -- common/autotest_common.sh@934 -- # tail -1 00:03:29.617 20:41:20 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:29.617 20:41:20 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:29.617 killing process with pid 45870 00:03:29.617 20:41:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45870' 00:03:29.617 20:41:20 -- common/autotest_common.sh@945 -- # kill 45870 00:03:29.617 20:41:20 -- common/autotest_common.sh@950 -- # wait 45870 00:03:29.617 00:03:29.617 real 0m1.388s 00:03:29.617 user 0m1.313s 00:03:29.617 sys 0m0.631s 00:03:29.617 20:41:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.617 20:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.617 ************************************ 00:03:29.617 END TEST dpdk_mem_utility 00:03:29.617 ************************************ 00:03:29.877 20:41:20 -- spdk/autotest.sh@187 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:29.877 20:41:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.877 20:41:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.877 20:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.877 ************************************ 00:03:29.877 START TEST event 00:03:29.877 ************************************ 00:03:29.877 20:41:20 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:29.877 * Looking for test storage... 
00:03:29.877 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:03:29.877 20:41:20 -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:03:29.877 20:41:20 -- bdev/nbd_common.sh@6 -- # set -e 00:03:29.877 20:41:20 -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:29.877 20:41:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:03:29.877 20:41:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.877 20:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.877 ************************************ 00:03:29.877 START TEST event_perf 00:03:29.877 ************************************ 00:03:29.877 20:41:20 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:29.877 Running I/O for 1 seconds...[2024-04-16 20:41:20.942395] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:29.877 [2024-04-16 20:41:20.942746] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:30.446 EAL: TSC is not safe to use in SMP mode 00:03:30.446 EAL: TSC is not invariant 00:03:30.446 [2024-04-16 20:41:21.380268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:30.446 [2024-04-16 20:41:21.475272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:30.446 [2024-04-16 20:41:21.475611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.446 [2024-04-16 20:41:21.475422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:03:30.446 [2024-04-16 20:41:21.475556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:03:31.823 Running I/O for 1 seconds... 00:03:31.823 lcore 0: 2552231 00:03:31.823 lcore 1: 2552231 00:03:31.823 lcore 2: 2552229 00:03:31.823 lcore 3: 2552230 00:03:31.823 done. 00:03:31.823 00:03:31.823 real 0m1.633s 00:03:31.823 user 0m4.191s 00:03:31.823 sys 0m0.442s 00:03:31.823 20:41:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.823 20:41:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.823 ************************************ 00:03:31.823 END TEST event_perf 00:03:31.823 ************************************ 00:03:31.823 20:41:22 -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:31.823 20:41:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:31.823 20:41:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.823 20:41:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.823 ************************************ 00:03:31.823 START TEST event_reactor 00:03:31.823 ************************************ 00:03:31.823 20:41:22 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:31.823 [2024-04-16 20:41:22.621993] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
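The event_perf run above reports, per lcore, how many events one reactor dispatched in the one-second budget, roughly 2.55 million on each of the four cores. As a shape-only illustration of that measurement (Python threads are neither pinned to lcores nor free of the GIL, so the absolute numbers here mean nothing and the loop body only stands in for an event dispatch):

    import threading
    import time

    def worker(counts, idx, stop):
        n = 0
        while not stop.is_set():
            n += 1                      # stand-in for dispatching one event
        counts[idx] = n

    counts = [0] * 4                    # one counter per simulated lcore
    stop = threading.Event()
    threads = [threading.Thread(target=worker, args=(counts, i, stop))
               for i in range(4)]
    for t in threads:
        t.start()
    time.sleep(1.0)                     # the -t 1 second budget from the log
    stop.set()
    for t in threads:
        t.join()
    for i, n in enumerate(counts):
        print(f"lcore {i}: {n}")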
00:03:31.823 [2024-04-16 20:41:22.622316] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:32.082 EAL: TSC is not safe to use in SMP mode 00:03:32.082 EAL: TSC is not invariant 00:03:32.082 [2024-04-16 20:41:23.048787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.082 [2024-04-16 20:41:23.140592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.458 test_start 00:03:33.458 oneshot 00:03:33.458 tick 100 00:03:33.458 tick 100 00:03:33.458 tick 250 00:03:33.458 tick 100 00:03:33.458 tick 100 00:03:33.458 tick 100 00:03:33.458 tick 250 00:03:33.458 tick 500 00:03:33.458 tick 100 00:03:33.458 tick 100 00:03:33.458 tick 250 00:03:33.459 tick 100 00:03:33.459 tick 100 00:03:33.459 test_end 00:03:33.459 00:03:33.459 real 0m1.621s 00:03:33.459 user 0m1.167s 00:03:33.459 sys 0m0.453s 00:03:33.459 20:41:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.459 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:03:33.459 ************************************ 00:03:33.459 END TEST event_reactor 00:03:33.459 ************************************ 00:03:33.459 20:41:24 -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:33.459 20:41:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:33.459 20:41:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.459 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:03:33.459 ************************************ 00:03:33.459 START TEST event_reactor_perf 00:03:33.459 ************************************ 00:03:33.459 20:41:24 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:33.459 [2024-04-16 20:41:24.291570] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
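The tick trace above is consistent with one oneshot event plus three periodic timers with periods of 100, 250 and 500 units sampled over one simulated second of 1000 units; the log does not say what one unit is, but the few lines below reproduce the printed sequence exactly under that reading:

    events = ["test_start", "oneshot"]
    for t in range(1, 1000):            # test_end cuts off before t == 1000
        for period in (100, 250, 500):  # shorter periods fire first on a tie
            if t % period == 0:
                events.append(f"tick {period}")
    events.append("test_end")
    print("\n".join(events))

Note how the three timers coincide at t == 500, which is why "tick 100", "tick 250" and "tick 500" appear back to back once in the middle of the trace.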
00:03:33.459 [2024-04-16 20:41:24.291887] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:33.716 EAL: TSC is not safe to use in SMP mode 00:03:33.716 EAL: TSC is not invariant 00:03:33.716 [2024-04-16 20:41:24.718526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.716 [2024-04-16 20:41:24.812016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.091 test_start 00:03:35.091 test_end 00:03:35.091 Performance: 4660371 events per second 00:03:35.091 00:03:35.091 real 0m1.624s 00:03:35.091 user 0m1.163s 00:03:35.091 sys 0m0.459s 00:03:35.091 20:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.091 20:41:25 -- common/autotest_common.sh@10 -- # set +x 00:03:35.091 ************************************ 00:03:35.091 END TEST event_reactor_perf 00:03:35.091 ************************************ 00:03:35.091 20:41:25 -- event/event.sh@49 -- # uname -s 00:03:35.091 20:41:25 -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:03:35.091 00:03:35.091 real 0m5.204s 00:03:35.091 user 0m6.675s 00:03:35.091 sys 0m1.587s 00:03:35.091 20:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.091 20:41:25 -- common/autotest_common.sh@10 -- # set +x 00:03:35.091 ************************************ 00:03:35.091 END TEST event 00:03:35.091 ************************************ 00:03:35.091 20:41:25 -- spdk/autotest.sh@188 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:03:35.091 20:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.091 20:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.091 20:41:25 -- common/autotest_common.sh@10 -- # set +x 00:03:35.091 ************************************ 00:03:35.091 START TEST thread 00:03:35.091 ************************************ 00:03:35.091 20:41:26 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:03:35.091 * Looking for test storage... 00:03:35.091 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:03:35.091 20:41:26 -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:35.091 20:41:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:03:35.091 20:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.091 20:41:26 -- common/autotest_common.sh@10 -- # set +x 00:03:35.091 ************************************ 00:03:35.091 START TEST thread_poller_perf 00:03:35.091 ************************************ 00:03:35.091 20:41:26 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:35.091 [2024-04-16 20:41:26.199643] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:35.091 [2024-04-16 20:41:26.199810] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:35.657 EAL: TSC is not safe to use in SMP mode 00:03:35.657 EAL: TSC is not invariant 00:03:35.657 [2024-04-16 20:41:26.651640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.657 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:03:35.657 [2024-04-16 20:41:26.744215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.033 ====================================== 00:03:37.033 busy:2296428020 (cyc) 00:03:37.033 total_run_count: 7063000 00:03:37.033 tsc_hz: 2294601473 (cyc) 00:03:37.033 ====================================== 00:03:37.033 poller_cost: 325 (cyc), 141 (nsec) 00:03:37.033 00:03:37.033 real 0m1.644s 00:03:37.033 user 0m1.159s 00:03:37.033 sys 0m0.484s 00:03:37.033 20:41:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.033 20:41:27 -- common/autotest_common.sh@10 -- # set +x 00:03:37.033 ************************************ 00:03:37.033 END TEST thread_poller_perf 00:03:37.033 ************************************ 00:03:37.033 20:41:27 -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:37.033 20:41:27 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:03:37.033 20:41:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.033 20:41:27 -- common/autotest_common.sh@10 -- # set +x 00:03:37.033 ************************************ 00:03:37.033 START TEST thread_poller_perf 00:03:37.034 ************************************ 00:03:37.034 20:41:27 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:37.034 [2024-04-16 20:41:27.899514] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:37.034 [2024-04-16 20:41:27.899886] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:37.292 EAL: TSC is not safe to use in SMP mode 00:03:37.292 EAL: TSC is not invariant 00:03:37.292 [2024-04-16 20:41:28.337866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.551 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:03:37.551 [2024-04-16 20:41:28.429257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.489 ====================================== 00:03:38.489 busy:2296027108 (cyc) 00:03:38.489 total_run_count: 99950000 00:03:38.489 tsc_hz: 2294601473 (cyc) 00:03:38.489 ====================================== 00:03:38.489 poller_cost: 22 (cyc), 9 (nsec) 00:03:38.489 00:03:38.489 real 0m1.635s 00:03:38.489 user 0m1.157s 00:03:38.489 sys 0m0.476s 00:03:38.489 20:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.489 20:41:29 -- common/autotest_common.sh@10 -- # set +x 00:03:38.489 ************************************ 00:03:38.489 END TEST thread_poller_perf 00:03:38.489 ************************************ 00:03:38.489 20:41:29 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:03:38.489 20:41:29 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:03:38.490 20:41:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.490 20:41:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.490 20:41:29 -- common/autotest_common.sh@10 -- # set +x 00:03:38.490 ************************************ 00:03:38.490 START TEST thread_spdk_lock 00:03:38.490 ************************************ 00:03:38.490 20:41:29 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:03:38.490 [2024-04-16 20:41:29.584473] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
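The two poller_cost lines above are just busy cycles divided by total_run_count, with the per-call cycle count converted to nanoseconds via the reported tsc_hz. Whether the tool truncates or rounds is not shown in the log, but integer truncation reproduces both printed results:

    tsc_hz = 2294601473                         # cycles per second, from the log
    for busy, runs in ((2296428020, 7063000),   # -l 1 run: 1 us poller period
                       (2296027108, 99950000)): # -l 0 run: busy polling
        cyc = busy // runs                      # cycles per poller invocation
        nsec = int(cyc * 1e9 / tsc_hz)          # cycles -> nanoseconds
        print(f"poller_cost: {cyc} (cyc), {nsec} (nsec)")

This prints "poller_cost: 325 (cyc), 141 (nsec)" and "poller_cost: 22 (cyc), 9 (nsec)", matching the two runs, and also explains the run-count gap: with a 0-microsecond period each poll costs only 22 cycles, so the same one-second budget fits about fourteen times as many invocations.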
00:03:38.490 [2024-04-16 20:41:29.584786] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:39.058 EAL: TSC is not safe to use in SMP mode 00:03:39.058 EAL: TSC is not invariant 00:03:39.058 [2024-04-16 20:41:30.017683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:39.058 [2024-04-16 20:41:30.099774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.058 [2024-04-16 20:41:30.099772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:39.627 [2024-04-16 20:41:30.538961] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:39.627 [2024-04-16 20:41:30.539020] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:39.627 [2024-04-16 20:41:30.539029] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x30ee20 00:03:39.627 [2024-04-16 20:41:30.539401] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:39.627 [2024-04-16 20:41:30.539501] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:39.627 [2024-04-16 20:41:30.539510] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:03:39.627 Starting test contend 00:03:39.627 Worker Delay Wait us Hold us Total us 00:03:39.627 0 3 261088 162609 423698 00:03:39.627 1 5 162859 263602 426461 00:03:39.627 PASS test contend 00:03:39.627 Starting test hold_by_poller 00:03:39.627 PASS test hold_by_poller 00:03:39.627 Starting test hold_by_message 00:03:39.627 PASS test hold_by_message 00:03:39.627 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:03:39.627 100014 assertions passed 00:03:39.627 0 assertions failed 00:03:39.627 00:03:39.627 real 0m1.057s 00:03:39.627 user 0m1.014s 00:03:39.627 sys 0m0.480s 00:03:39.627 20:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.627 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.627 ************************************ 00:03:39.627 END TEST thread_spdk_lock 00:03:39.627 ************************************ 00:03:39.627 00:03:39.627 real 0m4.669s 00:03:39.627 user 0m3.523s 00:03:39.627 sys 0m1.626s 00:03:39.627 20:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.627 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.627 ************************************ 00:03:39.627 END TEST thread 00:03:39.627 ************************************ 00:03:39.627 20:41:30 -- spdk/autotest.sh@189 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:03:39.627 20:41:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.627 20:41:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.627 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.627 ************************************ 00:03:39.627 START 
TEST accel 00:03:39.627 ************************************ 00:03:39.627 20:41:30 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:03:39.887 * Looking for test storage... 00:03:39.887 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:03:39.887 20:41:30 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:03:39.887 20:41:30 -- accel/accel.sh@74 -- # get_expected_opcs 00:03:39.887 20:41:30 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:39.887 20:41:30 -- accel/accel.sh@59 -- # spdk_tgt_pid=46123 00:03:39.887 20:41:30 -- accel/accel.sh@60 -- # waitforlisten 46123 00:03:39.887 20:41:30 -- common/autotest_common.sh@819 -- # '[' -z 46123 ']' 00:03:39.887 20:41:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.887 20:41:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:39.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.887 20:41:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.887 20:41:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:39.887 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.887 20:41:30 -- accel/accel.sh@58 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.uoqljo 00:03:39.887 [2024-04-16 20:41:30.909010] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:39.887 [2024-04-16 20:41:30.909376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:40.460 EAL: TSC is not safe to use in SMP mode 00:03:40.460 EAL: TSC is not invariant 00:03:40.460 [2024-04-16 20:41:31.344332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.460 [2024-04-16 20:41:31.438169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:40.460 [2024-04-16 20:41:31.438257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.460 20:41:31 -- accel/accel.sh@58 -- # build_accel_config 00:03:40.460 20:41:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:40.460 20:41:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:40.460 20:41:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:40.460 20:41:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:40.460 20:41:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:40.460 20:41:31 -- accel/accel.sh@41 -- # local IFS=, 00:03:40.460 20:41:31 -- accel/accel.sh@42 -- # jq -r . 00:03:40.719 20:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:40.719 20:41:31 -- common/autotest_common.sh@852 -- # return 0 00:03:40.719 20:41:31 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:03:40.719 20:41:31 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:03:40.719 20:41:31 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:03:40.719 20:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:40.719 20:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:40.719 20:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 
20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # IFS== 00:03:40.978 20:41:31 -- accel/accel.sh@64 -- # read -r opc module 00:03:40.978 20:41:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:03:40.978 20:41:31 -- accel/accel.sh@67 -- # killprocess 46123 00:03:40.978 20:41:31 -- common/autotest_common.sh@926 -- # '[' -z 46123 ']' 00:03:40.978 20:41:31 -- common/autotest_common.sh@930 -- # kill -0 46123 00:03:40.978 20:41:31 -- common/autotest_common.sh@931 -- # uname 00:03:40.978 20:41:31 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:40.978 20:41:31 -- common/autotest_common.sh@934 -- # ps -c -o command 46123 00:03:40.978 20:41:31 -- common/autotest_common.sh@934 -- # tail -1 00:03:40.978 20:41:31 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:40.978 20:41:31 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:40.978 killing process with pid 46123 00:03:40.978 20:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46123' 00:03:40.978 20:41:31 -- common/autotest_common.sh@945 -- # kill 46123 00:03:40.978 20:41:31 -- common/autotest_common.sh@950 -- # wait 46123 00:03:40.978 20:41:32 -- accel/accel.sh@68 -- # trap - ERR 00:03:40.978 20:41:32 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:03:40.978 20:41:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:03:40.978 20:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.978 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:40.978 20:41:32 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:03:40.978 20:41:32 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.k78aAv -h 00:03:40.978 20:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.978 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.238 20:41:32 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:03:41.238 20:41:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:41.238 20:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.238 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.238 ************************************ 00:03:41.238 START TEST accel_missing_filename 00:03:41.238 ************************************ 00:03:41.238 20:41:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:03:41.238 20:41:32 -- common/autotest_common.sh@640 -- # local es=0 00:03:41.238 20:41:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:03:41.238 20:41:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:41.238 20:41:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:41.238 20:41:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:41.238 20:41:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:41.238 20:41:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:03:41.238 20:41:32 -- accel/accel.sh@12 -- # 
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rvtnuU -t 1 -w compress 00:03:41.238 [2024-04-16 20:41:32.147519] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:41.238 [2024-04-16 20:41:32.147870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:41.500 EAL: TSC is not safe to use in SMP mode 00:03:41.500 EAL: TSC is not invariant 00:03:41.500 [2024-04-16 20:41:32.575098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.763 [2024-04-16 20:41:32.656695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.763 20:41:32 -- accel/accel.sh@12 -- # build_accel_config 00:03:41.763 20:41:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:41.763 20:41:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:41.763 20:41:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:41.763 20:41:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:41.763 20:41:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:41.763 20:41:32 -- accel/accel.sh@41 -- # local IFS=, 00:03:41.763 20:41:32 -- accel/accel.sh@42 -- # jq -r . 00:03:41.763 [2024-04-16 20:41:32.670869] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:41.763 [2024-04-16 20:41:32.700078] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:03:41.763 A filename is required. 00:03:41.763 20:41:32 -- common/autotest_common.sh@643 -- # es=234 00:03:41.763 20:41:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:41.763 20:41:32 -- common/autotest_common.sh@652 -- # es=106 00:03:41.763 20:41:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:03:41.763 20:41:32 -- common/autotest_common.sh@660 -- # es=1 00:03:41.763 20:41:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:41.763 00:03:41.763 real 0m0.660s 00:03:41.763 user 0m0.185s 00:03:41.763 sys 0m0.476s 00:03:41.763 20:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.763 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 ************************************ 00:03:41.763 END TEST accel_missing_filename 00:03:41.763 ************************************ 00:03:41.763 20:41:32 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:41.763 20:41:32 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:03:41.763 20:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.763 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 ************************************ 00:03:41.763 START TEST accel_compress_verify 00:03:41.763 ************************************ 00:03:41.763 20:41:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:41.763 20:41:32 -- common/autotest_common.sh@640 -- # local es=0 00:03:41.763 20:41:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:41.763 20:41:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:41.763 20:41:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:41.763 20:41:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:41.763 20:41:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:41.763 20:41:32 -- 
common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:41.763 20:41:32 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zbtCwI -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:03:41.763 [2024-04-16 20:41:32.857459] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:41.763 [2024-04-16 20:41:32.857806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:42.332 EAL: TSC is not safe to use in SMP mode 00:03:42.332 EAL: TSC is not invariant 00:03:42.332 [2024-04-16 20:41:33.294829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.332 [2024-04-16 20:41:33.385594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.332 20:41:33 -- accel/accel.sh@12 -- # build_accel_config 00:03:42.332 20:41:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:42.332 20:41:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:42.332 20:41:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:42.332 20:41:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:42.332 20:41:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:42.332 20:41:33 -- accel/accel.sh@41 -- # local IFS=, 00:03:42.332 20:41:33 -- accel/accel.sh@42 -- # jq -r . 00:03:42.332 [2024-04-16 20:41:33.395001] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:42.332 [2024-04-16 20:41:33.423225] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:03:42.592 00:03:42.592 Compression does not support the verify option, aborting. 00:03:42.592 20:41:33 -- common/autotest_common.sh@643 -- # es=211 00:03:42.592 20:41:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:42.592 20:41:33 -- common/autotest_common.sh@652 -- # es=83 00:03:42.592 20:41:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:03:42.592 20:41:33 -- common/autotest_common.sh@660 -- # es=1 00:03:42.592 20:41:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:42.592 00:03:42.592 real 0m0.672s 00:03:42.592 user 0m0.188s 00:03:42.592 sys 0m0.484s 00:03:42.592 20:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.592 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.592 ************************************ 00:03:42.592 END TEST accel_compress_verify 00:03:42.592 ************************************ 00:03:42.592 20:41:33 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:03:42.592 20:41:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:42.592 20:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.592 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.592 ************************************ 00:03:42.592 START TEST accel_wrong_workload 00:03:42.592 ************************************ 00:03:42.592 20:41:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:03:42.592 20:41:33 -- common/autotest_common.sh@640 -- # local es=0 00:03:42.592 20:41:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:03:42.592 20:41:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:42.592 20:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:42.592 20:41:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:42.592 20:41:33 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:42.592 20:41:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:03:42.592 20:41:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.JFF7wB -t 1 -w foobar 00:03:42.592 Unsupported workload type: foobar 00:03:42.592 [2024-04-16 20:41:33.584845] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:03:42.592 accel_perf options: 00:03:42.592 [-h help message] 00:03:42.592 [-q queue depth per core] 00:03:42.592 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:03:42.592 [-T number of threads per core 00:03:42.592 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:03:42.592 [-t time in seconds] 00:03:42.592 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:03:42.592 [ dif_verify, , dif_generate, dif_generate_copy 00:03:42.592 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:03:42.592 [-l for compress/decompress workloads, name of uncompressed input file 00:03:42.592 [-S for crc32c workload, use this seed value (default 0) 00:03:42.592 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:03:42.592 [-f for fill workload, use this BYTE value (default 255) 00:03:42.592 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:03:42.592 [-y verify result if this switch is on] 00:03:42.592 [-a tasks to allocate per core (default: same value as -q)] 00:03:42.592 Can be used to spread operations across a wider range of memory. 
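The failure above is the point of the accel_wrong_workload test: accel_perf rejects -w foobar during option parsing because the workload must be one of the names listed in the help text it just printed. For contrast, a minimal valid invocation would look like the following (a sketch reusing the in-tree binary path from this job; -t is the run time in seconds, -S the CRC seed, -y enables verification):

    # 1-second software crc32c run with verification, as in the
    # accel_crc32c test that appears later in this log
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y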
00:03:42.592 20:41:33 -- common/autotest_common.sh@643 -- # es=1 00:03:42.592 20:41:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:42.592 20:41:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:03:42.592 20:41:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:42.592 00:03:42.592 real 0m0.015s 00:03:42.592 user 0m0.008s 00:03:42.592 sys 0m0.009s 00:03:42.592 20:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.592 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.592 ************************************ 00:03:42.592 END TEST accel_wrong_workload 00:03:42.592 ************************************ 00:03:42.592 20:41:33 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:03:42.592 20:41:33 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:03:42.592 20:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.592 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.592 ************************************ 00:03:42.592 START TEST accel_negative_buffers 00:03:42.592 ************************************ 00:03:42.592 20:41:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:03:42.592 20:41:33 -- common/autotest_common.sh@640 -- # local es=0 00:03:42.592 20:41:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:03:42.592 20:41:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:03:42.592 20:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:42.592 20:41:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:03:42.592 20:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:03:42.592 20:41:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:03:42.592 20:41:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.SfZw1g -t 1 -w xor -y -x -1 00:03:42.592 -x option must be non-negative. 00:03:42.592 [2024-04-16 20:41:33.656403] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:03:42.592 accel_perf options: 00:03:42.592 [-h help message] 00:03:42.592 [-q queue depth per core] 00:03:42.592 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:03:42.592 [-T number of threads per core 00:03:42.592 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:03:42.592 [-t time in seconds] 00:03:42.592 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:03:42.592 [ dif_verify, , dif_generate, dif_generate_copy 00:03:42.592 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:03:42.592 [-l for compress/decompress workloads, name of uncompressed input file 00:03:42.592 [-S for crc32c workload, use this seed value (default 0) 00:03:42.592 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:03:42.592 [-f for fill workload, use this BYTE value (default 255) 00:03:42.592 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:03:42.592 [-y verify result if this switch is on] 00:03:42.592 [-a tasks to allocate per core (default: same value as -q)] 00:03:42.592 Can be used to spread operations across a wider range of memory. 
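Likewise, accel_negative_buffers exercises the negative path of -x: the help text gives a minimum of 2 source buffers for xor, so -x -1 is rejected at option parsing before any work is submitted. A passing variant would be (sketch, same binary-path assumption as above):

    # xor across two source buffers, with verification
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2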
00:03:42.592 20:41:33 -- common/autotest_common.sh@643 -- # es=1 00:03:42.592 20:41:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:03:42.592 20:41:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:03:42.592 20:41:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:03:42.592 00:03:42.592 real 0m0.015s 00:03:42.592 user 0m0.012s 00:03:42.592 sys 0m0.001s 00:03:42.592 20:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.592 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.592 ************************************ 00:03:42.592 END TEST accel_negative_buffers 00:03:42.592 ************************************ 00:03:42.592 20:41:33 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:03:42.593 20:41:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:42.593 20:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.593 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.852 ************************************ 00:03:42.852 START TEST accel_crc32c 00:03:42.852 ************************************ 00:03:42.852 20:41:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:03:42.852 20:41:33 -- accel/accel.sh@16 -- # local accel_opc 00:03:42.852 20:41:33 -- accel/accel.sh@17 -- # local accel_module 00:03:42.852 20:41:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:03:42.852 20:41:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Pi8MdE -t 1 -w crc32c -S 32 -y 00:03:42.852 [2024-04-16 20:41:33.730307] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:42.852 [2024-04-16 20:41:33.730672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:43.111 EAL: TSC is not safe to use in SMP mode 00:03:43.111 EAL: TSC is not invariant 00:03:43.112 [2024-04-16 20:41:34.191543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.370 [2024-04-16 20:41:34.284064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.370 20:41:34 -- accel/accel.sh@12 -- # build_accel_config 00:03:43.370 20:41:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:43.370 20:41:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:43.370 20:41:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:43.370 20:41:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:43.370 20:41:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:43.370 20:41:34 -- accel/accel.sh@41 -- # local IFS=, 00:03:43.370 20:41:34 -- accel/accel.sh@42 -- # jq -r . 00:03:44.309 20:41:35 -- accel/accel.sh@18 -- # out=' 00:03:44.309 SPDK Configuration: 00:03:44.309 Core mask: 0x1 00:03:44.309 00:03:44.309 Accel Perf Configuration: 00:03:44.309 Workload Type: crc32c 00:03:44.309 CRC-32C seed: 32 00:03:44.309 Transfer size: 4096 bytes 00:03:44.309 Vector count 1 00:03:44.309 Module: software 00:03:44.309 Queue depth: 32 00:03:44.309 Allocate depth: 32 00:03:44.309 # threads/core: 1 00:03:44.309 Run time: 1 seconds 00:03:44.309 Verify: Yes 00:03:44.309 00:03:44.309 Running for 1 seconds... 
00:03:44.309 00:03:44.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:44.309 ------------------------------------------------------------------------------------ 00:03:44.309 0,0 2581888/s 10085 MiB/s 0 0 00:03:44.309 ==================================================================================== 00:03:44.309 Total 2581888/s 10085 MiB/s 0 0' 00:03:44.309 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:44.309 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:44.309 20:41:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:03:44.309 20:41:35 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GubQ4u -t 1 -w crc32c -S 32 -y 00:03:44.568 [2024-04-16 20:41:35.432801] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:44.568 [2024-04-16 20:41:35.433279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:44.827 EAL: TSC is not safe to use in SMP mode 00:03:44.827 EAL: TSC is not invariant 00:03:44.827 [2024-04-16 20:41:35.867709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.085 [2024-04-16 20:41:35.959362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.085 20:41:35 -- accel/accel.sh@12 -- # build_accel_config 00:03:45.085 20:41:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:45.085 20:41:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:45.085 20:41:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:45.085 20:41:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:45.085 20:41:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:45.085 20:41:35 -- accel/accel.sh@41 -- # local IFS=, 00:03:45.085 20:41:35 -- accel/accel.sh@42 -- # jq -r . 
00:03:45.085 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.085 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.085 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.085 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.085 20:41:35 -- accel/accel.sh@21 -- # val=0x1 00:03:45.085 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.085 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.085 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.085 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.085 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.085 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=crc32c 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=32 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=software 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@23 -- # accel_module=software 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=32 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=32 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=1 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val=Yes 00:03:45.086 20:41:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:45.086 20:41:35 -- accel/accel.sh@21 -- # val= 00:03:45.086 20:41:35 -- accel/accel.sh@22 -- # case "$var" in 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # IFS=: 00:03:45.086 20:41:35 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@21 -- # val= 00:03:46.024 20:41:37 -- accel/accel.sh@22 -- # case "$var" in 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # IFS=: 00:03:46.024 20:41:37 -- accel/accel.sh@20 -- # read -r var val 00:03:46.024 20:41:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:46.024 20:41:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:03:46.024 20:41:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:46.024 00:03:46.024 real 0m3.375s 00:03:46.024 user 0m2.398s 00:03:46.024 sys 0m0.992s 00:03:46.024 20:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.024 20:41:37 -- common/autotest_common.sh@10 -- # set +x 00:03:46.024 ************************************ 00:03:46.024 END TEST accel_crc32c 00:03:46.024 ************************************ 00:03:46.024 20:41:37 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:03:46.024 20:41:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:46.024 20:41:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.024 20:41:37 -- common/autotest_common.sh@10 -- # set +x 00:03:46.024 ************************************ 00:03:46.024 START TEST accel_crc32c_C2 00:03:46.024 ************************************ 00:03:46.024 20:41:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:03:46.024 20:41:37 -- accel/accel.sh@16 -- # local accel_opc 00:03:46.024 20:41:37 -- accel/accel.sh@17 -- # local accel_module 00:03:46.282 20:41:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:03:46.282 20:41:37 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mR0fMK -t 1 -w crc32c -y -C 2 00:03:46.282 
[2024-04-16 20:41:37.157144] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... [2024-04-16 20:41:37.157513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:46.540 EAL: TSC is not safe to use in SMP mode 00:03:46.540 EAL: TSC is not invariant 00:03:46.540 [2024-04-16 20:41:37.595430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.798 [2024-04-16 20:41:37.686629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.798 20:41:37 -- accel/accel.sh@12 -- # build_accel_config 00:03:46.798 20:41:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:46.798 20:41:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:46.798 20:41:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:46.798 20:41:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:46.798 20:41:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:46.798 20:41:37 -- accel/accel.sh@41 -- # local IFS=, 00:03:46.798 20:41:37 -- accel/accel.sh@42 -- # jq -r . 00:03:47.734 20:41:38 -- accel/accel.sh@18 -- # out=' 00:03:47.734 SPDK Configuration: 00:03:47.734 Core mask: 0x1 00:03:47.734 00:03:47.734 Accel Perf Configuration: 00:03:47.734 Workload Type: crc32c 00:03:47.734 CRC-32C seed: 0 00:03:47.734 Transfer size: 4096 bytes 00:03:47.734 Vector count 2 00:03:47.734 Module: software 00:03:47.734 Queue depth: 32 00:03:47.734 Allocate depth: 32 00:03:47.734 # threads/core: 1 00:03:47.734 Run time: 1 seconds 00:03:47.734 Verify: Yes 00:03:47.734 00:03:47.734 Running for 1 seconds... 00:03:47.734 00:03:47.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:47.734 ------------------------------------------------------------------------------------ 00:03:47.734 0,0 1400480/s 10941 MiB/s 0 0 00:03:47.734 ==================================================================================== 00:03:47.734 Total 1400480/s 10941 MiB/s 0 0' 00:03:47.734 20:41:38 -- accel/accel.sh@20 -- # IFS=: 00:03:47.734 20:41:38 -- accel/accel.sh@20 -- # read -r var val 00:03:47.734 20:41:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:03:47.734 20:41:38 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uMIO37 -t 1 -w crc32c -y -C 2 00:03:47.734 [2024-04-16 20:41:38.827441] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:47.734 [2024-04-16 20:41:38.827575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:48.319 EAL: TSC is not safe to use in SMP mode 00:03:48.319 EAL: TSC is not invariant 00:03:48.319 [2024-04-16 20:41:39.247867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.319 [2024-04-16 20:41:39.329132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.319 20:41:39 -- accel/accel.sh@12 -- # build_accel_config 00:03:48.319 20:41:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:48.319 20:41:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:48.319 20:41:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:48.319 20:41:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:48.319 20:41:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:48.319 20:41:39 -- accel/accel.sh@41 -- # local IFS=, 00:03:48.319 20:41:39 -- accel/accel.sh@42 -- # jq -r .
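Bandwidth in these reports scales with both the transfer size and the vector count: with -C 2, each completed operation covers 2 x 4096 bytes, so the per-core and Total rows of this single-core run both come out to the same figure. Re-deriving it in shell from the numbers above:

    ops=1400480   # transfers/s from the table
    vecs=2        # -C 2
    xfer=4096     # transfer size in bytes
    echo "$((ops * vecs * xfer / 1048576)) MiB/s"   # -> 10941 MiB/s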
00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=0x1 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=crc32c 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=0 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=software 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@23 -- # accel_module=software 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=32 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=32 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=1 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val=Yes 00:03:48.319 20:41:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:48.319 20:41:39 -- accel/accel.sh@21 -- # val= 00:03:48.319 20:41:39 -- accel/accel.sh@22 -- # case "$var" in 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # IFS=: 00:03:48.319 20:41:39 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@21 -- # val= 00:03:49.698 20:41:40 -- accel/accel.sh@22 -- # case "$var" in 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # IFS=: 00:03:49.698 20:41:40 -- accel/accel.sh@20 -- # read -r var val 00:03:49.698 20:41:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:49.698 20:41:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:03:49.698 20:41:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:49.698 00:03:49.698 real 0m3.327s 00:03:49.698 user 0m2.399s 00:03:49.698 sys 0m0.942s 00:03:49.698 20:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.698 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:03:49.698 ************************************ 00:03:49.698 END TEST accel_crc32c_C2 00:03:49.698 ************************************ 00:03:49.698 20:41:40 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:03:49.698 20:41:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:49.698 20:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:49.698 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:03:49.698 ************************************ 00:03:49.698 START TEST accel_copy 00:03:49.698 ************************************ 00:03:49.698 20:41:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:03:49.698 20:41:40 -- accel/accel.sh@16 -- # local accel_opc 00:03:49.698 20:41:40 -- accel/accel.sh@17 -- # local accel_module 00:03:49.698 20:41:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:03:49.698 20:41:40 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AKYJPY -t 1 -w copy -y 00:03:49.698 [2024-04-16 20:41:40.534355] Starting 
SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:49.698 [2024-04-16 20:41:40.534673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:49.958 EAL: TSC is not safe to use in SMP mode 00:03:49.958 EAL: TSC is not invariant 00:03:49.958 [2024-04-16 20:41:40.954190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.958 [2024-04-16 20:41:41.036064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.958 20:41:41 -- accel/accel.sh@12 -- # build_accel_config 00:03:49.958 20:41:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:49.958 20:41:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:49.958 20:41:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:49.958 20:41:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:49.958 20:41:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:49.958 20:41:41 -- accel/accel.sh@41 -- # local IFS=, 00:03:49.958 20:41:41 -- accel/accel.sh@42 -- # jq -r . 00:03:51.337 20:41:42 -- accel/accel.sh@18 -- # out=' 00:03:51.337 SPDK Configuration: 00:03:51.337 Core mask: 0x1 00:03:51.337 00:03:51.337 Accel Perf Configuration: 00:03:51.337 Workload Type: copy 00:03:51.337 Transfer size: 4096 bytes 00:03:51.337 Vector count 1 00:03:51.337 Module: software 00:03:51.337 Queue depth: 32 00:03:51.337 Allocate depth: 32 00:03:51.337 # threads/core: 1 00:03:51.337 Run time: 1 seconds 00:03:51.337 Verify: Yes 00:03:51.337 00:03:51.337 Running for 1 seconds... 00:03:51.337 00:03:51.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:51.337 ------------------------------------------------------------------------------------ 00:03:51.337 0,0 2590112/s 10117 MiB/s 0 0 00:03:51.337 ==================================================================================== 00:03:51.337 Total 2590112/s 10117 MiB/s 0 0' 00:03:51.337 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.337 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.337 20:41:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:03:51.337 20:41:42 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.TkkXd7 -t 1 -w copy -y 00:03:51.337 [2024-04-16 20:41:42.181216] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:51.337 [2024-04-16 20:41:42.181536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:51.597 EAL: TSC is not safe to use in SMP mode 00:03:51.597 EAL: TSC is not invariant 00:03:51.597 [2024-04-16 20:41:42.610633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.597 [2024-04-16 20:41:42.699936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.597 20:41:42 -- accel/accel.sh@12 -- # build_accel_config 00:03:51.597 20:41:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:51.597 20:41:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:51.597 20:41:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:51.597 20:41:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:51.597 20:41:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:51.597 20:41:42 -- accel/accel.sh@41 -- # local IFS=, 00:03:51.597 20:41:42 -- accel/accel.sh@42 -- # jq -r . 
00:03:51.597 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.597 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.597 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.597 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.597 20:41:42 -- accel/accel.sh@21 -- # val=0x1 00:03:51.597 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.597 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.597 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.597 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.597 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.597 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val=copy 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val=software 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@23 -- # accel_module=software 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val=32 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val=32 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val=1 00:03:51.857 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.857 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.857 20:41:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:51.858 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.858 20:41:42 -- accel/accel.sh@21 -- # val=Yes 00:03:51.858 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.858 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.858 20:41:42 -- accel/accel.sh@22 -- 
# case "$var" in 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:51.858 20:41:42 -- accel/accel.sh@21 -- # val= 00:03:51.858 20:41:42 -- accel/accel.sh@22 -- # case "$var" in 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # IFS=: 00:03:51.858 20:41:42 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@21 -- # val= 00:03:52.796 20:41:43 -- accel/accel.sh@22 -- # case "$var" in 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # IFS=: 00:03:52.796 20:41:43 -- accel/accel.sh@20 -- # read -r var val 00:03:52.796 20:41:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:52.796 20:41:43 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:03:52.796 20:41:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:52.796 00:03:52.796 real 0m3.316s 00:03:52.796 user 0m2.410s 00:03:52.796 sys 0m0.920s 00:03:52.796 20:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.796 20:41:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.796 ************************************ 00:03:52.796 END TEST accel_copy 00:03:52.796 ************************************ 00:03:52.796 20:41:43 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:52.796 20:41:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:03:52.796 20:41:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.796 20:41:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.796 ************************************ 00:03:52.796 START TEST accel_fill 00:03:52.796 ************************************ 00:03:52.796 20:41:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:52.796 20:41:43 -- accel/accel.sh@16 -- # local accel_opc 00:03:52.796 20:41:43 -- accel/accel.sh@17 -- # local accel_module 00:03:52.796 20:41:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:52.796 20:41:43 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2BeSrR -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:52.796 [2024-04-16 20:41:43.903381] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
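The long runs of val= traces that bracket each result block are accel.sh re-reading the report it captured into $out: it splits each line on ':', and a case on the field name extracts the workload type and module (the accel_opc=copy and accel_module=software assignments visible above) so the test can assert the run used what was requested. A rough sketch of that loop (variable names follow the traces; the exact patterns in accel.sh may differ):

    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val# } ;;     # e.g. copy
            *'Module'*)        accel_module=${val# } ;;  # e.g. software
        esac
    done <<< "$out"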
00:03:52.796 [2024-04-16 20:41:43.903724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:53.364 EAL: TSC is not safe to use in SMP mode 00:03:53.364 EAL: TSC is not invariant 00:03:53.364 [2024-04-16 20:41:44.325430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.364 [2024-04-16 20:41:44.404088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.364 20:41:44 -- accel/accel.sh@12 -- # build_accel_config 00:03:53.364 20:41:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:53.364 20:41:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:53.364 20:41:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:53.364 20:41:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:53.364 20:41:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:53.364 20:41:44 -- accel/accel.sh@41 -- # local IFS=, 00:03:53.364 20:41:44 -- accel/accel.sh@42 -- # jq -r . 00:03:54.742 20:41:45 -- accel/accel.sh@18 -- # out=' 00:03:54.742 SPDK Configuration: 00:03:54.742 Core mask: 0x1 00:03:54.742 00:03:54.742 Accel Perf Configuration: 00:03:54.742 Workload Type: fill 00:03:54.742 Fill pattern: 0x80 00:03:54.742 Transfer size: 4096 bytes 00:03:54.742 Vector count 1 00:03:54.742 Module: software 00:03:54.742 Queue depth: 64 00:03:54.742 Allocate depth: 64 00:03:54.742 # threads/core: 1 00:03:54.742 Run time: 1 seconds 00:03:54.742 Verify: Yes 00:03:54.742 00:03:54.742 Running for 1 seconds... 00:03:54.742 00:03:54.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:54.742 ------------------------------------------------------------------------------------ 00:03:54.742 0,0 3046528/s 11900 MiB/s 0 0 00:03:54.742 ==================================================================================== 00:03:54.742 Total 3046528/s 11900 MiB/s 0 0' 00:03:54.742 20:41:45 -- accel/accel.sh@20 -- # IFS=: 00:03:54.742 20:41:45 -- accel/accel.sh@20 -- # read -r var val 00:03:54.742 20:41:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:54.742 20:41:45 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.b4TISm -t 1 -w fill -f 128 -q 64 -a 64 -y 00:03:54.742 [2024-04-16 20:41:45.551191] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:54.742 [2024-04-16 20:41:45.551549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:55.002 EAL: TSC is not safe to use in SMP mode 00:03:55.002 EAL: TSC is not invariant 00:03:55.002 [2024-04-16 20:41:45.987630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.002 [2024-04-16 20:41:46.077655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.002 20:41:46 -- accel/accel.sh@12 -- # build_accel_config 00:03:55.002 20:41:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:55.002 20:41:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:55.002 20:41:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:55.002 20:41:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:55.002 20:41:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:55.002 20:41:46 -- accel/accel.sh@41 -- # local IFS=, 00:03:55.002 20:41:46 -- accel/accel.sh@42 -- # jq -r . 
00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=0x1 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=fill 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@24 -- # accel_opc=fill 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=0x80 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=software 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@23 -- # accel_module=software 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=64 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=64 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=1 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val=Yes 00:03:55.002 20:41:46 -- accel/accel.sh@22 
-- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:55.002 20:41:46 -- accel/accel.sh@21 -- # val= 00:03:55.002 20:41:46 -- accel/accel.sh@22 -- # case "$var" in 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # IFS=: 00:03:55.002 20:41:46 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@21 -- # val= 00:03:56.384 20:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # IFS=: 00:03:56.384 20:41:47 -- accel/accel.sh@20 -- # read -r var val 00:03:56.384 20:41:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:56.384 20:41:47 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:03:56.384 20:41:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:56.384 00:03:56.384 real 0m3.325s 00:03:56.384 user 0m2.387s 00:03:56.384 sys 0m0.952s 00:03:56.384 20:41:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.384 20:41:47 -- common/autotest_common.sh@10 -- # set +x 00:03:56.384 ************************************ 00:03:56.384 END TEST accel_fill 00:03:56.384 ************************************ 00:03:56.384 20:41:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:03:56.384 20:41:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:03:56.384 20:41:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.384 20:41:47 -- common/autotest_common.sh@10 -- # set +x 00:03:56.384 ************************************ 00:03:56.384 START TEST accel_copy_crc32c 00:03:56.384 ************************************ 00:03:56.384 20:41:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:03:56.384 20:41:47 -- accel/accel.sh@16 -- # local accel_opc 00:03:56.384 20:41:47 -- accel/accel.sh@17 -- # local accel_module 00:03:56.384 20:41:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:03:56.384 20:41:47 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kKiAIf -t 1 -w copy_crc32c -y 00:03:56.384 [2024-04-16 
20:41:47.276825] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:56.384 [2024-04-16 20:41:47.277171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:56.643 EAL: TSC is not safe to use in SMP mode 00:03:56.643 EAL: TSC is not invariant 00:03:56.643 [2024-04-16 20:41:47.701736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.903 [2024-04-16 20:41:47.791002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.903 20:41:47 -- accel/accel.sh@12 -- # build_accel_config 00:03:56.903 20:41:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:56.903 20:41:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:56.903 20:41:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:56.903 20:41:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:56.903 20:41:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:56.903 20:41:47 -- accel/accel.sh@41 -- # local IFS=, 00:03:56.903 20:41:47 -- accel/accel.sh@42 -- # jq -r . 00:03:57.841 20:41:48 -- accel/accel.sh@18 -- # out=' 00:03:57.841 SPDK Configuration: 00:03:57.841 Core mask: 0x1 00:03:57.841 00:03:57.841 Accel Perf Configuration: 00:03:57.841 Workload Type: copy_crc32c 00:03:57.841 CRC-32C seed: 0 00:03:57.841 Vector size: 4096 bytes 00:03:57.841 Transfer size: 4096 bytes 00:03:57.841 Vector count 1 00:03:57.841 Module: software 00:03:57.841 Queue depth: 32 00:03:57.841 Allocate depth: 32 00:03:57.841 # threads/core: 1 00:03:57.841 Run time: 1 seconds 00:03:57.841 Verify: Yes 00:03:57.841 00:03:57.841 Running for 1 seconds... 00:03:57.841 00:03:57.841 Core,Thread Transfers Bandwidth Failed Miscompares 00:03:57.841 ------------------------------------------------------------------------------------ 00:03:57.841 0,0 1425760/s 5569 MiB/s 0 0 00:03:57.841 ==================================================================================== 00:03:57.841 Total 1425760/s 5569 MiB/s 0 0' 00:03:57.841 20:41:48 -- accel/accel.sh@20 -- # IFS=: 00:03:57.841 20:41:48 -- accel/accel.sh@20 -- # read -r var val 00:03:57.841 20:41:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:03:57.841 20:41:48 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FHfzpx -t 1 -w copy_crc32c -y 00:03:57.841 [2024-04-16 20:41:48.937702] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:03:57.841 [2024-04-16 20:41:48.938052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:58.410 EAL: TSC is not safe to use in SMP mode 00:03:58.410 EAL: TSC is not invariant 00:03:58.410 [2024-04-16 20:41:49.368231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.410 [2024-04-16 20:41:49.457570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.411 20:41:49 -- accel/accel.sh@12 -- # build_accel_config 00:03:58.411 20:41:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:03:58.411 20:41:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:03:58.411 20:41:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:03:58.411 20:41:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:03:58.411 20:41:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:03:58.411 20:41:49 -- accel/accel.sh@41 -- # local IFS=, 00:03:58.411 20:41:49 -- accel/accel.sh@42 -- # jq -r . 
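(Aside: a quick consistency check on the Bandwidth column above; accel_perf's MiB/s figure is just transfers per second multiplied by the reported transfer size. In shell arithmetic:)
  # 1425760 transfers/s x 4096 bytes per transfer, converted to MiB/s:
  echo $(( 1425760 * 4096 / 1024 / 1024 ))   # prints 5569, matching the table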
00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=0x1 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=copy_crc32c 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=0 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=software 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@23 -- # accel_module=software 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=32 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=32 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=1 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:03:58.411 20:41:49 
-- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val=Yes 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:58.411 20:41:49 -- accel/accel.sh@21 -- # val= 00:03:58.411 20:41:49 -- accel/accel.sh@22 -- # case "$var" in 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # IFS=: 00:03:58.411 20:41:49 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@21 -- # val= 00:03:59.790 20:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # IFS=: 00:03:59.790 20:41:50 -- accel/accel.sh@20 -- # read -r var val 00:03:59.790 20:41:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:03:59.790 20:41:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:03:59.790 20:41:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:03:59.790 00:03:59.790 real 0m3.327s 00:03:59.790 user 0m2.374s 00:03:59.790 sys 0m0.954s 00:03:59.790 20:41:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.790 20:41:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.790 ************************************ 00:03:59.790 END TEST accel_copy_crc32c 00:03:59.790 ************************************ 00:03:59.790 20:41:50 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:03:59.790 20:41:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:03:59.790 20:41:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.790 20:41:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.790 ************************************ 00:03:59.790 START TEST accel_copy_crc32c_C2 00:03:59.790 ************************************ 00:03:59.790 20:41:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:03:59.790 20:41:50 -- accel/accel.sh@16 -- # local accel_opc 00:03:59.790 20:41:50 -- accel/accel.sh@17 -- # 
local accel_module 00:03:59.790 20:41:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:03:59.790 20:41:50 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZfuptJ -t 1 -w copy_crc32c -y -C 2 [2024-04-16 20:41:50.651890] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:00.049 [2024-04-16 20:41:50.652239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:00.049 EAL: TSC is not safe to use in SMP mode 00:04:00.049 EAL: TSC is not invariant 00:04:00.049 [2024-04-16 20:41:51.082805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.309 [2024-04-16 20:41:51.172001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.309 20:41:51 -- accel/accel.sh@12 -- # build_accel_config 00:04:00.309 20:41:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:00.309 20:41:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:00.309 20:41:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:00.309 20:41:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:00.309 20:41:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:00.309 20:41:51 -- accel/accel.sh@41 -- # local IFS=, 00:04:00.309 20:41:51 -- accel/accel.sh@42 -- # jq -r . 00:04:01.247 20:41:52 -- accel/accel.sh@18 -- # out=' 00:04:01.247 SPDK Configuration: 00:04:01.247 Core mask: 0x1 00:04:01.247 00:04:01.247 Accel Perf Configuration: 00:04:01.247 Workload Type: copy_crc32c 00:04:01.247 CRC-32C seed: 0 00:04:01.247 Vector size: 4096 bytes 00:04:01.247 Transfer size: 8192 bytes 00:04:01.247 Vector count 2 00:04:01.247 Module: software 00:04:01.247 Queue depth: 32 00:04:01.247 Allocate depth: 32 00:04:01.247 # threads/core: 1 00:04:01.247 Run time: 1 seconds 00:04:01.247 Verify: Yes 00:04:01.247 00:04:01.247 Running for 1 seconds... 00:04:01.247 00:04:01.247 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:01.247 ------------------------------------------------------------------------------------ 00:04:01.247 0,0 783072/s 6117 MiB/s 0 0 00:04:01.247 ==================================================================================== 00:04:01.247 Total 783072/s 6117 MiB/s 0 0' 00:04:01.247 20:41:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:01.247 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.247 20:41:52 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.c7O7XJ -t 1 -w copy_crc32c -y -C 2 00:04:01.247 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.247 [2024-04-16 20:41:52.315909] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:04:01.247 [2024-04-16 20:41:52.316259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:01.817 EAL: TSC is not safe to use in SMP mode 00:04:01.817 EAL: TSC is not invariant 00:04:01.817 [2024-04-16 20:41:52.748466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.817 [2024-04-16 20:41:52.837108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.817 20:41:52 -- accel/accel.sh@12 -- # build_accel_config 00:04:01.817 20:41:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:01.817 20:41:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:01.817 20:41:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:01.817 20:41:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:01.817 20:41:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:01.817 20:41:52 -- accel/accel.sh@41 -- # local IFS=, 00:04:01.817 20:41:52 -- accel/accel.sh@42 -- # jq -r . 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val=0x1 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val=0 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val='8192 bytes' 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.817 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.817 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.817 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val=software 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@23 -- # accel_module=software 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val=32 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val=32 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val=1 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val=Yes 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:01.818 20:41:52 -- accel/accel.sh@21 -- # val= 00:04:01.818 20:41:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # IFS=: 00:04:01.818 20:41:52 -- accel/accel.sh@20 -- # read -r var val 00:04:03.199 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.199 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.199 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.199 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.199 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.199 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.199 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.199 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.199 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.200 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.200 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.200 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.200 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.200 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.200 20:41:53 -- accel/accel.sh@21 -- # val= 00:04:03.200 20:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:04:03.200 20:41:53 -- accel/accel.sh@20 -- # IFS=: 00:04:03.200 20:41:53 -- accel/accel.sh@20 -- # read -r var val 00:04:03.200 20:41:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:03.200 20:41:53 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:03.200 20:41:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:03.200 00:04:03.200 real 0m3.340s 00:04:03.200 user 0m2.389s 
00:04:03.200 sys 0m0.958s 00:04:03.200 20:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.200 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:03.200 ************************************ 00:04:03.200 END TEST accel_copy_crc32c_C2 00:04:03.200 ************************************ 00:04:03.200 20:41:54 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:03.200 20:41:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:03.200 20:41:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.200 20:41:54 -- common/autotest_common.sh@10 -- # set +x 00:04:03.200 ************************************ 00:04:03.200 START TEST accel_dualcast 00:04:03.200 ************************************ 00:04:03.200 20:41:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:04:03.200 20:41:54 -- accel/accel.sh@16 -- # local accel_opc 00:04:03.200 20:41:54 -- accel/accel.sh@17 -- # local accel_module 00:04:03.200 20:41:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:04:03.200 20:41:54 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6uj59i -t 1 -w dualcast -y 00:04:03.200 [2024-04-16 20:41:54.043364] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:03.200 [2024-04-16 20:41:54.043711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:03.459 EAL: TSC is not safe to use in SMP mode 00:04:03.459 EAL: TSC is not invariant 00:04:03.459 [2024-04-16 20:41:54.473703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.459 [2024-04-16 20:41:54.566033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.459 20:41:54 -- accel/accel.sh@12 -- # build_accel_config 00:04:03.460 20:41:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:03.460 20:41:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:03.460 20:41:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:03.460 20:41:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:03.460 20:41:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:03.460 20:41:54 -- accel/accel.sh@41 -- # local IFS=, 00:04:03.460 20:41:54 -- accel/accel.sh@42 -- # jq -r . 00:04:04.839 20:41:55 -- accel/accel.sh@18 -- # out=' 00:04:04.839 SPDK Configuration: 00:04:04.839 Core mask: 0x1 00:04:04.839 00:04:04.839 Accel Perf Configuration: 00:04:04.839 Workload Type: dualcast 00:04:04.839 Transfer size: 4096 bytes 00:04:04.839 Vector count 1 00:04:04.839 Module: software 00:04:04.839 Queue depth: 32 00:04:04.839 Allocate depth: 32 00:04:04.839 # threads/core: 1 00:04:04.839 Run time: 1 seconds 00:04:04.839 Verify: Yes 00:04:04.839 00:04:04.839 Running for 1 seconds... 
00:04:04.839 00:04:04.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:04.839 ------------------------------------------------------------------------------------ 00:04:04.839 0,0 1688448/s 6595 MiB/s 0 0 00:04:04.839 ==================================================================================== 00:04:04.840 Total 1688448/s 6595 MiB/s 0 0' 00:04:04.840 20:41:55 -- accel/accel.sh@20 -- # IFS=: 00:04:04.840 20:41:55 -- accel/accel.sh@20 -- # read -r var val 00:04:04.840 20:41:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:04.840 20:41:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.UgOgcG -t 1 -w dualcast -y 00:04:04.840 [2024-04-16 20:41:55.712432] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:04.840 [2024-04-16 20:41:55.712789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:05.099 EAL: TSC is not safe to use in SMP mode 00:04:05.099 EAL: TSC is not invariant 00:04:05.099 [2024-04-16 20:41:56.147720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.360 [2024-04-16 20:41:56.238702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.360 20:41:56 -- accel/accel.sh@12 -- # build_accel_config 00:04:05.360 20:41:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:05.360 20:41:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:05.360 20:41:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:05.360 20:41:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:05.360 20:41:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:05.360 20:41:56 -- accel/accel.sh@41 -- # local IFS=, 00:04:05.360 20:41:56 -- accel/accel.sh@42 -- # jq -r . 
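(Aside: to reproduce a run such as the dualcast one above outside the Jenkins harness, the trace suggests the direct invocation below. The -c JSON config that accel.sh generates under /tmp is omitted here, on the assumption that the software module needs no extra configuration:)
  cd /usr/home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dualcast -y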
00:04:05.360 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.360 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.360 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=0x1 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=dualcast 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=software 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@23 -- # accel_module=software 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=32 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=32 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=1 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val=Yes 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:05.361 20:41:56 -- accel/accel.sh@21 -- # val= 00:04:05.361 20:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # IFS=: 00:04:05.361 20:41:56 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.297 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.297 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.297 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.297 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.297 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.297 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.297 20:41:57 -- accel/accel.sh@21 -- # val= 00:04:06.298 20:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:04:06.298 20:41:57 -- accel/accel.sh@20 -- # IFS=: 00:04:06.298 20:41:57 -- accel/accel.sh@20 -- # read -r var val 00:04:06.298 20:41:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:06.298 20:41:57 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:04:06.298 20:41:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:06.298 00:04:06.298 real 0m3.347s 00:04:06.298 user 0m2.427s 00:04:06.298 sys 0m0.933s 00:04:06.298 20:41:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.298 20:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:06.298 ************************************ 00:04:06.298 END TEST accel_dualcast 00:04:06.298 ************************************ 00:04:06.558 20:41:57 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:06.558 20:41:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:06.558 20:41:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.558 20:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:06.558 ************************************ 00:04:06.558 START TEST accel_compare 00:04:06.558 ************************************ 00:04:06.558 20:41:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:04:06.558 20:41:57 -- accel/accel.sh@16 -- # local accel_opc 00:04:06.558 20:41:57 -- accel/accel.sh@17 -- # local accel_module 00:04:06.558 20:41:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:04:06.558 20:41:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wGJ4ew -t 1 -w compare -y 00:04:06.558 [2024-04-16 20:41:57.440078] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:04:06.558 [2024-04-16 20:41:57.440429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:06.818 EAL: TSC is not safe to use in SMP mode 00:04:06.818 EAL: TSC is not invariant 00:04:06.818 [2024-04-16 20:41:57.868010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.077 [2024-04-16 20:41:57.958223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.077 20:41:57 -- accel/accel.sh@12 -- # build_accel_config 00:04:07.077 20:41:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:07.077 20:41:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:07.077 20:41:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:07.077 20:41:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:07.077 20:41:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:07.077 20:41:57 -- accel/accel.sh@41 -- # local IFS=, 00:04:07.077 20:41:57 -- accel/accel.sh@42 -- # jq -r . 00:04:08.019 20:41:59 -- accel/accel.sh@18 -- # out=' 00:04:08.019 SPDK Configuration: 00:04:08.019 Core mask: 0x1 00:04:08.019 00:04:08.019 Accel Perf Configuration: 00:04:08.019 Workload Type: compare 00:04:08.019 Transfer size: 4096 bytes 00:04:08.019 Vector count 1 00:04:08.019 Module: software 00:04:08.019 Queue depth: 32 00:04:08.019 Allocate depth: 32 00:04:08.019 # threads/core: 1 00:04:08.019 Run time: 1 seconds 00:04:08.019 Verify: Yes 00:04:08.019 00:04:08.019 Running for 1 seconds... 00:04:08.019 00:04:08.019 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:08.019 ------------------------------------------------------------------------------------ 00:04:08.019 0,0 3200544/s 12502 MiB/s 0 0 00:04:08.019 ==================================================================================== 00:04:08.019 Total 3200544/s 12502 MiB/s 0 0' 00:04:08.019 20:41:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:08.019 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.019 20:41:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pDmdbK -t 1 -w compare -y 00:04:08.019 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.019 [2024-04-16 20:41:59.103179] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:08.019 [2024-04-16 20:41:59.103600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:08.589 EAL: TSC is not safe to use in SMP mode 00:04:08.589 EAL: TSC is not invariant 00:04:08.589 [2024-04-16 20:41:59.529518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.589 [2024-04-16 20:41:59.618524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.589 20:41:59 -- accel/accel.sh@12 -- # build_accel_config 00:04:08.589 20:41:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:08.589 20:41:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:08.589 20:41:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:08.589 20:41:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:08.589 20:41:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:08.589 20:41:59 -- accel/accel.sh@41 -- # local IFS=, 00:04:08.589 20:41:59 -- accel/accel.sh@42 -- # jq -r . 
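(Aside: the START TEST / END TEST banners and the real/user/sys triplets in this log come from the run_test wrapper in autotest_common.sh, and the '[' 7 -le 1 ']' lines are one of its argument checks being traced. The following is a rough reconstruction for orientation, not the verbatim function:)
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # emits the real/user/sys triplet seen in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }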
00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=0x1 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=compare 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@24 -- # accel_opc=compare 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=software 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@23 -- # accel_module=software 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=32 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=32 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.589 20:41:59 -- accel/accel.sh@21 -- # val=1 00:04:08.589 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.589 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.590 20:41:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:08.590 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.590 20:41:59 -- accel/accel.sh@21 -- # val=Yes 00:04:08.590 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.590 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.590 20:41:59 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:08.590 20:41:59 -- accel/accel.sh@21 -- # val= 00:04:08.590 20:41:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # IFS=: 00:04:08.590 20:41:59 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@21 -- # val= 00:04:09.969 20:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # IFS=: 00:04:09.969 20:42:00 -- accel/accel.sh@20 -- # read -r var val 00:04:09.969 20:42:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:09.969 20:42:00 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:04:09.969 20:42:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:09.969 00:04:09.969 real 0m3.330s 00:04:09.969 user 0m2.392s 00:04:09.969 sys 0m0.951s 00:04:09.969 20:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.969 20:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.969 ************************************ 00:04:09.969 END TEST accel_compare 00:04:09.969 ************************************ 00:04:09.969 20:42:00 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:09.969 20:42:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:09.969 20:42:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.969 20:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.969 ************************************ 00:04:09.969 START TEST accel_xor 00:04:09.969 ************************************ 00:04:09.969 20:42:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:04:09.969 20:42:00 -- accel/accel.sh@16 -- # local accel_opc 00:04:09.969 20:42:00 -- accel/accel.sh@17 -- # local accel_module 00:04:09.969 20:42:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:04:09.969 20:42:00 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DVB0DP -t 1 -w xor -y 00:04:09.969 [2024-04-16 20:42:00.823224] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:04:09.969 [2024-04-16 20:42:00.823573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:10.228 EAL: TSC is not safe to use in SMP mode 00:04:10.228 EAL: TSC is not invariant 00:04:10.228 [2024-04-16 20:42:01.264101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.487 [2024-04-16 20:42:01.355150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.487 20:42:01 -- accel/accel.sh@12 -- # build_accel_config 00:04:10.487 20:42:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:10.487 20:42:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:10.487 20:42:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:10.487 20:42:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:10.487 20:42:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:10.487 20:42:01 -- accel/accel.sh@41 -- # local IFS=, 00:04:10.487 20:42:01 -- accel/accel.sh@42 -- # jq -r . 00:04:11.424 20:42:02 -- accel/accel.sh@18 -- # out=' 00:04:11.424 SPDK Configuration: 00:04:11.424 Core mask: 0x1 00:04:11.424 00:04:11.424 Accel Perf Configuration: 00:04:11.424 Workload Type: xor 00:04:11.424 Source buffers: 2 00:04:11.424 Transfer size: 4096 bytes 00:04:11.424 Vector count 1 00:04:11.424 Module: software 00:04:11.424 Queue depth: 32 00:04:11.424 Allocate depth: 32 00:04:11.424 # threads/core: 1 00:04:11.424 Run time: 1 seconds 00:04:11.424 Verify: Yes 00:04:11.424 00:04:11.424 Running for 1 seconds... 00:04:11.424 00:04:11.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:11.424 ------------------------------------------------------------------------------------ 00:04:11.424 0,0 2040128/s 7969 MiB/s 0 0 00:04:11.424 ==================================================================================== 00:04:11.424 Total 2040128/s 7969 MiB/s 0 0' 00:04:11.424 20:42:02 -- accel/accel.sh@20 -- # IFS=: 00:04:11.424 20:42:02 -- accel/accel.sh@20 -- # read -r var val 00:04:11.424 20:42:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:11.424 20:42:02 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CfhDl9 -t 1 -w xor -y 00:04:11.424 [2024-04-16 20:42:02.503413] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:11.424 [2024-04-16 20:42:02.503760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:11.994 EAL: TSC is not safe to use in SMP mode 00:04:11.994 EAL: TSC is not invariant 00:04:11.994 [2024-04-16 20:42:02.953523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.994 [2024-04-16 20:42:03.046958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.994 20:42:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:11.994 20:42:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:11.994 20:42:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:11.994 20:42:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:11.994 20:42:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:11.994 20:42:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:11.994 20:42:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:11.994 20:42:03 -- accel/accel.sh@42 -- # jq -r . 
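(Aside: the xor workload XORs its source buffers bytewise into one destination, which -y then verifies. Shrunk to single bytes purely for illustration; the real run operates on the 4096-byte buffers from the configuration above:)
  a=0xA5; b=0x3C                         # 1-byte stand-ins for the two source buffers
  printf 'dst = 0x%02X\n' $(( a ^ b ))   # prints: dst = 0x99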
00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=0x1 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=xor 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=2 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=software 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@23 -- # accel_module=software 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=32 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=32 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=1 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val=Yes 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:11.994 20:42:03 -- accel/accel.sh@21 -- # val= 00:04:11.994 20:42:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # IFS=: 00:04:11.994 20:42:03 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@21 -- # val= 00:04:13.375 20:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # IFS=: 00:04:13.375 20:42:04 -- accel/accel.sh@20 -- # read -r var val 00:04:13.375 20:42:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:13.375 20:42:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:04:13.375 20:42:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:13.375 00:04:13.375 real 0m3.380s 00:04:13.375 user 0m2.414s 00:04:13.375 sys 0m0.982s 00:04:13.375 20:42:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.375 20:42:04 -- common/autotest_common.sh@10 -- # set +x 00:04:13.375 ************************************ 00:04:13.375 END TEST accel_xor 00:04:13.375 ************************************ 00:04:13.375 20:42:04 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:13.375 20:42:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:13.375 20:42:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.375 20:42:04 -- common/autotest_common.sh@10 -- # set +x 00:04:13.375 ************************************ 00:04:13.375 START TEST accel_xor 00:04:13.375 ************************************ 00:04:13.375 20:42:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:04:13.375 20:42:04 -- accel/accel.sh@16 -- # local accel_opc 00:04:13.375 20:42:04 -- accel/accel.sh@17 -- # local accel_module 00:04:13.375 20:42:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:04:13.375 20:42:04 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.05vJXe -t 1 -w xor -y -x 3 00:04:13.375 [2024-04-16 20:42:04.253381] Starting SPDK v24.01.1-pre 
git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:13.375 [2024-04-16 20:42:04.253747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:13.634 EAL: TSC is not safe to use in SMP mode 00:04:13.634 EAL: TSC is not invariant 00:04:13.634 [2024-04-16 20:42:04.684321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.894 [2024-04-16 20:42:04.777013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.894 20:42:04 -- accel/accel.sh@12 -- # build_accel_config 00:04:13.894 20:42:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:13.894 20:42:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:13.894 20:42:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:13.894 20:42:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:13.894 20:42:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:13.894 20:42:04 -- accel/accel.sh@41 -- # local IFS=, 00:04:13.894 20:42:04 -- accel/accel.sh@42 -- # jq -r . 00:04:14.833 20:42:05 -- accel/accel.sh@18 -- # out=' 00:04:14.833 SPDK Configuration: 00:04:14.833 Core mask: 0x1 00:04:14.833 00:04:14.833 Accel Perf Configuration: 00:04:14.833 Workload Type: xor 00:04:14.833 Source buffers: 3 00:04:14.833 Transfer size: 4096 bytes 00:04:14.833 Vector count 1 00:04:14.833 Module: software 00:04:14.833 Queue depth: 32 00:04:14.833 Allocate depth: 32 00:04:14.833 # threads/core: 1 00:04:14.833 Run time: 1 seconds 00:04:14.833 Verify: Yes 00:04:14.833 00:04:14.833 Running for 1 seconds... 00:04:14.833 00:04:14.833 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:14.833 ------------------------------------------------------------------------------------ 00:04:14.833 0,0 1717664/s 6709 MiB/s 0 0 00:04:14.833 ==================================================================================== 00:04:14.833 Total 1717664/s 6709 MiB/s 0 0' 00:04:14.833 20:42:05 -- accel/accel.sh@20 -- # IFS=: 00:04:14.833 20:42:05 -- accel/accel.sh@20 -- # read -r var val 00:04:14.833 20:42:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:14.833 20:42:05 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.u0RwJm -t 1 -w xor -y -x 3 00:04:14.833 [2024-04-16 20:42:05.925597] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:14.833 [2024-04-16 20:42:05.925952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:15.403 EAL: TSC is not safe to use in SMP mode 00:04:15.403 EAL: TSC is not invariant 00:04:15.403 [2024-04-16 20:42:06.350912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.403 [2024-04-16 20:42:06.440525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.403 20:42:06 -- accel/accel.sh@12 -- # build_accel_config 00:04:15.403 20:42:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:15.403 20:42:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:15.403 20:42:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:15.403 20:42:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:15.403 20:42:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:15.403 20:42:06 -- accel/accel.sh@41 -- # local IFS=, 00:04:15.403 20:42:06 -- accel/accel.sh@42 -- # jq -r . 
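The blocks of case "$var" in / IFS=: / read -r var val lines that repeat throughout this trace are accel.sh walking the accel_perf output captured in $out, one key: value pair at a time, to pick out the workload type and module. A minimal sketch of that parsing pattern (a reconstruction for illustration, not the verbatim accel.sh source; the names out, accel_opc, and accel_module mirror what the trace shows):

    # Split each captured line at the first ':' so the key lands in $var
    # and the remainder in $val, then dispatch on the key.
    out=$(printf '%s\n' 'Workload Type: xor' 'Module: software')
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val# } ;;    # strip the leading space
            *'Module'*)        accel_module=${val# } ;;
        esac
    done <<< "$out"
    echo "$accel_opc via $accel_module"   # -> xor via software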
00:04:15.403 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.403 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.403 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.403 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.403 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.403 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.403 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.403 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.403 20:42:06 -- accel/accel.sh@21 -- # val=0x1 00:04:15.403 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.403 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=xor 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=3 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=software 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@23 -- # accel_module=software 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=32 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=32 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=1 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val=Yes 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:15.404 20:42:06 -- accel/accel.sh@21 -- # val= 00:04:15.404 20:42:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # IFS=: 00:04:15.404 20:42:06 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@21 -- # val= 00:04:16.782 20:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # IFS=: 00:04:16.782 20:42:07 -- accel/accel.sh@20 -- # read -r var val 00:04:16.782 20:42:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:16.782 20:42:07 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:04:16.782 20:42:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:16.782 00:04:16.782 real 0m3.339s 00:04:16.782 user 0m2.414s 00:04:16.782 sys 0m0.939s 00:04:16.782 20:42:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.782 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.782 ************************************ 00:04:16.782 END TEST accel_xor 00:04:16.782 ************************************ 00:04:16.782 20:42:07 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:16.782 20:42:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:16.782 20:42:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.782 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.782 ************************************ 00:04:16.782 START TEST accel_dif_verify 00:04:16.782 ************************************ 00:04:16.782 20:42:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:04:16.782 20:42:07 -- accel/accel.sh@16 -- # local accel_opc 00:04:16.782 20:42:07 -- accel/accel.sh@17 -- # local accel_module 00:04:16.782 20:42:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:04:16.782 20:42:07 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uNZmzA -t 1 -w dif_verify 00:04:16.782 [2024-04-16 20:42:07.640352] Starting SPDK 
v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:16.782 [2024-04-16 20:42:07.640697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:17.041 EAL: TSC is not safe to use in SMP mode 00:04:17.041 EAL: TSC is not invariant 00:04:17.041 [2024-04-16 20:42:08.068250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.041 [2024-04-16 20:42:08.160088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.300 20:42:08 -- accel/accel.sh@12 -- # build_accel_config 00:04:17.300 20:42:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:17.300 20:42:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:17.300 20:42:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:17.300 20:42:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:17.300 20:42:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:17.300 20:42:08 -- accel/accel.sh@41 -- # local IFS=, 00:04:17.300 20:42:08 -- accel/accel.sh@42 -- # jq -r . 00:04:18.235 20:42:09 -- accel/accel.sh@18 -- # out=' 00:04:18.235 SPDK Configuration: 00:04:18.235 Core mask: 0x1 00:04:18.235 00:04:18.235 Accel Perf Configuration: 00:04:18.235 Workload Type: dif_verify 00:04:18.235 Vector size: 4096 bytes 00:04:18.235 Transfer size: 4096 bytes 00:04:18.235 Block size: 512 bytes 00:04:18.235 Metadata size: 8 bytes 00:04:18.235 Vector count 1 00:04:18.235 Module: software 00:04:18.235 Queue depth: 32 00:04:18.235 Allocate depth: 32 00:04:18.235 # threads/core: 1 00:04:18.235 Run time: 1 seconds 00:04:18.235 Verify: No 00:04:18.235 00:04:18.235 Running for 1 seconds... 00:04:18.235 00:04:18.235 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:18.235 ------------------------------------------------------------------------------------ 00:04:18.235 0,0 1396640/s 5455 MiB/s 0 0 00:04:18.235 ==================================================================================== 00:04:18.235 Total 1396640/s 5455 MiB/s 0 0' 00:04:18.235 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.235 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.235 20:42:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:18.235 20:42:09 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dGnQax -t 1 -w dif_verify 00:04:18.235 [2024-04-16 20:42:09.306853] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:18.235 [2024-04-16 20:42:09.307210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:18.802 EAL: TSC is not safe to use in SMP mode 00:04:18.802 EAL: TSC is not invariant 00:04:18.802 [2024-04-16 20:42:09.738951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.802 [2024-04-16 20:42:09.828430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.802 20:42:09 -- accel/accel.sh@12 -- # build_accel_config 00:04:18.802 20:42:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:18.802 20:42:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:18.802 20:42:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:18.802 20:42:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:18.802 20:42:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:18.802 20:42:09 -- accel/accel.sh@41 -- # local IFS=, 00:04:18.802 20:42:09 -- accel/accel.sh@42 -- # jq -r .
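For the dif_verify configuration above, the sizes fit together as follows: each 4096-byte transfer covers 4096 / 512 = 8 protected blocks, each block carries an 8-byte DIF tuple in its metadata, and the Bandwidth column is simply transfers/s times the 4096-byte transfer size. A quick shell sanity check using the values from this run:

    xfer=4096      # Transfer size in bytes
    blk=512        # Block size in bytes
    md=8           # Metadata (DIF) size per block in bytes
    tps=1396640    # Transfers per second, from the table above

    echo "blocks per transfer:    $((xfer / blk))"           # 8
    echo "DIF bytes per transfer: $((xfer / blk * md))"      # 64
    echo "bandwidth: $((tps * xfer / 1024 / 1024)) MiB/s"    # 5455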
00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val=0x1 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val=dif_verify 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.802 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.802 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.802 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val=software 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@23 -- # accel_module=software 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val=32 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val=32 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val=1 00:04:18.803 
20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val=No 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:18.803 20:42:09 -- accel/accel.sh@21 -- # val= 00:04:18.803 20:42:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # IFS=: 00:04:18.803 20:42:09 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@21 -- # val= 00:04:20.220 20:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # IFS=: 00:04:20.220 20:42:10 -- accel/accel.sh@20 -- # read -r var val 00:04:20.220 20:42:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:20.220 20:42:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:04:20.220 20:42:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:20.220 00:04:20.220 real 0m3.338s 00:04:20.220 user 0m2.410s 00:04:20.220 sys 0m0.939s 00:04:20.220 20:42:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.220 20:42:10 -- common/autotest_common.sh@10 -- # set +x 00:04:20.220 ************************************ 00:04:20.220 END TEST accel_dif_verify 00:04:20.220 ************************************ 00:04:20.220 20:42:11 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:04:20.220 20:42:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:20.220 20:42:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.221 20:42:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.221 ************************************ 00:04:20.221 START TEST accel_dif_generate 00:04:20.221 
************************************ 00:04:20.221 20:42:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:04:20.221 20:42:11 -- accel/accel.sh@16 -- # local accel_opc 00:04:20.221 20:42:11 -- accel/accel.sh@17 -- # local accel_module 00:04:20.221 20:42:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:04:20.221 20:42:11 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.172rVY -t 1 -w dif_generate 00:04:20.221 [2024-04-16 20:42:11.035535] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:20.221 [2024-04-16 20:42:11.035883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:20.479 EAL: TSC is not safe to use in SMP mode 00:04:20.479 EAL: TSC is not invariant 00:04:20.479 [2024-04-16 20:42:11.462728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.479 [2024-04-16 20:42:11.553649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.479 20:42:11 -- accel/accel.sh@12 -- # build_accel_config 00:04:20.479 20:42:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:20.479 20:42:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:20.479 20:42:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:20.479 20:42:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:20.479 20:42:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:20.479 20:42:11 -- accel/accel.sh@41 -- # local IFS=, 00:04:20.479 20:42:11 -- accel/accel.sh@42 -- # jq -r . 00:04:21.857 20:42:12 -- accel/accel.sh@18 -- # out=' 00:04:21.858 SPDK Configuration: 00:04:21.858 Core mask: 0x1 00:04:21.858 00:04:21.858 Accel Perf Configuration: 00:04:21.858 Workload Type: dif_generate 00:04:21.858 Vector size: 4096 bytes 00:04:21.858 Transfer size: 4096 bytes 00:04:21.858 Block size: 512 bytes 00:04:21.858 Metadata size: 8 bytes 00:04:21.858 Vector count 1 00:04:21.858 Module: software 00:04:21.858 Queue depth: 32 00:04:21.858 Allocate depth: 32 00:04:21.858 # threads/core: 1 00:04:21.858 Run time: 1 seconds 00:04:21.858 Verify: No 00:04:21.858 00:04:21.858 Running for 1 seconds... 00:04:21.858 00:04:21.858 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:21.858 ------------------------------------------------------------------------------------ 00:04:21.858 0,0 1624192/s 6344 MiB/s 0 0 00:04:21.858 ==================================================================================== 00:04:21.858 Total 1624192/s 6344 MiB/s 0 0' 00:04:21.858 20:42:12 -- accel/accel.sh@20 -- # IFS=: 00:04:21.858 20:42:12 -- accel/accel.sh@20 -- # read -r var val 00:04:21.858 20:42:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:04:21.858 20:42:12 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.eoPY0T -t 1 -w dif_generate 00:04:21.858 [2024-04-16 20:42:12.693068] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:04:21.858 [2024-04-16 20:42:12.693366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:22.118 EAL: TSC is not safe to use in SMP mode 00:04:22.118 EAL: TSC is not invariant 00:04:22.118 [2024-04-16 20:42:13.121424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.118 [2024-04-16 20:42:13.210234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.118 20:42:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:22.118 20:42:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:22.118 20:42:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:22.118 20:42:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:22.118 20:42:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:22.118 20:42:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:22.118 20:42:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:22.118 20:42:13 -- accel/accel.sh@42 -- # jq -r . 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val=0x1 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val=dif_generate 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val='512 bytes' 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val='8 bytes' 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.118 20:42:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val=software 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@23 -- # accel_module=software 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.118 20:42:13 -- accel/accel.sh@21 -- # val=32 00:04:22.118 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.118 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val=32 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val=1 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val=No 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:22.377 20:42:13 -- accel/accel.sh@21 -- # val= 00:04:22.377 20:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # IFS=: 00:04:22.377 20:42:13 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.391 20:42:14 -- accel/accel.sh@21 -- # val= 00:04:23.391 20:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # IFS=: 00:04:23.391 20:42:14 -- accel/accel.sh@20 -- # read -r var val 00:04:23.392 20:42:14 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:04:23.392 20:42:14 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:04:23.392 20:42:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:23.392 00:04:23.392 real 0m3.327s 00:04:23.392 user 0m2.402s 00:04:23.392 sys 0m0.941s 00:04:23.392 20:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.392 20:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:23.392 ************************************ 00:04:23.392 END TEST accel_dif_generate 00:04:23.392 ************************************ 00:04:23.392 20:42:14 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:04:23.392 20:42:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:23.392 20:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.392 20:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:23.392 ************************************ 00:04:23.392 START TEST accel_dif_generate_copy 00:04:23.392 ************************************ 00:04:23.392 20:42:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:04:23.392 20:42:14 -- accel/accel.sh@16 -- # local accel_opc 00:04:23.392 20:42:14 -- accel/accel.sh@17 -- # local accel_module 00:04:23.392 20:42:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:04:23.392 20:42:14 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.YKn6LF -t 1 -w dif_generate_copy 00:04:23.392 [2024-04-16 20:42:14.418619] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:23.392 [2024-04-16 20:42:14.418972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:23.969 EAL: TSC is not safe to use in SMP mode 00:04:23.969 EAL: TSC is not invariant 00:04:23.969 [2024-04-16 20:42:14.853518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.969 [2024-04-16 20:42:14.945439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.969 20:42:14 -- accel/accel.sh@12 -- # build_accel_config 00:04:23.969 20:42:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:23.969 20:42:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:23.970 20:42:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:23.970 20:42:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:23.970 20:42:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:23.970 20:42:14 -- accel/accel.sh@41 -- # local IFS=, 00:04:23.970 20:42:14 -- accel/accel.sh@42 -- # jq -r . 00:04:25.351 20:42:16 -- accel/accel.sh@18 -- # out=' 00:04:25.351 SPDK Configuration: 00:04:25.351 Core mask: 0x1 00:04:25.351 00:04:25.351 Accel Perf Configuration: 00:04:25.351 Workload Type: dif_generate_copy 00:04:25.351 Vector size: 4096 bytes 00:04:25.351 Transfer size: 4096 bytes 00:04:25.351 Vector count 1 00:04:25.351 Module: software 00:04:25.351 Queue depth: 32 00:04:25.351 Allocate depth: 32 00:04:25.351 # threads/core: 1 00:04:25.351 Run time: 1 seconds 00:04:25.351 Verify: No 00:04:25.351 00:04:25.351 Running for 1 seconds... 
00:04:25.351 00:04:25.351 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:25.351 ------------------------------------------------------------------------------------ 00:04:25.351 0,0 1278848/s 4995 MiB/s 0 0 00:04:25.351 ==================================================================================== 00:04:25.351 Total 1278848/s 4995 MiB/s 0 0' 00:04:25.351 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.351 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.351 20:42:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:04:25.351 20:42:16 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FAkOOP -t 1 -w dif_generate_copy 00:04:25.351 [2024-04-16 20:42:16.085465] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:25.351 [2024-04-16 20:42:16.085585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:25.611 EAL: TSC is not safe to use in SMP mode 00:04:25.611 EAL: TSC is not invariant 00:04:25.611 [2024-04-16 20:42:16.501628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.611 [2024-04-16 20:42:16.581193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.611 20:42:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:25.611 20:42:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:25.611 20:42:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:25.611 20:42:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:25.611 20:42:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:25.611 20:42:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:25.611 20:42:16 -- accel/accel.sh@41 -- # local IFS=, 00:04:25.611 20:42:16 -- accel/accel.sh@42 -- # jq -r .
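Every accel_perf invocation in this trace has the same shape: the binary from build/examples, a throwaway -c JSON config under /tmp (assembled by build_accel_config and effectively empty in these runs, as the empty accel_json_cfg array above suggests), and the workload flags. Reproducing a run by hand should look roughly like the sketch below; the -q and -o values simply restate the defaults visible in the configuration dump, and exact flag spellings may vary across SPDK versions:

    # Standalone re-run of the dif_generate_copy workload from this trace.
    # -t = run time in seconds, -w = workload type,
    # -q = queue depth, -o = transfer size in bytes.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy -q 32 -o 4096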
00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=0x1 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=software 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@23 -- # accel_module=software 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=32 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=32 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=1 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val=No 
00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:25.611 20:42:16 -- accel/accel.sh@21 -- # val= 00:04:25.611 20:42:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # IFS=: 00:04:25.611 20:42:16 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@21 -- # val= 00:04:26.992 20:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # IFS=: 00:04:26.992 20:42:17 -- accel/accel.sh@20 -- # read -r var val 00:04:26.992 20:42:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:26.992 20:42:17 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:04:26.992 20:42:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:26.992 00:04:26.992 real 0m3.316s 00:04:26.992 user 0m2.382s 00:04:26.992 sys 0m0.945s 00:04:26.992 20:42:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.992 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.992 ************************************ 00:04:26.992 END TEST accel_dif_generate_copy 00:04:26.992 ************************************ 00:04:26.992 20:42:17 -- accel/accel.sh@107 -- # [[ y == y ]] 00:04:26.992 20:42:17 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:26.993 20:42:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:26.993 20:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.993 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.993 ************************************ 00:04:26.993 START TEST accel_comp 00:04:26.993 ************************************ 00:04:26.993 20:42:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:26.993 20:42:17 -- accel/accel.sh@16 -- # local accel_opc 00:04:26.993 20:42:17 -- accel/accel.sh@17 -- # local accel_module 00:04:26.993 20:42:17 -- accel/accel.sh@18 -- # accel_perf -t 
1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:26.993 20:42:17 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0SFG3x -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:26.993 [2024-04-16 20:42:17.796423] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:26.993 [2024-04-16 20:42:17.796828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:27.252 EAL: TSC is not safe to use in SMP mode 00:04:27.252 EAL: TSC is not invariant 00:04:27.252 [2024-04-16 20:42:18.232671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.252 [2024-04-16 20:42:18.321493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.252 20:42:18 -- accel/accel.sh@12 -- # build_accel_config 00:04:27.252 20:42:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:27.252 20:42:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:27.252 20:42:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:27.252 20:42:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:27.252 20:42:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:27.252 20:42:18 -- accel/accel.sh@41 -- # local IFS=, 00:04:27.252 20:42:18 -- accel/accel.sh@42 -- # jq -r . 00:04:28.633 20:42:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:28.633 00:04:28.633 SPDK Configuration: 00:04:28.633 Core mask: 0x1 00:04:28.633 00:04:28.633 Accel Perf Configuration: 00:04:28.633 Workload Type: compress 00:04:28.633 Transfer size: 4096 bytes 00:04:28.633 Vector count 1 00:04:28.633 Module: software 00:04:28.633 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:28.633 Queue depth: 32 00:04:28.633 Allocate depth: 32 00:04:28.633 # threads/core: 1 00:04:28.633 Run time: 1 seconds 00:04:28.633 Verify: No 00:04:28.633 00:04:28.633 Running for 1 seconds... 00:04:28.633 00:04:28.633 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:28.633 ------------------------------------------------------------------------------------ 00:04:28.633 0,0 65856/s 257 MiB/s 0 0 00:04:28.633 ==================================================================================== 00:04:28.633 Total 65856/s 257 MiB/s 0 0' 00:04:28.633 20:42:19 -- accel/accel.sh@20 -- # IFS=: 00:04:28.633 20:42:19 -- accel/accel.sh@20 -- # read -r var val 00:04:28.633 20:42:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:28.633 20:42:19 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.JNhJFk -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:28.633 [2024-04-16 20:42:19.470986] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
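The START TEST / END TEST banners and the real/user/sys triplets around each test come from the run_test helper in common/autotest_common.sh, which names a test, runs it under time, and fences its output. A stripped-down sketch of that wrapper's shape (an illustration of the pattern only, not SPDK's actual implementation):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage, mirroring the trace ($testdir is a stand-in for test/accel):
    # run_test accel_comp accel_test -t 1 -w compress -l "$testdir/bib"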
00:04:28.633 [2024-04-16 20:42:19.471360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:28.893 EAL: TSC is not safe to use in SMP mode 00:04:28.893 EAL: TSC is not invariant 00:04:28.893 [2024-04-16 20:42:19.901585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.893 [2024-04-16 20:42:19.993212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.893 20:42:19 -- accel/accel.sh@12 -- # build_accel_config 00:04:28.893 20:42:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:28.893 20:42:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:28.893 20:42:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:28.893 20:42:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:28.893 20:42:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:28.893 20:42:19 -- accel/accel.sh@41 -- # local IFS=, 00:04:28.893 20:42:19 -- accel/accel.sh@42 -- # jq -r . 00:04:28.893 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:28.893 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:28.893 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=0x1 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=compress 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@24 -- # accel_opc=compress 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=software 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@23 -- # accel_module=software 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=32 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=32 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=1 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val=No 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:29.152 20:42:20 -- accel/accel.sh@21 -- # val= 00:04:29.152 20:42:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # IFS=: 00:04:29.152 20:42:20 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@21 -- # val= 00:04:30.094 20:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # IFS=: 00:04:30.094 20:42:21 -- accel/accel.sh@20 -- # read -r var val 00:04:30.094 20:42:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:30.094 20:42:21 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:04:30.094 20:42:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:30.094 00:04:30.094 real 0m3.351s 
00:04:30.094 user 0m2.415s 00:04:30.094 sys 0m0.952s 00:04:30.094 20:42:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.094 20:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.094 ************************************ 00:04:30.094 END TEST accel_comp 00:04:30.094 ************************************ 00:04:30.094 20:42:21 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:30.094 20:42:21 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:30.094 20:42:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.094 20:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.094 ************************************ 00:04:30.094 START TEST accel_decomp 00:04:30.094 ************************************ 00:04:30.094 20:42:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:30.094 20:42:21 -- accel/accel.sh@16 -- # local accel_opc 00:04:30.094 20:42:21 -- accel/accel.sh@17 -- # local accel_module 00:04:30.094 20:42:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:30.094 20:42:21 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.qegC0X -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:30.094 [2024-04-16 20:42:21.207693] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:30.094 [2024-04-16 20:42:21.208042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:30.663 EAL: TSC is not safe to use in SMP mode 00:04:30.663 EAL: TSC is not invariant 00:04:30.663 [2024-04-16 20:42:21.637463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.663 [2024-04-16 20:42:21.727702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.663 20:42:21 -- accel/accel.sh@12 -- # build_accel_config 00:04:30.663 20:42:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:30.663 20:42:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:30.663 20:42:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:30.663 20:42:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:30.663 20:42:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:30.663 20:42:21 -- accel/accel.sh@41 -- # local IFS=, 00:04:30.663 20:42:21 -- accel/accel.sh@42 -- # jq -r . 00:04:32.063 20:42:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:32.063 00:04:32.063 SPDK Configuration: 00:04:32.063 Core mask: 0x1 00:04:32.063 00:04:32.063 Accel Perf Configuration: 00:04:32.063 Workload Type: decompress 00:04:32.063 Transfer size: 4096 bytes 00:04:32.063 Vector count 1 00:04:32.063 Module: software 00:04:32.063 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:32.063 Queue depth: 32 00:04:32.063 Allocate depth: 32 00:04:32.063 # threads/core: 1 00:04:32.063 Run time: 1 seconds 00:04:32.063 Verify: Yes 00:04:32.063 00:04:32.063 Running for 1 seconds... 
00:04:32.063 00:04:32.063 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:32.063 ------------------------------------------------------------------------------------ 00:04:32.063 0,0 90944/s 355 MiB/s 0 0 00:04:32.063 ==================================================================================== 00:04:32.063 Total 90944/s 355 MiB/s 0 0' 00:04:32.063 20:42:22 -- accel/accel.sh@20 -- # IFS=: 00:04:32.063 20:42:22 -- accel/accel.sh@20 -- # read -r var val 00:04:32.063 20:42:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:32.063 20:42:22 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fBu37A -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:32.063 [2024-04-16 20:42:22.873791] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:32.063 [2024-04-16 20:42:22.874143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:32.329 EAL: TSC is not safe to use in SMP mode 00:04:32.329 EAL: TSC is not invariant 00:04:32.329 [2024-04-16 20:42:23.304984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.329 [2024-04-16 20:42:23.395762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.330 20:42:23 -- accel/accel.sh@12 -- # build_accel_config 00:04:32.330 20:42:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:32.330 20:42:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:32.330 20:42:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:32.330 20:42:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:32.330 20:42:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:32.330 20:42:23 -- accel/accel.sh@41 -- # local IFS=, 00:04:32.330 20:42:23 -- accel/accel.sh@42 -- # jq -r .
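The same bandwidth arithmetic reconciles the decompress rows above, and because this run passes -y (Verify: Yes in the configuration dump), every decompressed buffer is checked, which is why Failed and Miscompares stay at 0:

    # 90944 transfers/s of 4096-byte buffers, expressed in MiB/s:
    echo "$((90944 * 4096 / 1024 / 1024)) MiB/s"   # -> 355, matching the Total row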
00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=0x1 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=decompress 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=software 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@23 -- # accel_module=software 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=32 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=32 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=1 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val=Yes 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:32.330 20:42:23 -- accel/accel.sh@21 -- # val= 00:04:32.330 20:42:23 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # IFS=: 00:04:32.330 20:42:23 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@21 -- # val= 00:04:33.709 20:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # IFS=: 00:04:33.709 20:42:24 -- accel/accel.sh@20 -- # read -r var val 00:04:33.709 20:42:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:33.709 20:42:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:33.709 20:42:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:33.709 00:04:33.709 real 0m3.342s 00:04:33.709 user 0m2.404s 00:04:33.709 sys 0m0.951s 00:04:33.709 20:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.709 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.709 ************************************ 00:04:33.709 END TEST accel_decomp 00:04:33.709 ************************************ 00:04:33.709 20:42:24 -- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:33.709 20:42:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:33.709 20:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.709 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.709 ************************************ 00:04:33.709 START TEST accel_decomp_full 00:04:33.709 ************************************ 00:04:33.709 20:42:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
-y -o 0 00:04:33.709 20:42:24 -- accel/accel.sh@16 -- # local accel_opc 00:04:33.709 20:42:24 -- accel/accel.sh@17 -- # local accel_module 00:04:33.709 20:42:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:33.709 20:42:24 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.UgKkWq -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:33.709 [2024-04-16 20:42:24.604460] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:33.709 [2024-04-16 20:42:24.604807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:33.968 EAL: TSC is not safe to use in SMP mode 00:04:33.968 EAL: TSC is not invariant 00:04:33.968 [2024-04-16 20:42:25.030340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.227 [2024-04-16 20:42:25.107861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.227 20:42:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:34.227 20:42:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:34.227 20:42:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:34.227 20:42:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:34.227 20:42:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:34.227 20:42:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:34.227 20:42:25 -- accel/accel.sh@41 -- # local IFS=, 00:04:34.227 20:42:25 -- accel/accel.sh@42 -- # jq -r . 00:04:35.164 20:42:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:35.164 00:04:35.164 SPDK Configuration: 00:04:35.164 Core mask: 0x1 00:04:35.164 00:04:35.164 Accel Perf Configuration: 00:04:35.164 Workload Type: decompress 00:04:35.164 Transfer size: 111250 bytes 00:04:35.164 Vector count 1 00:04:35.164 Module: software 00:04:35.164 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:35.164 Queue depth: 32 00:04:35.164 Allocate depth: 32 00:04:35.164 # threads/core: 1 00:04:35.164 Run time: 1 seconds 00:04:35.164 Verify: Yes 00:04:35.164 00:04:35.164 Running for 1 seconds... 00:04:35.164 00:04:35.164 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:35.164 ------------------------------------------------------------------------------------ 00:04:35.164 0,0 5248/s 556 MiB/s 0 0 00:04:35.165 ==================================================================================== 00:04:35.165 Total 5248/s 556 MiB/s 0 0' 00:04:35.165 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.165 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.165 20:42:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:35.165 20:42:26 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tXdaTV -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:35.165 [2024-04-16 20:42:26.269343] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:04:35.165 [2024-04-16 20:42:26.269687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:35.733 EAL: TSC is not safe to use in SMP mode 00:04:35.733 EAL: TSC is not invariant 00:04:35.733 [2024-04-16 20:42:26.697070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.733 [2024-04-16 20:42:26.786980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.733 20:42:26 -- accel/accel.sh@12 -- # build_accel_config 00:04:35.733 20:42:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:35.733 20:42:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:35.733 20:42:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:35.733 20:42:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:35.733 20:42:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:35.733 20:42:26 -- accel/accel.sh@41 -- # local IFS=, 00:04:35.733 20:42:26 -- accel/accel.sh@42 -- # jq -r . 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=0x1 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=decompress 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=software 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@23 -- # accel_module=software 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=32 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=32 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=1 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val=Yes 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:35.733 20:42:26 -- accel/accel.sh@21 -- # val= 00:04:35.733 20:42:26 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # IFS=: 00:04:35.733 20:42:26 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@21 -- # val= 00:04:37.110 20:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # IFS=: 00:04:37.110 20:42:27 -- accel/accel.sh@20 -- # read -r var val 00:04:37.110 20:42:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:37.110 20:42:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:37.110 20:42:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:37.110 00:04:37.110 real 0m3.343s 
00:04:37.110 user 0m2.397s 00:04:37.110 sys 0m0.954s 00:04:37.110 20:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.110 20:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:37.110 ************************************ 00:04:37.110 END TEST accel_decomp_full 00:04:37.110 ************************************ 00:04:37.110 20:42:27 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:37.110 20:42:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:37.110 20:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.110 20:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:37.110 ************************************ 00:04:37.110 START TEST accel_decomp_mcore 00:04:37.110 ************************************ 00:04:37.110 20:42:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:37.110 20:42:27 -- accel/accel.sh@16 -- # local accel_opc 00:04:37.110 20:42:27 -- accel/accel.sh@17 -- # local accel_module 00:04:37.110 20:42:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:37.110 20:42:27 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.cjMtk3 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:37.110 [2024-04-16 20:42:27.997142] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:37.110 [2024-04-16 20:42:27.997512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:37.368 EAL: TSC is not safe to use in SMP mode 00:04:37.368 EAL: TSC is not invariant 00:04:37.369 [2024-04-16 20:42:28.435026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.626 [2024-04-16 20:42:28.526463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.626 [2024-04-16 20:42:28.526763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.626 [2024-04-16 20:42:28.526608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.626 [2024-04-16 20:42:28.526765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.626 20:42:28 -- accel/accel.sh@12 -- # build_accel_config 00:04:37.626 20:42:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:37.626 20:42:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:37.626 20:42:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:37.626 20:42:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:37.626 20:42:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:37.626 20:42:28 -- accel/accel.sh@41 -- # local IFS=, 00:04:37.626 20:42:28 -- accel/accel.sh@42 -- # jq -r . 00:04:38.560 20:42:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:38.560 00:04:38.560 SPDK Configuration: 00:04:38.560 Core mask: 0xf 00:04:38.560 00:04:38.560 Accel Perf Configuration: 00:04:38.560 Workload Type: decompress 00:04:38.560 Transfer size: 4096 bytes 00:04:38.560 Vector count 1 00:04:38.560 Module: software 00:04:38.560 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:38.560 Queue depth: 32 00:04:38.560 Allocate depth: 32 00:04:38.560 # threads/core: 1 00:04:38.560 Run time: 1 seconds 00:04:38.560 Verify: Yes 00:04:38.560 00:04:38.560 Running for 1 seconds... 
00:04:38.560 00:04:38.560 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:38.560 ------------------------------------------------------------------------------------ 00:04:38.560 0,0 85952/s 158 MiB/s 0 0 00:04:38.560 3,0 85344/s 157 MiB/s 0 0 00:04:38.561 2,0 85280/s 157 MiB/s 0 0 00:04:38.561 1,0 85376/s 157 MiB/s 0 0 00:04:38.561 ==================================================================================== 00:04:38.561 Total 341952/s 1335 MiB/s 0 0' 00:04:38.561 20:42:29 -- accel/accel.sh@20 -- # IFS=: 00:04:38.561 20:42:29 -- accel/accel.sh@20 -- # read -r var val 00:04:38.561 20:42:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:38.561 20:42:29 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Tc0YMa -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:38.837 [2024-04-16 20:42:29.679037] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:38.837 [2024-04-16 20:42:29.679361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:39.096 EAL: TSC is not safe to use in SMP mode 00:04:39.096 EAL: TSC is not invariant 00:04:39.096 [2024-04-16 20:42:30.117922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.096 [2024-04-16 20:42:30.211119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.096 [2024-04-16 20:42:30.211370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.096 [2024-04-16 20:42:30.211246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.096 [2024-04-16 20:42:30.211375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.096 20:42:30 -- accel/accel.sh@12 -- # build_accel_config 00:04:39.355 20:42:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:39.355 20:42:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.355 20:42:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.355 20:42:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:39.355 20:42:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:39.355 20:42:30 -- accel/accel.sh@41 -- # local IFS=, 00:04:39.355 20:42:30 -- accel/accel.sh@42 -- # jq -r . 
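The -c 0xf in the EAL parameters above comes from the harness's -m 0xf core mask, which fans the same decompress workload out over four reactors. The pass can be reproduced outside the autotest wrapper by invoking accel_perf directly; a minimal sketch, assuming the build tree at the path shown in the log (the generated -c JSON accel config that the harness passes is omitted here):

    SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/accel_perf" -m 0xf -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y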
00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=0xf 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=decompress 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=software 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@23 -- # accel_module=software 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=32 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=32 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=1 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val=Yes 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:39.355 20:42:30 -- accel/accel.sh@21 -- # val= 00:04:39.355 20:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # IFS=: 00:04:39.355 20:42:30 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.293 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.293 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.293 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.294 20:42:31 -- accel/accel.sh@21 -- # val= 00:04:40.294 20:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.294 20:42:31 -- accel/accel.sh@20 -- # IFS=: 00:04:40.294 20:42:31 -- accel/accel.sh@20 -- # read -r var val 00:04:40.294 20:42:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:40.294 20:42:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:40.294 20:42:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:40.294 00:04:40.294 real 0m3.368s 00:04:40.294 user 0m8.627s 00:04:40.294 sys 0m1.010s 00:04:40.294 20:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.294 20:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:40.294 ************************************ 00:04:40.294 END TEST accel_decomp_mcore 
00:04:40.294 ************************************ 00:04:40.294 20:42:31 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:40.294 20:42:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:04:40.294 20:42:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.294 20:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:40.294 ************************************ 00:04:40.294 START TEST accel_decomp_full_mcore 00:04:40.294 ************************************ 00:04:40.294 20:42:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:40.294 20:42:31 -- accel/accel.sh@16 -- # local accel_opc 00:04:40.294 20:42:31 -- accel/accel.sh@17 -- # local accel_module 00:04:40.294 20:42:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:40.294 20:42:31 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fQh9Sl -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:40.553 [2024-04-16 20:42:31.414843] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:40.553 [2024-04-16 20:42:31.415194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:40.811 EAL: TSC is not safe to use in SMP mode 00:04:40.811 EAL: TSC is not invariant 00:04:40.811 [2024-04-16 20:42:31.849139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.070 [2024-04-16 20:42:31.942270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.070 [2024-04-16 20:42:31.942033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.070 [2024-04-16 20:42:31.942180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.070 [2024-04-16 20:42:31.942266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.070 20:42:31 -- accel/accel.sh@12 -- # build_accel_config 00:04:41.070 20:42:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:41.070 20:42:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:41.070 20:42:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:41.070 20:42:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:41.070 20:42:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:41.070 20:42:31 -- accel/accel.sh@41 -- # local IFS=, 00:04:41.070 20:42:31 -- accel/accel.sh@42 -- # jq -r . 00:04:42.005 20:42:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:42.005 00:04:42.005 SPDK Configuration: 00:04:42.005 Core mask: 0xf 00:04:42.005 00:04:42.005 Accel Perf Configuration: 00:04:42.005 Workload Type: decompress 00:04:42.005 Transfer size: 111250 bytes 00:04:42.005 Vector count 1 00:04:42.005 Module: software 00:04:42.005 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:42.005 Queue depth: 32 00:04:42.005 Allocate depth: 32 00:04:42.005 # threads/core: 1 00:04:42.005 Run time: 1 seconds 00:04:42.005 Verify: Yes 00:04:42.005 00:04:42.005 Running for 1 seconds... 
00:04:42.005 00:04:42.005 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:42.005 ------------------------------------------------------------------------------------ 00:04:42.005 0,0 5024/s 533 MiB/s 0 0 00:04:42.005 3,0 5024/s 533 MiB/s 0 0 00:04:42.005 2,0 4960/s 526 MiB/s 0 0 00:04:42.005 1,0 4992/s 529 MiB/s 0 0 00:04:42.005 ==================================================================================== 00:04:42.005 Total 20000/s 2121 MiB/s 0 0' 00:04:42.005 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.005 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.005 20:42:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:42.005 20:42:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7RrxkZ -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:42.005 [2024-04-16 20:42:33.100313] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:42.005 [2024-04-16 20:42:33.100657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:42.574 EAL: TSC is not safe to use in SMP mode 00:04:42.574 EAL: TSC is not invariant 00:04:42.574 [2024-04-16 20:42:33.538642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.574 [2024-04-16 20:42:33.632253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.574 [2024-04-16 20:42:33.632557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.574 [2024-04-16 20:42:33.632407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.574 [2024-04-16 20:42:33.632560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.574 20:42:33 -- accel/accel.sh@12 -- # build_accel_config 00:04:42.574 20:42:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:42.574 20:42:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.574 20:42:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.574 20:42:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:42.574 20:42:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:42.574 20:42:33 -- accel/accel.sh@41 -- # local IFS=, 00:04:42.574 20:42:33 -- accel/accel.sh@42 -- # jq -r . 
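With -o 0 the transfer size is evidently taken from the input file instead of defaulting to 4096 bytes, which is why the configuration dump above reports 111250-byte transfers. The Total row checks out the same way as before (sketch):

    # 20000 transfers/s x 111250 B across four cores, in MiB/s
    echo $((20000 * 111250 / 1048576))    # prints 2121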
00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=0xf 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=decompress 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=software 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@23 -- # accel_module=software 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=32 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=32 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=1 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # 
val='1 seconds' 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val=Yes 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:42.574 20:42:33 -- accel/accel.sh@21 -- # val= 00:04:42.574 20:42:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # IFS=: 00:04:42.574 20:42:33 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@21 -- # val= 00:04:43.952 20:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # IFS=: 00:04:43.952 20:42:34 -- accel/accel.sh@20 -- # read -r var val 00:04:43.952 20:42:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:43.952 20:42:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:43.952 20:42:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:43.952 00:04:43.952 real 0m3.387s 00:04:43.952 user 0m8.743s 00:04:43.952 sys 0m0.986s 00:04:43.952 20:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.952 20:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.952 ************************************ 00:04:43.952 END TEST 
accel_decomp_full_mcore 00:04:43.952 ************************************ 00:04:43.952 20:42:34 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:43.952 20:42:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:43.952 20:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.952 20:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.952 ************************************ 00:04:43.952 START TEST accel_decomp_mthread 00:04:43.952 ************************************ 00:04:43.952 20:42:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:43.952 20:42:34 -- accel/accel.sh@16 -- # local accel_opc 00:04:43.952 20:42:34 -- accel/accel.sh@17 -- # local accel_module 00:04:43.952 20:42:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:43.952 20:42:34 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.KqCt7h -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:43.952 [2024-04-16 20:42:34.859417] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:43.952 [2024-04-16 20:42:34.859772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:44.211 EAL: TSC is not safe to use in SMP mode 00:04:44.211 EAL: TSC is not invariant 00:04:44.211 [2024-04-16 20:42:35.284598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.470 [2024-04-16 20:42:35.364665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.470 20:42:35 -- accel/accel.sh@12 -- # build_accel_config 00:04:44.470 20:42:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:44.470 20:42:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:44.470 20:42:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:44.470 20:42:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:44.470 20:42:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:44.470 20:42:35 -- accel/accel.sh@41 -- # local IFS=, 00:04:44.470 20:42:35 -- accel/accel.sh@42 -- # jq -r . 00:04:45.409 20:42:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:45.409 00:04:45.409 SPDK Configuration: 00:04:45.409 Core mask: 0x1 00:04:45.409 00:04:45.409 Accel Perf Configuration: 00:04:45.409 Workload Type: decompress 00:04:45.409 Transfer size: 4096 bytes 00:04:45.409 Vector count 1 00:04:45.409 Module: software 00:04:45.409 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:45.409 Queue depth: 32 00:04:45.409 Allocate depth: 32 00:04:45.409 # threads/core: 2 00:04:45.409 Run time: 1 seconds 00:04:45.409 Verify: Yes 00:04:45.409 00:04:45.409 Running for 1 seconds... 
00:04:45.409 00:04:45.409 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:45.409 ------------------------------------------------------------------------------------ 00:04:45.409 0,1 44768/s 174 MiB/s 0 0 00:04:45.409 0,0 44640/s 174 MiB/s 0 0 00:04:45.409 ==================================================================================== 00:04:45.409 Total 89408/s 349 MiB/s 0 0' 00:04:45.409 20:42:36 -- accel/accel.sh@20 -- # IFS=: 00:04:45.409 20:42:36 -- accel/accel.sh@20 -- # read -r var val 00:04:45.409 20:42:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:45.409 20:42:36 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.APuDxi -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:45.409 [2024-04-16 20:42:36.519360] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:45.409 [2024-04-16 20:42:36.519730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:45.979 EAL: TSC is not safe to use in SMP mode 00:04:45.979 EAL: TSC is not invariant 00:04:45.979 [2024-04-16 20:42:36.945308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.979 [2024-04-16 20:42:37.034828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.979 20:42:37 -- accel/accel.sh@12 -- # build_accel_config 00:04:45.979 20:42:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:45.979 20:42:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.979 20:42:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.979 20:42:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:45.979 20:42:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:45.979 20:42:37 -- accel/accel.sh@41 -- # local IFS=, 00:04:45.979 20:42:37 -- accel/accel.sh@42 -- # jq -r . 
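Here -T 2 requests two worker threads on the one core in the mask, which is why the table above reports rows 0,0 and 0,1 rather than extra cores. A direct invocation would look like this sketch (same path assumption as before; the harness's generated config file is again omitted):

    SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2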
00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=0x1 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=decompress 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=software 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@23 -- # accel_module=software 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=32 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=32 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=2 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val='1 
seconds' 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val=Yes 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:45.979 20:42:37 -- accel/accel.sh@21 -- # val= 00:04:45.979 20:42:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # IFS=: 00:04:45.979 20:42:37 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@21 -- # val= 00:04:47.361 20:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # IFS=: 00:04:47.361 20:42:38 -- accel/accel.sh@20 -- # read -r var val 00:04:47.361 20:42:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:47.361 20:42:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:47.361 20:42:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:47.361 00:04:47.361 real 0m3.335s 00:04:47.361 user 0m2.390s 00:04:47.361 sys 0m0.960s 00:04:47.361 20:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.361 20:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:47.361 ************************************ 00:04:47.361 END TEST accel_decomp_mthread 00:04:47.361 ************************************ 00:04:47.361 20:42:38 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:47.361 20:42:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:04:47.361 20:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.361 20:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:47.361
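The final variant below combines the two options exercised separately above, full-size 111250-byte buffers (-o 0) and two threads per core (-T 2); as a sketch, the equivalent direct call would be:

    SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2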
************************************ 00:04:47.361 START TEST accel_decomp_full_mthread 00:04:47.361 ************************************ 00:04:47.361 20:42:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:47.361 20:42:38 -- accel/accel.sh@16 -- # local accel_opc 00:04:47.361 20:42:38 -- accel/accel.sh@17 -- # local accel_module 00:04:47.361 20:42:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:47.361 20:42:38 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6mGRCy -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:47.361 [2024-04-16 20:42:38.251293] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:47.361 [2024-04-16 20:42:38.251638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:47.623 EAL: TSC is not safe to use in SMP mode 00:04:47.623 EAL: TSC is not invariant 00:04:47.623 [2024-04-16 20:42:38.676800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.883 [2024-04-16 20:42:38.768376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.883 20:42:38 -- accel/accel.sh@12 -- # build_accel_config 00:04:47.883 20:42:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:47.883 20:42:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.883 20:42:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.883 20:42:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:47.883 20:42:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:47.883 20:42:38 -- accel/accel.sh@41 -- # local IFS=, 00:04:47.883 20:42:38 -- accel/accel.sh@42 -- # jq -r . 00:04:48.823 20:42:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:04:48.823 00:04:48.823 SPDK Configuration: 00:04:48.823 Core mask: 0x1 00:04:48.823 00:04:48.823 Accel Perf Configuration: 00:04:48.823 Workload Type: decompress 00:04:48.823 Transfer size: 111250 bytes 00:04:48.823 Vector count 1 00:04:48.823 Module: software 00:04:48.823 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:48.823 Queue depth: 32 00:04:48.823 Allocate depth: 32 00:04:48.823 # threads/core: 2 00:04:48.823 Run time: 1 seconds 00:04:48.823 Verify: Yes 00:04:48.823 00:04:48.823 Running for 1 seconds... 00:04:48.823 00:04:48.823 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:48.823 ------------------------------------------------------------------------------------ 00:04:48.823 0,1 2752/s 291 MiB/s 0 0 00:04:48.823 0,0 2688/s 285 MiB/s 0 0 00:04:48.823 ==================================================================================== 00:04:48.823 Total 5440/s 577 MiB/s 0 0' 00:04:48.823 20:42:39 -- accel/accel.sh@20 -- # IFS=: 00:04:48.823 20:42:39 -- accel/accel.sh@20 -- # read -r var val 00:04:48.823 20:42:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:48.823 20:42:39 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dcrfIZ -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:49.083 [2024-04-16 20:42:39.944553] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:04:49.083 [2024-04-16 20:42:39.944908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:49.342 EAL: TSC is not safe to use in SMP mode 00:04:49.342 EAL: TSC is not invariant 00:04:49.342 [2024-04-16 20:42:40.374075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.602 [2024-04-16 20:42:40.462443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.602 20:42:40 -- accel/accel.sh@12 -- # build_accel_config 00:04:49.602 20:42:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:49.602 20:42:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.602 20:42:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.602 20:42:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:49.602 20:42:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:49.602 20:42:40 -- accel/accel.sh@41 -- # local IFS=, 00:04:49.602 20:42:40 -- accel/accel.sh@42 -- # jq -r . 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val=0x1 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val=decompress 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val=software 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@23 -- # accel_module=software 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val=32 00:04:49.602 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.602 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.602 20:42:40 -- accel/accel.sh@21 -- # val=32 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.603 20:42:40 -- accel/accel.sh@21 -- # val=2 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.603 20:42:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.603 20:42:40 -- accel/accel.sh@21 -- # val=Yes 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.603 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:49.603 20:42:40 -- accel/accel.sh@21 -- # val= 00:04:49.603 20:42:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # IFS=: 00:04:49.603 20:42:40 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- accel/accel.sh@21 -- # val= 00:04:50.542 20:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # IFS=: 00:04:50.542 20:42:41 -- accel/accel.sh@20 -- # read -r var val 00:04:50.542 20:42:41 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:04:50.542 20:42:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:04:50.542 20:42:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:50.542 00:04:50.542 real 0m3.393s 00:04:50.542 user 0m2.440s 00:04:50.542 sys 0m0.956s 00:04:50.542 20:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.542 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.542 ************************************ 00:04:50.542 END TEST accel_deomp_full_mthread 00:04:50.542 ************************************ 00:04:50.802 20:42:41 -- accel/accel.sh@116 -- # [[ n == y ]] 00:04:50.802 20:42:41 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.zhunR3 00:04:50.802 20:42:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:50.802 20:42:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.802 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.802 ************************************ 00:04:50.802 START TEST accel_dif_functional_tests 00:04:50.802 ************************************ 00:04:50.802 20:42:41 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.zhunR3 00:04:50.803 [2024-04-16 20:42:41.700941] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:50.803 [2024-04-16 20:42:41.701166] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:51.062 EAL: TSC is not safe to use in SMP mode 00:04:51.062 EAL: TSC is not invariant 00:04:51.062 [2024-04-16 20:42:42.129616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:51.322 [2024-04-16 20:42:42.222997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.322 [2024-04-16 20:42:42.222850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.322 [2024-04-16 20:42:42.223001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.322 20:42:42 -- accel/accel.sh@129 -- # build_accel_config 00:04:51.322 20:42:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:51.322 20:42:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.322 20:42:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.323 20:42:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:51.323 20:42:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:51.323 20:42:42 -- accel/accel.sh@41 -- # local IFS=, 00:04:51.323 20:42:42 -- accel/accel.sh@42 -- # jq -r . 
00:04:51.323 00:04:51.323 00:04:51.323 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.323 http://cunit.sourceforge.net/ 00:04:51.323 00:04:51.323 00:04:51.323 Suite: accel_dif 00:04:51.323 Test: verify: DIF generated, GUARD check ...passed 00:04:51.323 Test: verify: DIF generated, APPTAG check ...passed 00:04:51.323 Test: verify: DIF generated, REFTAG check ...passed 00:04:51.323 Test: verify: DIF not generated, GUARD check ...passed 00:04:51.323 Test: verify: DIF not generated, APPTAG check ...passed 00:04:51.323 Test: verify: DIF not generated, REFTAG check ...passed 00:04:51.323 Test: verify: APPTAG correct, APPTAG check ...passed 00:04:51.323 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:04:51.323 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:04:51.323 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:04:51.323 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:04:51.323 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:04:51.323 Test: generate copy: DIF generated, GUARD check ...passed 00:04:51.323 Test: generate copy: DIF generated, APTTAG check ...[2024-04-16 20:42:42.249526] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:04:51.323 [2024-04-16 20:42:42.249569] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:04:51.323 [2024-04-16 20:42:42.249593] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:51.323 [2024-04-16 20:42:42.249613] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:51.323 [2024-04-16 20:42:42.249624] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:51.323 [2024-04-16 20:42:42.249643] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:51.323 [2024-04-16 20:42:42.249663] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:04:51.323 [2024-04-16 20:42:42.249720] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:04:51.323 passed 00:04:51.323 Test: generate copy: DIF generated, REFTAG check ...passed 00:04:51.323 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:04:51.323 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:04:51.323 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:04:51.323 Test: generate copy: iovecs-len validate ...passed 00:04:51.323 Test: generate copy: buffer alignment validate ...passed 00:04:51.323 00:04:51.323 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.323 suites 1 1 n/a 0 0 00:04:51.323 tests 20 20 20 0 0 00:04:51.323 asserts 204 204 204 0 n/a 00:04:51.323 00:04:51.323 Elapsed time = 0.008 seconds 00:04:51.323 [2024-04-16 20:42:42.249830] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
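For context on the suite above: T10 DIF protection information is an 8-byte field carrying a 16-bit Guard (a CRC over the block data), a 16-bit Application Tag, and a 32-bit Reference Tag. The "not generated" and "incorrect" cases deliberately corrupt one of those fields, so the dif.c *ERROR* lines interleaved here are the expected output of the negative-path checks; the Run Summary (20 of 20 tests passed) is the actual verdict. As a sketch, the suite can be re-run standalone; the -c argument is a harness-generated temp JSON config, so the path below is illustrative only:

/usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.zhunR3
echo "accel_dif_functional_tests exit status: $?"   # 0 means the CUnit suite passed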
00:04:51.323 00:04:51.323 real 0m0.699s 00:04:51.323 user 0m0.358s 00:04:51.323 sys 0m0.482s 00:04:51.323 20:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.323 20:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.323 ************************************ 00:04:51.323 END TEST accel_dif_functional_tests 00:04:51.323 ************************************ 00:04:51.323 00:04:51.323 real 1m11.713s 00:04:51.323 user 1m2.977s 00:04:51.323 sys 0m22.084s 00:04:51.323 20:42:42 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.323 20:42:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:51.323 20:42:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.323 20:42:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.323 20:42:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:51.323 20:42:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:51.323 20:42:42 -- accel/accel.sh@41 -- # local IFS=, 00:04:51.323 20:42:42 -- accel/accel.sh@42 -- # jq -r . 00:04:51.323 20:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.323 20:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.323 ************************************ 00:04:51.323 END TEST accel 00:04:51.323 ************************************ 00:04:51.583 20:42:42 -- spdk/autotest.sh@190 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:51.584 20:42:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.584 20:42:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.584 20:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.584 ************************************ 00:04:51.584 START TEST accel_rpc 00:04:51.584 ************************************ 00:04:51.584 20:42:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:51.584 * Looking for test storage... 
00:04:51.584 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:51.584 20:42:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.584 20:42:42 -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:04:51.584 20:42:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=46804 00:04:51.584 20:42:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 46804 00:04:51.584 20:42:42 -- common/autotest_common.sh@819 -- # '[' -z 46804 ']' 00:04:51.584 20:42:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.584 20:42:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.584 20:42:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.584 20:42:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.584 20:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.584 [2024-04-16 20:42:42.673834] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:51.584 [2024-04-16 20:42:42.674011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:52.153 EAL: TSC is not safe to use in SMP mode 00:04:52.153 EAL: TSC is not invariant 00:04:52.153 [2024-04-16 20:42:43.140675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.153 [2024-04-16 20:42:43.232305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.153 [2024-04-16 20:42:43.232377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.722 20:42:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.722 20:42:43 -- common/autotest_common.sh@852 -- # return 0 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:04:52.722 20:42:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.722 20:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 ************************************ 00:04:52.722 START TEST accel_assign_opcode 00:04:52.722 ************************************ 00:04:52.722 20:42:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:04:52.722 20:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 [2024-04-16 20:42:43.588616] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:04:52.722 20:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:04:52.722 20:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 [2024-04-16 20:42:43.600610] 
accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:04:52.722 20:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:04:52.722 20:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 20:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:04:52.722 20:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@42 -- # grep software 00:04:52.722 20:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:52.722 software 00:04:52.722 00:04:52.722 real 0m0.074s 00:04:52.722 user 0m0.025s 00:04:52.722 sys 0m0.000s 00:04:52.722 20:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.722 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 ************************************ 00:04:52.722 END TEST accel_assign_opcode 00:04:52.722 ************************************ 00:04:52.722 20:42:43 -- accel/accel_rpc.sh@55 -- # killprocess 46804 00:04:52.722 20:42:43 -- common/autotest_common.sh@926 -- # '[' -z 46804 ']' 00:04:52.722 20:42:43 -- common/autotest_common.sh@930 -- # kill -0 46804 00:04:52.722 20:42:43 -- common/autotest_common.sh@931 -- # uname 00:04:52.722 20:42:43 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:52.722 20:42:43 -- common/autotest_common.sh@934 -- # ps -c -o command 46804 00:04:52.722 20:42:43 -- common/autotest_common.sh@934 -- # tail -1 00:04:52.722 20:42:43 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:52.722 20:42:43 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:52.722 killing process with pid 46804 00:04:52.722 20:42:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46804' 00:04:52.722 20:42:43 -- common/autotest_common.sh@945 -- # kill 46804 00:04:52.722 20:42:43 -- common/autotest_common.sh@950 -- # wait 46804 00:04:52.981 00:04:52.981 real 0m1.431s 00:04:52.981 user 0m1.140s 00:04:52.981 sys 0m0.814s 00:04:52.981 20:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.981 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.981 ************************************ 00:04:52.981 END TEST accel_rpc 00:04:52.981 ************************************ 00:04:52.981 20:42:43 -- spdk/autotest.sh@191 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:52.981 20:42:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.981 20:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.981 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.981 ************************************ 00:04:52.981 START TEST app_cmdline 00:04:52.981 ************************************ 00:04:52.981 20:42:43 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:53.241 * Looking for test storage... 
00:04:53.241 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:53.241 20:42:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:53.241 20:42:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=46877 00:04:53.241 20:42:44 -- app/cmdline.sh@18 -- # waitforlisten 46877 00:04:53.241 20:42:44 -- common/autotest_common.sh@819 -- # '[' -z 46877 ']' 00:04:53.241 20:42:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.241 20:42:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.241 20:42:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.241 20:42:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.241 20:42:44 -- common/autotest_common.sh@10 -- # set +x 00:04:53.241 20:42:44 -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:53.241 [2024-04-16 20:42:44.144922] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:53.241 [2024-04-16 20:42:44.145270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:53.499 EAL: TSC is not safe to use in SMP mode 00:04:53.499 EAL: TSC is not invariant 00:04:53.499 [2024-04-16 20:42:44.570348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.757 [2024-04-16 20:42:44.660988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.757 [2024-04-16 20:42:44.661071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.016 20:42:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:54.016 20:42:45 -- common/autotest_common.sh@852 -- # return 0 00:04:54.016 20:42:45 -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:04:54.276 { 00:04:54.276 "version": "SPDK v24.01.1-pre git sha1 4b134b4ab", 00:04:54.276 "fields": { 00:04:54.276 "major": 24, 00:04:54.276 "minor": 1, 00:04:54.276 "patch": 1, 00:04:54.276 "suffix": "-pre", 00:04:54.276 "commit": "4b134b4ab" 00:04:54.276 } 00:04:54.276 } 00:04:54.276 20:42:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:04:54.276 20:42:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:54.276 20:42:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:54.276 20:42:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:54.276 20:42:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:54.276 20:42:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:54.276 20:42:45 -- app/cmdline.sh@26 -- # sort 00:04:54.276 20:42:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.276 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.276 20:42:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.276 20:42:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:54.276 20:42:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:54.276 20:42:45 -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.276 20:42:45 -- common/autotest_common.sh@640 -- # local es=0 00:04:54.276 
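The NOT env_dpdk_get_mem_stats call being traced here (continuing below through valid_exec_arg) is the negative half of this test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods must succeed and everything else must be rejected. A standalone sketch of the same gating, paths as logged, assuming the target has come up on the default /var/tmp/spdk.sock:

/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &

/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed; returns the version JSON shown above
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: code -32601, "Method not found"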
20:42:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.276 20:42:45 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:54.276 20:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:54.276 20:42:45 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:54.276 20:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:54.276 20:42:45 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:54.276 20:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:54.276 20:42:45 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:54.276 20:42:45 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:04:54.276 20:42:45 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.535 request: 00:04:54.535 { 00:04:54.535 "method": "env_dpdk_get_mem_stats", 00:04:54.535 "req_id": 1 00:04:54.535 } 00:04:54.535 Got JSON-RPC error response 00:04:54.535 response: 00:04:54.535 { 00:04:54.535 "code": -32601, 00:04:54.535 "message": "Method not found" 00:04:54.535 } 00:04:54.535 20:42:45 -- common/autotest_common.sh@643 -- # es=1 00:04:54.535 20:42:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:54.535 20:42:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:54.535 20:42:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:54.535 20:42:45 -- app/cmdline.sh@1 -- # killprocess 46877 00:04:54.535 20:42:45 -- common/autotest_common.sh@926 -- # '[' -z 46877 ']' 00:04:54.535 20:42:45 -- common/autotest_common.sh@930 -- # kill -0 46877 00:04:54.535 20:42:45 -- common/autotest_common.sh@931 -- # uname 00:04:54.535 20:42:45 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:54.535 20:42:45 -- common/autotest_common.sh@934 -- # ps -c -o command 46877 00:04:54.535 20:42:45 -- common/autotest_common.sh@934 -- # tail -1 00:04:54.535 20:42:45 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:54.535 20:42:45 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:54.535 killing process with pid 46877 00:04:54.535 20:42:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46877' 00:04:54.535 20:42:45 -- common/autotest_common.sh@945 -- # kill 46877 00:04:54.535 20:42:45 -- common/autotest_common.sh@950 -- # wait 46877 00:04:54.535 00:04:54.535 real 0m1.662s 00:04:54.535 user 0m1.852s 00:04:54.535 sys 0m0.647s 00:04:54.535 20:42:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.535 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.535 ************************************ 00:04:54.535 END TEST app_cmdline 00:04:54.535 ************************************ 00:04:54.803 20:42:45 -- spdk/autotest.sh@192 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:54.803 20:42:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.803 20:42:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.803 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.803 ************************************ 00:04:54.803 START TEST version 00:04:54.803 ************************************ 00:04:54.803 20:42:45 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:54.803 * Looking for test storage... 00:04:54.803 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:54.803 20:42:45 -- app/version.sh@17 -- # get_header_version major 00:04:54.803 20:42:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:54.803 20:42:45 -- app/version.sh@14 -- # tr -d '"' 00:04:54.803 20:42:45 -- app/version.sh@14 -- # cut -f2 00:04:54.803 20:42:45 -- app/version.sh@17 -- # major=24 00:04:54.803 20:42:45 -- app/version.sh@18 -- # get_header_version minor 00:04:54.803 20:42:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:54.803 20:42:45 -- app/version.sh@14 -- # tr -d '"' 00:04:54.803 20:42:45 -- app/version.sh@14 -- # cut -f2 00:04:54.803 20:42:45 -- app/version.sh@18 -- # minor=1 00:04:54.803 20:42:45 -- app/version.sh@19 -- # get_header_version patch 00:04:54.803 20:42:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:54.803 20:42:45 -- app/version.sh@14 -- # cut -f2 00:04:54.803 20:42:45 -- app/version.sh@14 -- # tr -d '"' 00:04:54.803 20:42:45 -- app/version.sh@19 -- # patch=1 00:04:54.803 20:42:45 -- app/version.sh@20 -- # get_header_version suffix 00:04:54.803 20:42:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:54.803 20:42:45 -- app/version.sh@14 -- # cut -f2 00:04:54.803 20:42:45 -- app/version.sh@14 -- # tr -d '"' 00:04:54.803 20:42:45 -- app/version.sh@20 -- # suffix=-pre 00:04:54.803 20:42:45 -- app/version.sh@22 -- # version=24.1 00:04:54.803 20:42:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:04:54.803 20:42:45 -- app/version.sh@25 -- # version=24.1.1 00:04:54.803 20:42:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:04:54.803 20:42:45 -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:04:54.803 20:42:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:55.080 20:42:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:04:55.080 20:42:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:04:55.080 00:04:55.080 real 0m0.241s 00:04:55.080 user 0m0.179s 00:04:55.080 sys 0m0.150s 00:04:55.080 20:42:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.080 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.080 ************************************ 00:04:55.080 END TEST version 00:04:55.080 ************************************ 00:04:55.080 20:42:45 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:04:55.080 20:42:45 -- spdk/autotest.sh@195 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:55.080 20:42:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.080 20:42:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.080 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.080 ************************************ 00:04:55.080 START TEST blockdev_general 00:04:55.080 ************************************ 00:04:55.080 20:42:45 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:55.080 * Looking for test storage... 00:04:55.080 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:04:55.080 20:42:46 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:55.080 20:42:46 -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.080 20:42:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:04:55.080 20:42:46 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:04:55.080 20:42:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:04:55.080 20:42:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:04:55.080 20:42:46 -- bdev/blockdev.sh@18 -- # : 00:04:55.080 20:42:46 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:04:55.080 20:42:46 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:04:55.080 20:42:46 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:04:55.080 20:42:46 -- bdev/blockdev.sh@672 -- # uname -s 00:04:55.080 20:42:46 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:04:55.080 20:42:46 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:04:55.080 20:42:46 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:04:55.080 20:42:46 -- bdev/blockdev.sh@681 -- # crypto_device= 00:04:55.080 20:42:46 -- bdev/blockdev.sh@682 -- # dek= 00:04:55.080 20:42:46 -- bdev/blockdev.sh@683 -- # env_ctx= 00:04:55.080 20:42:46 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:04:55.080 20:42:46 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:04:55.080 20:42:46 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:04:55.080 20:42:46 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:04:55.080 20:42:46 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:04:55.080 20:42:46 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=47002 00:04:55.080 20:42:46 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:04:55.080 20:42:46 -- bdev/blockdev.sh@47 -- # waitforlisten 47002 00:04:55.080 20:42:46 -- common/autotest_common.sh@819 -- # '[' -z 47002 ']' 00:04:55.080 20:42:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.080 20:42:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.080 20:42:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.080 20:42:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.080 20:42:46 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:04:55.080 20:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.080 [2024-04-16 20:42:46.161636] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
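As traced just above, spdk_tgt is started here with --wait-for-rpc, which parks the framework before subsystem initialization so that the bdev configuration RPCs which follow can be applied first; framework_start_init, seen earlier in the accel_rpc test, is the call that releases it. A minimal sketch of that startup handshake, assuming the default RPC socket:

/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
# ...apply configuration RPCs while initialization is held...
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init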
00:04:55.080 [2024-04-16 20:42:46.161984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:55.651 EAL: TSC is not safe to use in SMP mode 00:04:55.651 EAL: TSC is not invariant 00:04:55.651 [2024-04-16 20:42:46.598489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.651 [2024-04-16 20:42:46.690034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.651 [2024-04-16 20:42:46.690115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.910 20:42:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.910 20:42:47 -- common/autotest_common.sh@852 -- # return 0 00:04:55.910 20:42:47 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:04:55.910 20:42:47 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:04:55.910 20:42:47 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:04:55.910 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.910 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.170 [2024-04-16 20:42:47.076664] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:56.170 [2024-04-16 20:42:47.076715] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:56.170 00:04:56.170 [2024-04-16 20:42:47.084649] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:56.170 [2024-04-16 20:42:47.084664] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:56.170 00:04:56.170 Malloc0 00:04:56.170 Malloc1 00:04:56.170 Malloc2 00:04:56.170 Malloc3 00:04:56.170 Malloc4 00:04:56.170 Malloc5 00:04:56.170 Malloc6 00:04:56.170 Malloc7 00:04:56.170 Malloc8 00:04:56.170 Malloc9 00:04:56.170 [2024-04-16 20:42:47.172657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:56.170 [2024-04-16 20:42:47.172687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.170 [2024-04-16 20:42:47.172701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b78d700 00:04:56.170 [2024-04-16 20:42:47.172706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.170 [2024-04-16 20:42:47.172970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.170 [2024-04-16 20:42:47.172989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:56.170 TestPT 00:04:56.170 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.170 20:42:47 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:04:56.170 5000+0 records in 00:04:56.170 5000+0 records out 00:04:56.170 10240000 bytes transferred in 0.031091 secs (329359046 bytes/sec) 00:04:56.170 20:42:47 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:04:56.170 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.170 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.170 AIO0 00:04:56.170 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.170 20:42:47 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:04:56.170 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.170 20:42:47 -- common/autotest_common.sh@10 -- # set +x 
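A quick check on the dd transfer above: bs=2048 with count=5000 writes 2048 * 5000 = 10240000 bytes, exactly the byte count dd reports, and that file is then registered as the 2048-byte-block AIO0 bdev via bdev_aio_create. The equivalent standalone steps, paths as logged (rpc.py is what the rpc_cmd wrapper invokes):

dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
    /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048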
00:04:56.170 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.170 20:42:47 -- bdev/blockdev.sh@738 -- # cat 00:04:56.170 20:42:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:04:56.170 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.170 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.431 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.431 20:42:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:04:56.431 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.431 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.431 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.431 20:42:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:04:56.431 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.431 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.431 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.431 20:42:47 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:04:56.431 20:42:47 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:04:56.431 20:42:47 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:04:56.431 20:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.431 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.431 20:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.431 20:42:47 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:04:56.431 20:42:47 -- bdev/blockdev.sh@747 -- # jq -r .name 00:04:56.433 20:42:47 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e2096141-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2096141-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2d5b68a-faee-6459-98b9-a091e7ebd4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2d5b68a-faee-6459-98b9-a091e7ebd4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "20f82f24-803b-b253-ac6c-6bae8fadaf59"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "20f82f24-803b-b253-ac6c-6bae8fadaf59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3033d337-8fc7-1c57-a46a-4c6b94703ec6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3033d337-8fc7-1c57-a46a-4c6b94703ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e98efa15-4823-305e-b1fb-8fbdc8adf67f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e98efa15-4823-305e-b1fb-8fbdc8adf67f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "523e2494-77cb-ed5d-9955-91754f176c62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "523e2494-77cb-ed5d-9955-91754f176c62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a53dc61c-22e6-8455-9e9a-2d9b22e4187a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a53dc61c-22e6-8455-9e9a-2d9b22e4187a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "922bb871-a1b4-8857-9311-3ee9dbcdb0c0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "922bb871-a1b4-8857-9311-3ee9dbcdb0c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "20f789a9-25c1-5850-b432-da135bb0ab9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f789a9-25c1-5850-b432-da135bb0ab9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd78039d-eacd-035a-880b-970f03d55d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd78039d-eacd-035a-880b-970f03d55d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e216d69d-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e20e4314-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e20f7b69-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "e218077c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "e210b3d8-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e211ec56-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "e2193faa-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e21324da-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e2145d5a-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e223044c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e223044c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:04:56.433 20:42:47 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:04:56.433 20:42:47 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:04:56.433 20:42:47 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:04:56.433 20:42:47 -- bdev/blockdev.sh@752 -- # killprocess 47002 00:04:56.433 20:42:47 -- common/autotest_common.sh@926 -- # '[' -z 47002 ']' 00:04:56.433 20:42:47 -- common/autotest_common.sh@930 -- # kill -0 47002 00:04:56.433 20:42:47 -- common/autotest_common.sh@931 -- # uname 00:04:56.433 20:42:47 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:56.433 20:42:47 -- common/autotest_common.sh@934 -- # tail -1 00:04:56.433 20:42:47 -- common/autotest_common.sh@934 -- # ps -c -o command 47002 00:04:56.433 20:42:47 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:56.433 20:42:47 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:56.433 killing process with pid 47002 00:04:56.433 20:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47002' 00:04:56.433 20:42:47 -- common/autotest_common.sh@945 -- # kill 47002 00:04:56.433 20:42:47 -- common/autotest_common.sh@950 -- # wait 47002 00:04:56.693 20:42:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:04:56.693 20:42:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:04:56.693 
20:42:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:56.693 20:42:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.693 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.693 ************************************ 00:04:56.693 START TEST bdev_hello_world 00:04:56.693 ************************************ 00:04:56.693 20:42:47 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:04:56.693 [2024-04-16 20:42:47.776945] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:04:56.693 [2024-04-16 20:42:47.777181] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:57.262 EAL: TSC is not safe to use in SMP mode 00:04:57.262 EAL: TSC is not invariant 00:04:57.262 [2024-04-16 20:42:48.235309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.262 [2024-04-16 20:42:48.328243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.522 [2024-04-16 20:42:48.382551] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:57.522 [2024-04-16 20:42:48.382586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:57.522 [2024-04-16 20:42:48.390535] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:57.522 [2024-04-16 20:42:48.390551] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:57.522 [2024-04-16 20:42:48.398551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:57.522 [2024-04-16 20:42:48.398567] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:57.523 [2024-04-16 20:42:48.398574] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:57.523 [2024-04-16 20:42:48.446551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:57.523 [2024-04-16 20:42:48.446581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.523 [2024-04-16 20:42:48.446592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82af59800 00:04:57.523 [2024-04-16 20:42:48.446597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.523 [2024-04-16 20:42:48.446875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.523 [2024-04-16 20:42:48.446892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:57.523 [2024-04-16 20:42:48.547744] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:04:57.523 [2024-04-16 20:42:48.547771] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:04:57.523 [2024-04-16 20:42:48.547781] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:04:57.523 [2024-04-16 20:42:48.547791] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:04:57.523 [2024-04-16 20:42:48.547802] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:04:57.523 [2024-04-16 20:42:48.547807] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:04:57.523 [2024-04-16 20:42:48.547816] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:04:57.523 00:04:57.523 [2024-04-16 20:42:48.547822] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:04:57.782 00:04:57.782 real 0m0.968s 00:04:57.782 user 0m0.452s 00:04:57.782 sys 0m0.514s 00:04:57.782 20:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.782 20:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:57.782 ************************************ 00:04:57.782 END TEST bdev_hello_world 00:04:57.782 ************************************ 00:04:57.782 20:42:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:04:57.782 20:42:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:57.782 20:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.782 20:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:57.782 ************************************ 00:04:57.782 START TEST bdev_bounds 00:04:57.782 ************************************ 00:04:57.782 20:42:48 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:04:57.782 20:42:48 -- bdev/blockdev.sh@288 -- # bdevio_pid=47042 00:04:57.782 20:42:48 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.782 Process bdevio pid: 47042 00:04:57.782 20:42:48 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 47042' 00:04:57.782 20:42:48 -- bdev/blockdev.sh@291 -- # waitforlisten 47042 00:04:57.782 20:42:48 -- common/autotest_common.sh@819 -- # '[' -z 47042 ']' 00:04:57.782 20:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.782 20:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.782 20:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.782 20:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.782 20:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:57.782 20:42:48 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:04:57.782 [2024-04-16 20:42:48.798237] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:04:57.782 [2024-04-16 20:42:48.798602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:58.351 EAL: TSC is not safe to use in SMP mode 00:04:58.351 EAL: TSC is not invariant 00:04:58.351 [2024-04-16 20:42:49.232781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.351 [2024-04-16 20:42:49.325100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.351 [2024-04-16 20:42:49.324998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.351 [2024-04-16 20:42:49.325100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.351 [2024-04-16 20:42:49.380880] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:58.351 [2024-04-16 20:42:49.380912] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:04:58.351 [2024-04-16 20:42:49.388870] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:58.351 [2024-04-16 20:42:49.388884] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:04:58.351 [2024-04-16 20:42:49.396882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:58.351 [2024-04-16 20:42:49.396897] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:58.351 [2024-04-16 20:42:49.396903] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:58.351 [2024-04-16 20:42:49.444888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:58.351 [2024-04-16 20:42:49.444919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.351 [2024-04-16 20:42:49.444930] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e6c7800 00:04:58.351 [2024-04-16 20:42:49.444937] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.351 [2024-04-16 20:42:49.445228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.351 [2024-04-16 20:42:49.445250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:04:58.610 20:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.610 20:42:49 -- common/autotest_common.sh@852 -- # return 0 00:04:58.610 20:42:49 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:04:58.870 I/O targets: 00:04:58.870 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:04:58.870 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:04:58.870 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:04:58.870 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:04:58.870 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:04:58.870 raid0: 131072 blocks of 512 bytes (64 MiB) 00:04:58.870 concat0: 131072 blocks of 512 bytes (64 MiB) 00:04:58.870 raid1: 65536 blocks of 512 bytes (32 MiB) 00:04:58.870 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
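The I/O target list above is printed by bdevio itself; the bounds test reduces to the two steps visible in the trace. A sketch of running them outside the harness, assuming the same workspace paths:

  # Start bdevio in wait mode (-w) with 2048 MiB of memory against the generated config
  test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
  # Trigger the CUnit suites over the RPC socket once the target is listening
  test/bdev/bdevio/tests.py perform_tests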
00:04:58.870 00:04:58.870 00:04:58.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.870 http://cunit.sourceforge.net/ 00:04:58.870 00:04:58.870 00:04:58.870 Suite: bdevio tests on: AIO0 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:58.870 Test: blockdev write zeroes read split partial ...passed 00:04:58.870 Test: blockdev reset ...passed 00:04:58.870 Test: blockdev write read 8 blocks ...passed 00:04:58.870 Test: blockdev write read size > 128k ...passed 00:04:58.870 Test: blockdev write read invalid size ...passed 00:04:58.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:58.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:58.870 Test: blockdev write read max offset ...passed 00:04:58.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:58.870 Test: blockdev writev readv 8 blocks ...passed 00:04:58.870 Test: blockdev writev readv 30 x 1block ...passed 00:04:58.870 Test: blockdev writev readv block ...passed 00:04:58.870 Test: blockdev writev readv size > 128k ...passed 00:04:58.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:58.870 Test: blockdev comparev and writev ...passed 00:04:58.870 Test: blockdev nvme passthru rw ...passed 00:04:58.870 Test: blockdev nvme passthru vendor specific ...passed 00:04:58.870 Test: blockdev nvme admin passthru ...passed 00:04:58.870 Test: blockdev copy ...passed 00:04:58.870 Suite: bdevio tests on: raid1 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:58.870 Test: blockdev write zeroes read split partial ...passed 00:04:58.870 Test: blockdev reset ...passed 00:04:58.870 Test: blockdev write read 8 blocks ...passed 00:04:58.870 Test: blockdev write read size > 128k ...passed 00:04:58.870 Test: blockdev write read invalid size ...passed 00:04:58.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:58.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:58.870 Test: blockdev write read max offset ...passed 00:04:58.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:58.870 Test: blockdev writev readv 8 blocks ...passed 00:04:58.870 Test: blockdev writev readv 30 x 1block ...passed 00:04:58.870 Test: blockdev writev readv block ...passed 00:04:58.870 Test: blockdev writev readv size > 128k ...passed 00:04:58.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:58.870 Test: blockdev comparev and writev ...passed 00:04:58.870 Test: blockdev nvme passthru rw ...passed 00:04:58.870 Test: blockdev nvme passthru vendor specific ...passed 00:04:58.870 Test: blockdev nvme admin passthru ...passed 00:04:58.870 Test: blockdev copy ...passed 00:04:58.870 Suite: bdevio tests on: concat0 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:58.870 Test: blockdev write zeroes read split partial ...passed 00:04:58.870 Test: blockdev reset 
...passed 00:04:58.870 Test: blockdev write read 8 blocks ...passed 00:04:58.870 Test: blockdev write read size > 128k ...passed 00:04:58.870 Test: blockdev write read invalid size ...passed 00:04:58.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:58.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:58.870 Test: blockdev write read max offset ...passed 00:04:58.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:58.870 Test: blockdev writev readv 8 blocks ...passed 00:04:58.870 Test: blockdev writev readv 30 x 1block ...passed 00:04:58.870 Test: blockdev writev readv block ...passed 00:04:58.870 Test: blockdev writev readv size > 128k ...passed 00:04:58.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:58.870 Test: blockdev comparev and writev ...passed 00:04:58.870 Test: blockdev nvme passthru rw ...passed 00:04:58.870 Test: blockdev nvme passthru vendor specific ...passed 00:04:58.870 Test: blockdev nvme admin passthru ...passed 00:04:58.870 Test: blockdev copy ...passed 00:04:58.870 Suite: bdevio tests on: raid0 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:58.870 Test: blockdev write zeroes read split partial ...passed 00:04:58.870 Test: blockdev reset ...passed 00:04:58.870 Test: blockdev write read 8 blocks ...passed 00:04:58.870 Test: blockdev write read size > 128k ...passed 00:04:58.870 Test: blockdev write read invalid size ...passed 00:04:58.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:58.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:58.870 Test: blockdev write read max offset ...passed 00:04:58.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:58.870 Test: blockdev writev readv 8 blocks ...passed 00:04:58.870 Test: blockdev writev readv 30 x 1block ...passed 00:04:58.870 Test: blockdev writev readv block ...passed 00:04:58.870 Test: blockdev writev readv size > 128k ...passed 00:04:58.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:58.870 Test: blockdev comparev and writev ...passed 00:04:58.870 Test: blockdev nvme passthru rw ...passed 00:04:58.870 Test: blockdev nvme passthru vendor specific ...passed 00:04:58.870 Test: blockdev nvme admin passthru ...passed 00:04:58.870 Test: blockdev copy ...passed 00:04:58.870 Suite: bdevio tests on: TestPT 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:58.870 Test: blockdev write zeroes read split partial ...passed 00:04:58.870 Test: blockdev reset ...passed 00:04:58.870 Test: blockdev write read 8 blocks ...passed 00:04:58.870 Test: blockdev write read size > 128k ...passed 00:04:58.870 Test: blockdev write read invalid size ...passed 00:04:58.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:58.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:58.870 Test: blockdev write read max offset ...passed 00:04:58.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:58.870 Test: blockdev writev readv 8 blocks 
...passed 00:04:58.870 Test: blockdev writev readv 30 x 1block ...passed 00:04:58.870 Test: blockdev writev readv block ...passed 00:04:58.870 Test: blockdev writev readv size > 128k ...passed 00:04:58.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:58.870 Test: blockdev comparev and writev ...passed 00:04:58.870 Test: blockdev nvme passthru rw ...passed 00:04:58.870 Test: blockdev nvme passthru vendor specific ...passed 00:04:58.870 Test: blockdev nvme admin passthru ...passed 00:04:58.870 Test: blockdev copy ...passed 00:04:58.870 Suite: bdevio tests on: Malloc2p7 00:04:58.870 Test: blockdev write read block ...passed 00:04:58.870 Test: blockdev write zeroes read block ...passed 00:04:58.870 Test: blockdev write zeroes read no split ...passed 00:04:58.870 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 00:04:59.131 Suite: bdevio tests on: Malloc2p6 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 
00:04:59.131 Suite: bdevio tests on: Malloc2p5 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 00:04:59.131 Suite: bdevio tests on: Malloc2p4 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 00:04:59.131 Suite: bdevio tests on: Malloc2p3 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: 
blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 00:04:59.131 Suite: bdevio tests on: Malloc2p2 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.131 Test: blockdev write read max offset ...passed 00:04:59.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.131 Test: blockdev writev readv 8 blocks ...passed 00:04:59.131 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.131 Test: blockdev writev readv block ...passed 00:04:59.131 Test: blockdev writev readv size > 128k ...passed 00:04:59.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.131 Test: blockdev comparev and writev ...passed 00:04:59.131 Test: blockdev nvme passthru rw ...passed 00:04:59.131 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.131 Test: blockdev nvme admin passthru ...passed 00:04:59.131 Test: blockdev copy ...passed 00:04:59.131 Suite: bdevio tests on: Malloc2p1 00:04:59.131 Test: blockdev write read block ...passed 00:04:59.131 Test: blockdev write zeroes read block ...passed 00:04:59.131 Test: blockdev write zeroes read no split ...passed 00:04:59.131 Test: blockdev write zeroes read split ...passed 00:04:59.131 Test: blockdev write zeroes read split partial ...passed 00:04:59.131 Test: blockdev reset ...passed 00:04:59.131 Test: blockdev write read 8 blocks ...passed 00:04:59.131 Test: blockdev write read size > 128k ...passed 00:04:59.131 Test: blockdev write read invalid size ...passed 00:04:59.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.132 Test: blockdev write read max offset ...passed 00:04:59.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.132 Test: blockdev writev readv 8 blocks ...passed 00:04:59.132 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.132 Test: blockdev writev readv block ...passed 
00:04:59.132 Test: blockdev writev readv size > 128k ...passed 00:04:59.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.132 Test: blockdev comparev and writev ...passed 00:04:59.132 Test: blockdev nvme passthru rw ...passed 00:04:59.132 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.132 Test: blockdev nvme admin passthru ...passed 00:04:59.132 Test: blockdev copy ...passed 00:04:59.132 Suite: bdevio tests on: Malloc2p0 00:04:59.132 Test: blockdev write read block ...passed 00:04:59.132 Test: blockdev write zeroes read block ...passed 00:04:59.132 Test: blockdev write zeroes read no split ...passed 00:04:59.132 Test: blockdev write zeroes read split ...passed 00:04:59.132 Test: blockdev write zeroes read split partial ...passed 00:04:59.132 Test: blockdev reset ...passed 00:04:59.132 Test: blockdev write read 8 blocks ...passed 00:04:59.132 Test: blockdev write read size > 128k ...passed 00:04:59.132 Test: blockdev write read invalid size ...passed 00:04:59.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.132 Test: blockdev write read max offset ...passed 00:04:59.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.132 Test: blockdev writev readv 8 blocks ...passed 00:04:59.132 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.132 Test: blockdev writev readv block ...passed 00:04:59.132 Test: blockdev writev readv size > 128k ...passed 00:04:59.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.132 Test: blockdev comparev and writev ...passed 00:04:59.132 Test: blockdev nvme passthru rw ...passed 00:04:59.132 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.132 Test: blockdev nvme admin passthru ...passed 00:04:59.132 Test: blockdev copy ...passed 00:04:59.132 Suite: bdevio tests on: Malloc1p1 00:04:59.132 Test: blockdev write read block ...passed 00:04:59.132 Test: blockdev write zeroes read block ...passed 00:04:59.132 Test: blockdev write zeroes read no split ...passed 00:04:59.132 Test: blockdev write zeroes read split ...passed 00:04:59.132 Test: blockdev write zeroes read split partial ...passed 00:04:59.132 Test: blockdev reset ...passed 00:04:59.132 Test: blockdev write read 8 blocks ...passed 00:04:59.132 Test: blockdev write read size > 128k ...passed 00:04:59.132 Test: blockdev write read invalid size ...passed 00:04:59.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.132 Test: blockdev write read max offset ...passed 00:04:59.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.132 Test: blockdev writev readv 8 blocks ...passed 00:04:59.132 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.132 Test: blockdev writev readv block ...passed 00:04:59.132 Test: blockdev writev readv size > 128k ...passed 00:04:59.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.132 Test: blockdev comparev and writev ...passed 00:04:59.132 Test: blockdev nvme passthru rw ...passed 00:04:59.132 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.132 Test: blockdev nvme admin passthru ...passed 00:04:59.132 Test: blockdev copy ...passed 00:04:59.132 Suite: bdevio tests on: Malloc1p0 00:04:59.132 Test: blockdev write read block ...passed 00:04:59.132 Test: blockdev 
write zeroes read block ...passed 00:04:59.132 Test: blockdev write zeroes read no split ...passed 00:04:59.132 Test: blockdev write zeroes read split ...passed 00:04:59.132 Test: blockdev write zeroes read split partial ...passed 00:04:59.132 Test: blockdev reset ...passed 00:04:59.132 Test: blockdev write read 8 blocks ...passed 00:04:59.132 Test: blockdev write read size > 128k ...passed 00:04:59.132 Test: blockdev write read invalid size ...passed 00:04:59.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.132 Test: blockdev write read max offset ...passed 00:04:59.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.132 Test: blockdev writev readv 8 blocks ...passed 00:04:59.132 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.132 Test: blockdev writev readv block ...passed 00:04:59.132 Test: blockdev writev readv size > 128k ...passed 00:04:59.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.132 Test: blockdev comparev and writev ...passed 00:04:59.132 Test: blockdev nvme passthru rw ...passed 00:04:59.132 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.132 Test: blockdev nvme admin passthru ...passed 00:04:59.132 Test: blockdev copy ...passed 00:04:59.132 Suite: bdevio tests on: Malloc0 00:04:59.132 Test: blockdev write read block ...passed 00:04:59.132 Test: blockdev write zeroes read block ...passed 00:04:59.132 Test: blockdev write zeroes read no split ...passed 00:04:59.132 Test: blockdev write zeroes read split ...passed 00:04:59.132 Test: blockdev write zeroes read split partial ...passed 00:04:59.132 Test: blockdev reset ...passed 00:04:59.132 Test: blockdev write read 8 blocks ...passed 00:04:59.132 Test: blockdev write read size > 128k ...passed 00:04:59.132 Test: blockdev write read invalid size ...passed 00:04:59.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:04:59.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:04:59.132 Test: blockdev write read max offset ...passed 00:04:59.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:04:59.132 Test: blockdev writev readv 8 blocks ...passed 00:04:59.132 Test: blockdev writev readv 30 x 1block ...passed 00:04:59.132 Test: blockdev writev readv block ...passed 00:04:59.132 Test: blockdev writev readv size > 128k ...passed 00:04:59.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:04:59.132 Test: blockdev comparev and writev ...passed 00:04:59.132 Test: blockdev nvme passthru rw ...passed 00:04:59.132 Test: blockdev nvme passthru vendor specific ...passed 00:04:59.132 Test: blockdev nvme admin passthru ...passed 00:04:59.132 Test: blockdev copy ...passed 00:04:59.132 00:04:59.132 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.132 suites 16 16 n/a 0 0 00:04:59.132 tests 368 368 368 0 0 00:04:59.132 asserts 2224 2224 2224 0 n/a 00:04:59.132 00:04:59.132 Elapsed time = 0.539 seconds 00:04:59.132 0 00:04:59.132 20:42:50 -- bdev/blockdev.sh@293 -- # killprocess 47042 00:04:59.132 20:42:50 -- common/autotest_common.sh@926 -- # '[' -z 47042 ']' 00:04:59.132 20:42:50 -- common/autotest_common.sh@930 -- # kill -0 47042 00:04:59.132 20:42:50 -- common/autotest_common.sh@931 -- # uname 00:04:59.132 20:42:50 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:59.132 20:42:50 -- common/autotest_common.sh@934 
-- # ps -c -o command 47042 00:04:59.132 20:42:50 -- common/autotest_common.sh@934 -- # tail -1 00:04:59.132 20:42:50 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:04:59.132 20:42:50 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:04:59.132 killing process with pid 47042 00:04:59.132 20:42:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47042' 00:04:59.132 20:42:50 -- common/autotest_common.sh@945 -- # kill 47042 00:04:59.132 20:42:50 -- common/autotest_common.sh@950 -- # wait 47042 00:04:59.392 20:42:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:04:59.392 00:04:59.392 real 0m1.486s 00:04:59.392 user 0m2.873s 00:04:59.392 sys 0m0.649s 00:04:59.392 20:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.392 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 END TEST bdev_bounds 00:04:59.392 ************************************ 00:04:59.392 20:42:50 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.392 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 START TEST bdev_nbd 00:04:59.392 ************************************ 00:04:59.392 20:42:50 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:04:59.392 20:42:50 -- bdev/blockdev.sh@298 -- # uname -s 00:04:59.392 20:42:50 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:04:59.392 20:42:50 -- bdev/blockdev.sh@298 -- # return 0 00:04:59.392 00:04:59.392 real 0m0.007s 00:04:59.392 user 0m0.007s 00:04:59.392 sys 0m0.001s 00:04:59.392 20:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.392 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 END TEST bdev_nbd 00:04:59.392 ************************************ 00:04:59.392 20:42:50 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:04:59.392 20:42:50 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:04:59.392 20:42:50 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:04:59.392 20:42:50 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.392 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 START TEST bdev_fio 00:04:59.392 ************************************ 00:04:59.392 20:42:50 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:04:59.392 20:42:50 -- bdev/blockdev.sh@329 -- # local env_context 00:04:59.392 20:42:50 -- bdev/blockdev.sh@333 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:04:59.392 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:04:59.392 20:42:50 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 
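The trace that follows shows fio_config_gen filling in bdev.fio and a per-bdev loop appending one [job_<name>] stanza per bdev, after which the file is driven through the SPDK fio plugin. Condensed into a sketch (flags and paths as traced below; running it standalone outside the harness is an assumption):

  # One stanza per bdev, exactly as the echo loop below writes them
  printf '[job_%s]\nfilename=%s\n' Malloc0 Malloc0 >> test/bdev/bdev.fio
  # Drive the config through the spdk_bdev ioengine, mirroring the traced fio_params
  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 \
      --bs=4k --runtime=10 test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=test/bdev/bdev.json --spdk_mem=2048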
00:04:59.392 20:42:50 -- bdev/blockdev.sh@337 -- # echo '' 00:04:59.392 20:42:50 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:04:59.392 20:42:50 -- bdev/blockdev.sh@337 -- # env_context= 00:04:59.392 20:42:50 -- bdev/blockdev.sh@338 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:59.392 20:42:50 -- common/autotest_common.sh@1260 -- # local workload=verify 00:04:59.392 20:42:50 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:04:59.392 20:42:50 -- common/autotest_common.sh@1262 -- # local env_context= 00:04:59.392 20:42:50 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:04:59.392 20:42:50 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:04:59.392 20:42:50 -- common/autotest_common.sh@1280 -- # cat 00:04:59.392 20:42:50 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1293 -- # cat 00:04:59.392 20:42:50 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:04:59.392 20:42:50 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:04:59.651 20:42:50 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:04:59.651 20:42:50 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:04:59.651 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.651 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:04:59.651 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:04:59.651 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.651 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:04:59.651 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:04:59.651 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.651 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:04:59.651 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:04:59.651 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.651 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:04:59.651 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:04:59.651 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.652 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:04:59.652 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:04:59.652 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.652 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:04:59.652 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:04:59.652 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.652 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:04:59.652 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:04:59.652 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.652 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:04:59.652 20:42:50 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc2p4 00:04:59.652 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:04:59.911 20:42:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:04:59.911 20:42:50 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:04:59.911 20:42:50 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:04:59.911 20:42:50 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:59.911 20:42:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:04:59.911 20:42:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.911 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.911 ************************************ 00:04:59.911 START TEST bdev_fio_rw_verify 00:04:59.911 ************************************ 00:04:59.911 20:42:50 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:59.911 20:42:50 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:59.911 20:42:50 -- 
common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:04:59.911 20:42:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:04:59.911 20:42:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:04:59.911 20:42:50 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:59.911 20:42:50 -- common/autotest_common.sh@1320 -- # shift 00:04:59.911 20:42:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:04:59.911 20:42:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:59.911 20:42:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:59.911 20:42:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:04:59.911 20:42:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:04:59.911 20:42:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:04:59.911 20:42:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:04:59.912 20:42:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:59.912 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:04:59.912 fio-3.35 00:04:59.912 Starting 16 threads 00:05:00.479 EAL: TSC is not safe to use in SMP mode 00:05:00.479 EAL: TSC is not invariant 00:05:12.691 00:05:12.691 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102671: Tue Apr 16 20:43:01 2024 00:05:12.691 read: IOPS=273k, BW=1067MiB/s (1118MB/s)(10.4GiB/10003msec) 00:05:12.691 slat (nsec): min=215, max=118979k, avg=3417.09, stdev=359592.75 00:05:12.691 clat (nsec): min=686, max=228612k, avg=44003.39, stdev=1318688.46 00:05:12.691 lat (nsec): min=1575, max=228612k, avg=47420.48, stdev=1366858.98 00:05:12.691 clat percentiles (usec): 00:05:12.691 | 50.000th=[ 8], 99.000th=[ 775], 99.900th=[ 1057], 00:05:12.691 | 99.990th=[ 67634], 99.999th=[154141] 00:05:12.691 write: IOPS=461k, BW=1801MiB/s (1888MB/s)(17.4GiB/9905msec); 0 zone resets 00:05:12.691 slat (nsec): min=453, max=662771k, avg=18059.29, stdev=907502.11 00:05:12.691 clat (nsec): min=639, max=748410k, avg=92901.33, stdev=2284855.47 00:05:12.691 lat (usec): min=10, max=748419, avg=110.96, stdev=2458.39 00:05:12.691 clat percentiles (usec): 00:05:12.691 | 50.000th=[ 42], 99.000th=[ 742], 99.900th=[ 2507], 00:05:12.691 | 99.990th=[ 94897], 99.999th=[210764] 00:05:12.691 bw ( MiB/s): min= 651, max= 2906, per=99.30%, avg=1788.15, stdev=45.11, samples=298 00:05:12.691 iops : min=166886, max=744125, avg=457762.23, stdev=11549.40, samples=298 00:05:12.691 lat (nsec) : 750=0.01%, 1000=0.01% 00:05:12.691 lat (usec) : 2=0.09%, 4=13.88%, 10=18.60%, 20=18.98%, 50=22.36% 00:05:12.691 lat (usec) : 100=23.95%, 250=0.57%, 500=0.12%, 750=0.40%, 1000=0.89% 00:05:12.691 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.02% 00:05:12.691 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01% 00:05:12.691 cpu : usr=55.55%, sys=3.44%, ctx=883103, majf=0, minf=670 00:05:12.691 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:12.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:12.691 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:12.691 issued rwts: total=2731109,4566054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:05:12.691 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:12.691 00:05:12.691 Run status group 0 (all jobs): 00:05:12.691 READ: bw=1067MiB/s (1118MB/s), 1067MiB/s-1067MiB/s (1118MB/s-1118MB/s), io=10.4GiB (11.2GB), run=10003-10003msec 00:05:12.691 WRITE: bw=1801MiB/s (1888MB/s), 1801MiB/s-1801MiB/s (1888MB/s-1888MB/s), io=17.4GiB (18.7GB), run=9905-9905msec 00:05:12.691 00:05:12.691 real 0m11.715s 00:05:12.691 user 1m32.792s 00:05:12.691 sys 0m7.225s 00:05:12.691 20:43:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.691 20:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 ************************************ 00:05:12.691 END TEST bdev_fio_rw_verify 00:05:12.691 ************************************ 00:05:12.691 20:43:02 -- bdev/blockdev.sh@348 -- # rm -f 00:05:12.691 20:43:02 -- 
bdev/blockdev.sh@349 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:12.691 20:43:02 -- bdev/blockdev.sh@352 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:12.691 20:43:02 -- common/autotest_common.sh@1260 -- # local workload=trim 00:05:12.691 20:43:02 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:05:12.691 20:43:02 -- common/autotest_common.sh@1262 -- # local env_context= 00:05:12.691 20:43:02 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:05:12.691 20:43:02 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:12.691 20:43:02 -- common/autotest_common.sh@1280 -- # cat 00:05:12.691 20:43:02 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:05:12.691 20:43:02 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:05:12.691 20:43:02 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:12.692 20:43:02 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e2096141-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2096141-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2d5b68a-faee-6459-98b9-a091e7ebd4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2d5b68a-faee-6459-98b9-a091e7ebd4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "20f82f24-803b-b253-ac6c-6bae8fadaf59"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "20f82f24-803b-b253-ac6c-6bae8fadaf59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' 
' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3033d337-8fc7-1c57-a46a-4c6b94703ec6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3033d337-8fc7-1c57-a46a-4c6b94703ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e98efa15-4823-305e-b1fb-8fbdc8adf67f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e98efa15-4823-305e-b1fb-8fbdc8adf67f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "523e2494-77cb-ed5d-9955-91754f176c62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "523e2494-77cb-ed5d-9955-91754f176c62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a53dc61c-22e6-8455-9e9a-2d9b22e4187a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a53dc61c-22e6-8455-9e9a-2d9b22e4187a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "922bb871-a1b4-8857-9311-3ee9dbcdb0c0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"922bb871-a1b4-8857-9311-3ee9dbcdb0c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "20f789a9-25c1-5850-b432-da135bb0ab9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f789a9-25c1-5850-b432-da135bb0ab9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd78039d-eacd-035a-880b-970f03d55d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd78039d-eacd-035a-880b-970f03d55d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e216d69d-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e20e4314-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e20f7b69-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "e218077c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "e210b3d8-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e211ec56-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "e2193faa-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e21324da-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e2145d5a-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e223044c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e223044c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:05:12.692 20:43:02 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:05:12.692 Malloc1p0 00:05:12.692 Malloc1p1 00:05:12.692 Malloc2p0 00:05:12.692 Malloc2p1 00:05:12.692 Malloc2p2 00:05:12.692 Malloc2p3 00:05:12.692 Malloc2p4 00:05:12.692 Malloc2p5 00:05:12.692 Malloc2p6 00:05:12.692 Malloc2p7 00:05:12.692 TestPT 00:05:12.692 raid0 00:05:12.692 concat0 ]] 00:05:12.692 20:43:02 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e2096141-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2096141-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2d5b68a-faee-6459-98b9-a091e7ebd4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2d5b68a-faee-6459-98b9-a091e7ebd4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "20f82f24-803b-b253-ac6c-6bae8fadaf59"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "20f82f24-803b-b253-ac6c-6bae8fadaf59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3033d337-8fc7-1c57-a46a-4c6b94703ec6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3033d337-8fc7-1c57-a46a-4c6b94703ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e98efa15-4823-305e-b1fb-8fbdc8adf67f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e98efa15-4823-305e-b1fb-8fbdc8adf67f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "523e2494-77cb-ed5d-9955-91754f176c62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "523e2494-77cb-ed5d-9955-91754f176c62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' 
"a53dc61c-22e6-8455-9e9a-2d9b22e4187a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a53dc61c-22e6-8455-9e9a-2d9b22e4187a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "922bb871-a1b4-8857-9311-3ee9dbcdb0c0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "922bb871-a1b4-8857-9311-3ee9dbcdb0c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "64f1ac13-ef95-ca5f-a6d6-2440beaeacc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20c8ccb3-6ab5-8d5a-8afc-777b332e53e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "20f789a9-25c1-5850-b432-da135bb0ab9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f789a9-25c1-5850-b432-da135bb0ab9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd78039d-eacd-035a-880b-970f03d55d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd78039d-eacd-035a-880b-970f03d55d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e216d69d-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e216d69d-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e20e4314-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e20f7b69-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "e218077c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e218077c-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "e210b3d8-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e211ec56-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "e2193faa-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e2193faa-fc31-11ee-80f8-ef3e42bb1492",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e21324da-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e2145d5a-fc31-11ee-80f8-ef3e42bb1492",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e223044c-fc31-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e223044c-fc31-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 
20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:05:12.693 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:05:12.693 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.693 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:05:12.694 20:43:02 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:12.694 20:43:02 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:05:12.694 20:43:02 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:05:12.694 20:43:02 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:12.694 20:43:02 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:12.694 20:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.694 20:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.694 ************************************ 00:05:12.694 START TEST bdev_fio_trim 00:05:12.694 ************************************ 00:05:12.694 20:43:02 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:12.694 20:43:02 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:12.694 20:43:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:05:12.694 20:43:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:12.694 20:43:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:05:12.694 20:43:02 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:12.694 20:43:02 -- common/autotest_common.sh@1320 -- # shift 00:05:12.694 20:43:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:05:12.694 20:43:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:05:12.694 20:43:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:05:12.694 20:43:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:05:12.694 20:43:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:05:12.694 20:43:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:05:12.694 20:43:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:12.694 20:43:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:12.694 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc1p0: (g=0): rw=trimwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:12.694 fio-3.35 00:05:12.694 Starting 14 threads 00:05:12.694 EAL: TSC is not safe to use in SMP mode 00:05:12.694 EAL: TSC is not invariant 00:05:22.688 00:05:22.688 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102690: Tue Apr 16 20:43:13 2024 00:05:22.688 write: IOPS=2632k, BW=10.0GiB/s (10.8GB/s)(100GiB/10001msec); 0 zone resets 00:05:22.688 slat (nsec): min=209, max=1480.3M, avg=1234.94, stdev=389029.81 00:05:22.688 clat (nsec): min=1209, max=1504.2M, avg=14559.38, stdev=1206475.10 00:05:22.688 lat (nsec): min=1706, max=1504.2M, avg=15794.32, stdev=1267643.53 00:05:22.688 clat percentiles (usec): 00:05:22.688 | 50.000th=[ 6], 99.000th=[ 12], 99.900th=[ 955], 99.990th=[ 971], 00:05:22.688 | 99.999th=[94897] 00:05:22.688 bw ( MiB/s): min= 4230, max=16240, per=100.00%, avg=10812.89, stdev=300.26, samples=254 00:05:22.688 iops : min=1083055, max=4157630, avg=2768094.36, stdev=76866.33, samples=254 00:05:22.688 trim: IOPS=2632k, BW=10.0GiB/s (10.8GB/s)(100GiB/10001msec); 0 zone resets 00:05:22.688 slat (nsec): min=438, max=1504.2M, avg=1678.04, stdev=517368.12 00:05:22.688 clat (nsec): min=316, max=1504.2M, avg=10260.85, stdev=1100958.97 00:05:22.688 lat (nsec): min=1438, max=1504.2M, avg=11938.89, stdev=1229645.87 00:05:22.688 clat percentiles (usec): 00:05:22.688 | 50.000th=[ 7], 99.000th=[ 13], 99.900th=[ 25], 99.990th=[ 36], 00:05:22.688 | 99.999th=[94897] 00:05:22.688 bw ( MiB/s): min= 4230, max=16240, per=100.00%, avg=10812.89, stdev=300.26, samples=254 00:05:22.688 iops : min=1083053, max=4157647, avg=2768095.93, stdev=76866.34, samples=254 00:05:22.688 lat (nsec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:05:22.688 lat (usec) : 2=0.11%, 4=25.39%, 10=64.41%, 20=9.66%, 50=0.20% 00:05:22.688 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.17% 00:05:22.688 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:05:22.688 
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:05:22.688 cpu : usr=62.60%, sys=5.25%, ctx=1389346, majf=0, minf=0 00:05:22.688 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:22.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:22.688 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:22.688 issued rwts: total=0,26326807,26326814,0 short=0,0,0,0 dropped=0,0,0,0 00:05:22.688 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:22.688 00:05:22.688 Run status group 0 (all jobs): 00:05:22.688 WRITE: bw=10.0GiB/s (10.8GB/s), 10.0GiB/s-10.0GiB/s (10.8GB/s-10.8GB/s), io=100GiB (108GB), run=10001-10001msec 00:05:22.688 TRIM: bw=10.0GiB/s (10.8GB/s), 10.0GiB/s-10.0GiB/s (10.8GB/s-10.8GB/s), io=100GiB (108GB), run=10001-10001msec 00:05:23.270 00:05:23.270 real 0m11.703s 00:05:23.270 user 1m32.796s 00:05:23.270 sys 0m9.720s 00:05:23.270 20:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.270 20:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.270 ************************************ 00:05:23.270 END TEST bdev_fio_trim 00:05:23.270 ************************************ 00:05:23.270 20:43:14 -- bdev/blockdev.sh@366 -- # rm -f 00:05:23.270 20:43:14 -- bdev/blockdev.sh@367 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:23.270 /usr/home/vagrant/spdk_repo/spdk 00:05:23.270 20:43:14 -- bdev/blockdev.sh@368 -- # popd 00:05:23.270 20:43:14 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:05:23.270 00:05:23.270 real 0m23.982s 00:05:23.270 user 3m5.740s 00:05:23.270 sys 0m17.341s 00:05:23.270 20:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.270 20:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.270 ************************************ 00:05:23.270 END TEST bdev_fio 00:05:23.270 ************************************ 00:05:23.529 20:43:14 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:23.529 20:43:14 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:23.529 20:43:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:05:23.529 20:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.529 20:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.529 ************************************ 00:05:23.529 START TEST bdev_verify 00:05:23.529 ************************************ 00:05:23.530 20:43:14 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:23.530 [2024-04-16 20:43:14.420786] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
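Putting the trim stage together: fio_config_gen appended rw=trimwrite (workload == trim) and the loop above appended one [job_*] section per unmap-capable bdev, so the bdev.fio that fio just consumed plausibly looked like the following. Only rw=trimwrite and the job sections are confirmed by this log; the [global] keys shown are illustrative assumptions:

    # bdev.fio (reconstruction; [global] defaults are assumed, not shown in the log)
    [global]
    thread=1
    direct=1
    rw=trimwrite            # appended by autotest_common.sh@1308 for the trim workload
    [job_Malloc0]
    filename=Malloc0
    [job_Malloc1p0]
    filename=Malloc1p0
    # ...continues through job_concat0, matching the 14 threads fio started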
00:05:23.530 [2024-04-16 20:43:14.421114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:23.789 EAL: TSC is not safe to use in SMP mode 00:05:23.789 EAL: TSC is not invariant 00:05:23.789 [2024-04-16 20:43:14.862429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.048 [2024-04-16 20:43:14.954998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.048 [2024-04-16 20:43:14.954999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.048 [2024-04-16 20:43:15.010758] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:24.048 [2024-04-16 20:43:15.010809] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:24.048 [2024-04-16 20:43:15.018737] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:24.048 [2024-04-16 20:43:15.018756] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:24.048 [2024-04-16 20:43:15.026753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:24.048 [2024-04-16 20:43:15.026770] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:24.048 [2024-04-16 20:43:15.026775] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:24.048 [2024-04-16 20:43:15.074754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:24.048 [2024-04-16 20:43:15.074804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.048 [2024-04-16 20:43:15.074816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b6cf800 00:05:24.048 [2024-04-16 20:43:15.074822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.048 [2024-04-16 20:43:15.075132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.048 [2024-04-16 20:43:15.075155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:24.308 Running I/O for 5 seconds... 
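This verify pass drives the same bdev stack through bdevperf instead of fio, consuming a JSON bdev config via --json. A minimal stand-alone repro of the same shape, with the Malloc0 parameters taken from the dump earlier in the log (bdev_malloc_create is the standard SPDK RPC method name for this release line, but treat the snippet as a sketch rather than the exact CI configuration):

    # Minimal JSON config plus the bdevperf flags used above (run from an SPDK checkout)
    cat > /tmp/bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } }
    ] } ] }
    EOF
    ./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3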
00:05:29.581 00:05:29.581 Latency(us) 00:05:29.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:29.581 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x1000 00:05:29.581 Malloc0 : 5.02 12747.80 49.80 0.00 0.00 10028.80 124.95 20906.62 00:05:29.581 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x1000 length 0x1000 00:05:29.581 Malloc0 : 5.02 37.45 0.15 0.00 0.00 3414824.10 380.22 5030383.16 00:05:29.581 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x800 00:05:29.581 Malloc1p0 : 5.02 10630.56 41.53 0.00 0.00 12028.24 335.59 11824.23 00:05:29.581 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x800 length 0x800 00:05:29.581 Malloc1p0 : 5.02 11860.42 46.33 0.00 0.00 10781.24 340.95 16679.60 00:05:29.581 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x800 00:05:29.581 Malloc1p1 : 5.02 10630.31 41.52 0.00 0.00 12026.97 317.74 11881.36 00:05:29.581 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x800 length 0x800 00:05:29.581 Malloc1p1 : 5.02 11860.05 46.33 0.00 0.00 10780.66 323.10 17479.30 00:05:29.581 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p0 : 5.02 10630.08 41.52 0.00 0.00 12026.42 330.24 11881.36 00:05:29.581 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p0 : 5.02 11859.71 46.33 0.00 0.00 10779.40 332.02 18164.76 00:05:29.581 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p1 : 5.02 10629.84 41.52 0.00 0.00 12025.22 310.60 11652.87 00:05:29.581 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p1 : 5.02 11859.39 46.33 0.00 0.00 10777.94 312.39 18850.23 00:05:29.581 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p2 : 5.02 10629.59 41.52 0.00 0.00 12023.87 330.24 11253.01 00:05:29.581 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p2 : 5.02 11859.07 46.32 0.00 0.00 10776.93 319.53 19649.93 00:05:29.581 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p3 : 5.02 10629.36 41.52 0.00 0.00 12022.91 312.39 11195.89 00:05:29.581 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p3 : 5.02 11858.75 46.32 0.00 0.00 10776.02 319.53 20792.37 00:05:29.581 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p4 : 5.02 10629.14 41.52 0.00 0.00 12021.48 
314.17 11367.26 00:05:29.581 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p4 : 5.02 11858.47 46.32 0.00 0.00 10775.04 323.10 20792.37 00:05:29.581 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p5 : 5.02 10628.89 41.52 0.00 0.00 12019.69 319.53 11424.38 00:05:29.581 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p5 : 5.02 11858.17 46.32 0.00 0.00 10773.92 328.45 20106.91 00:05:29.581 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p6 : 5.02 10628.64 41.52 0.00 0.00 12018.73 323.10 11538.62 00:05:29.581 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p6 : 5.02 11857.84 46.32 0.00 0.00 10773.08 339.16 19649.93 00:05:29.581 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x200 00:05:29.581 Malloc2p7 : 5.02 10628.42 41.52 0.00 0.00 12018.11 319.53 11709.99 00:05:29.581 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x200 length 0x200 00:05:29.581 Malloc2p7 : 5.02 11857.56 46.32 0.00 0.00 10771.59 324.88 19192.96 00:05:29.581 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x1000 00:05:29.581 TestPT : 5.02 10616.17 41.47 0.00 0.00 12031.28 739.01 11767.11 00:05:29.581 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x1000 length 0x1000 00:05:29.581 TestPT : 5.02 4583.84 17.91 0.00 0.00 27859.62 978.21 56893.41 00:05:29.581 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x2000 00:05:29.581 raid0 : 5.02 10627.88 41.52 0.00 0.00 12014.32 332.02 11709.99 00:05:29.581 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x2000 length 0x2000 00:05:29.581 raid0 : 5.02 11856.80 46.32 0.00 0.00 10768.65 328.45 17822.03 00:05:29.581 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x2000 00:05:29.581 concat0 : 5.02 10627.65 41.51 0.00 0.00 12012.85 333.81 11709.99 00:05:29.581 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x2000 length 0x2000 00:05:29.581 concat0 : 5.02 11856.49 46.31 0.00 0.00 10767.85 328.45 17250.81 00:05:29.581 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x0 length 0x1000 00:05:29.581 raid1 : 5.02 10627.41 41.51 0.00 0.00 12012.01 387.36 11652.87 00:05:29.581 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x1000 length 0x1000 00:05:29.581 raid1 : 5.02 11856.19 46.31 0.00 0.00 10766.37 385.57 17022.33 00:05:29.581 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 
0x0 length 0x4e2 00:05:29.581 AIO0 : 5.15 703.21 2.75 0.00 0.00 179898.02 8796.77 286980.43 00:05:29.581 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:29.581 Verification LBA range: start 0x4e2 length 0x4e2 00:05:29.581 AIO0 : 5.15 701.33 2.74 0.00 0.00 180161.99 8054.19 294292.04 00:05:29.581 =================================================================================================================== 00:05:29.581 Total : 321726.46 1256.74 0.00 0.00 12719.06 124.95 5030383.16 00:05:29.581 ************************************ 00:05:29.581 END TEST bdev_verify 00:05:29.581 ************************************ 00:05:29.581 00:05:29.581 real 0m6.133s 00:05:29.581 user 0m10.762s 00:05:29.581 sys 0m0.576s 00:05:29.581 20:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.581 20:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.581 20:43:20 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:29.581 20:43:20 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:05:29.581 20:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.581 20:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.581 ************************************ 00:05:29.581 START TEST bdev_verify_big_io 00:05:29.581 ************************************ 00:05:29.582 20:43:20 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:29.582 [2024-04-16 20:43:20.601327] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
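The big-IO variant below reruns verify with -o 65536, which trips bdevperf's queue-depth clamp on the smaller bdevs; the caps it prints are consistent with half the number of 64 KiB chunks each bdev holds (an observed pattern in this log, not a documented formula):

    # Back-of-envelope check of the caps in the warnings that follow
    # (num_blocks and block_size come from the bdev dump earlier in the log)
    cap() { echo $(( $1 * $2 / 65536 / 2 )); }   # hypothetical helper, not SPDK code
    cap 8192 512    # Malloc2p* (4 MiB)   -> 32, matching "limited to 32"
    cap 5000 2048   # AIO0 (~9.8 MiB)     -> 78, matching "limited to 78"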
00:05:29.582 [2024-04-16 20:43:20.601688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:30.191 EAL: TSC is not safe to use in SMP mode 00:05:30.191 EAL: TSC is not invariant 00:05:30.191 [2024-04-16 20:43:21.040352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.191 [2024-04-16 20:43:21.132811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.191 [2024-04-16 20:43:21.132812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.191 [2024-04-16 20:43:21.188728] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:30.191 [2024-04-16 20:43:21.188785] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:30.191 [2024-04-16 20:43:21.196714] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:30.191 [2024-04-16 20:43:21.196737] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:30.191 [2024-04-16 20:43:21.204731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:30.191 [2024-04-16 20:43:21.204749] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:30.191 [2024-04-16 20:43:21.204755] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:30.192 [2024-04-16 20:43:21.252734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:30.192 [2024-04-16 20:43:21.252797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.192 [2024-04-16 20:43:21.252810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d83f800 00:05:30.192 [2024-04-16 20:43:21.252815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.192 [2024-04-16 20:43:21.253170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.192 [2024-04-16 20:43:21.253195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:30.455 [2024-04-16 20:43:21.353492] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353628] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353690] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353758] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353827] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353889] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:30.455 [2024-04-16 20:43:21.353977] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354103] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354208] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354307] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354408] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354509] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354606] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354697] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354801] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:30.456 [2024-04-16 20:43:21.354906] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
00:05:30.456 Running I/O for 5 seconds...
00:05:35.736
00:05:35.736 Latency(us)
00:05:35.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:35.736 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x100
00:05:35.736 Malloc0 : 5.06 4723.03 295.19 0.00 0.00 27005.77 2013.55 88653.19
00:05:35.736 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x100 length 0x100
00:05:35.736 Malloc0 : 5.05 4773.59 298.35 0.00 0.00 26714.36 1970.71 109217.08
00:05:35.736 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x80
00:05:35.736 Malloc1p0 : 5.06 2374.99 148.44 0.00 0.00 53679.07 3084.58 83169.49
00:05:35.736 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x80 length 0x80
00:05:35.736 Malloc1p0 : 5.06 3171.37 198.21 0.00 0.00 40167.67 3098.86 74943.94
00:05:35.736 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x80
00:05:35.736 Malloc1p1 : 5.07 1204.17 75.26 0.00 0.00 105749.83 2870.38 191015.64
00:05:35.736 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x80 length 0x80
00:05:35.736 Malloc1p1 : 5.07 1226.30 76.64 0.00 0.00 103856.86 2770.41 187359.84
00:05:35.736 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p0 : 5.06 798.68 49.92 0.00 0.00 39841.64 778.29 52552.15
00:05:35.736 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p0 : 5.05 812.00 50.75 0.00 0.00 39189.87 778.29 51638.20
00:05:35.736 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p1 : 5.06 798.63 49.91 0.00 0.00 39830.44 771.15 53009.12
00:05:35.736 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p1 : 5.05 811.95 50.75 0.00 0.00 39178.62 781.86 51866.69
00:05:35.736 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p2 : 5.06 798.59 49.91 0.00 0.00 39818.34 785.43 53466.10
00:05:35.736 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p2 : 5.05 811.91 50.74 0.00 0.00 39167.97 778.29 52095.17
00:05:35.736 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p3 : 5.06 798.54 49.91 0.00 0.00 39807.88 803.28 53923.08
00:05:35.736 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p3 : 5.06 811.86 50.74 0.00 0.00 39156.74 817.56 52323.66
00:05:35.736 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p4 : 5.06 798.50 49.91 0.00 0.00 39796.06 785.43 54380.05
00:05:35.736 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p4 : 5.06 811.81 50.74 0.00 0.00 39146.17 789.00 52780.64
00:05:35.736 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p5 : 5.06 798.46 49.90 0.00 0.00 39785.77 803.28 54837.03
00:05:35.736 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p5 : 5.06 811.76 50.74 0.00 0.00 39135.81 810.42 53009.12
00:05:35.736 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p6 : 5.06 798.41 49.90 0.00 0.00 39771.73 806.85 55294.00
00:05:35.736 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p6 : 5.06 811.72 50.73 0.00 0.00 39122.48 785.43 53466.10
00:05:35.736 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x20
00:05:35.736 Malloc2p7 : 5.06 798.37 49.90 0.00 0.00 39761.92 799.71 55750.98
00:05:35.736 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x20 length 0x20
00:05:35.736 Malloc2p7 : 5.06 811.67 50.73 0.00 0.00 39112.02 792.57 53923.08
00:05:35.736 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x100
00:05:35.736 TestPT : 5.13 1048.46 65.53 0.00 0.00 120310.41 11367.26 215692.30
00:05:35.736 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x100 length 0x100
00:05:35.736 TestPT : 5.25 24.77 1.55 0.00 0.00 5084472.66 2756.13 5176615.23
00:05:35.736 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x200
00:05:35.736 raid0 : 5.07 1204.06 75.25 0.00 0.00 105290.15 2770.41 190101.69
00:05:35.736 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x200 length 0x200
00:05:35.736 raid0 : 5.07 1232.64 77.04 0.00 0.00 102918.54 2798.97 187359.84
00:05:35.736 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x200
00:05:35.736 concat0 : 5.08 1211.20 75.70 0.00 0.00 104614.70 2741.85 191015.64
00:05:35.736 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x200 length 0x200
00:05:35.736 concat0 : 5.07 1232.59 77.04 0.00 0.00 102795.52 2841.81 188273.79
00:05:35.736 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x100
00:05:35.736 raid1 : 5.07 1211.95 75.75 0.00 0.00 104419.86 3213.11 191015.64
00:05:35.736 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x100 length 0x100
00:05:35.736 raid1 : 5.07 1232.53 77.03 0.00 0.00 102652.85 3241.67 188273.79
00:05:35.736 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x0 length 0x4e
00:05:35.736 AIO0 : 5.07 1189.62 74.35 0.00 0.00 64755.66 1742.22 110131.03
00:05:35.736 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:05:35.736 Verification LBA range: start 0x4e length 0x4e
00:05:35.736 AIO0 : 5.07 1211.98 75.75 0.00 0.00 63542.66 1613.69 108303.13
00:05:35.736 ===================================================================================================================
00:05:35.736 Total : 41156.12 2572.26 0.00 0.00 59443.39 771.15 5176615.23
00:05:35.736
00:05:35.736 real 0m6.250s
00:05:35.736 user 0m11.148s
00:05:35.736 sys 0m0.695s
00:05:35.736 20:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:35.736 20:43:26 -- common/autotest_common.sh@10 -- # set +x
00:05:35.736 ************************************
00:05:35.736 END TEST bdev_verify_big_io
00:05:35.736 ************************************
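Note: every test above and below runs through the harness's run_test helper, which produces the START/END banners and the per-test real/user/sys timing lines seen throughout this log. A rough bash sketch of that banner-and-timing pattern, reconstructed from what the log shows (the actual run_test in autotest_common.sh also manages xtrace state and exit codes):

```bash
# Hedged sketch of the wrapper whose output appears in this log; simplified.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines after the test body finishes
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
```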
00:05:35.996 20:43:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:35.996 20:43:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:05:35.996 20:43:26 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:35.996 20:43:26 -- common/autotest_common.sh@10 -- # set +x
00:05:35.996 ************************************
00:05:35.996 START TEST bdev_write_zeroes
00:05:35.996 ************************************
00:05:35.996 20:43:26 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:35.996 [2024-04-16 20:43:26.904346] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:05:35.996 [2024-04-16 20:43:26.904666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:36.255 EAL: TSC is not safe to use in SMP mode
00:05:36.255 EAL: TSC is not invariant
00:05:36.255 [2024-04-16 20:43:27.335719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.515 [2024-04-16 20:43:27.427260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.515 [2024-04-16 20:43:27.481622] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:36.515 [2024-04-16 20:43:27.481668] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:36.515 [2024-04-16 20:43:27.489613] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:36.515 [2024-04-16 20:43:27.489627] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:36.515 [2024-04-16 20:43:27.497628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:36.515 [2024-04-16 20:43:27.497642] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:36.515 [2024-04-16 20:43:27.497648] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:36.515 [2024-04-16 20:43:27.545631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:36.515 [2024-04-16 20:43:27.545697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:36.515 [2024-04-16 20:43:27.545708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ccc0800
00:05:36.515 [2024-04-16 20:43:27.545714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:36.515 [2024-04-16 20:43:27.546047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:36.515 [2024-04-16 20:43:27.546069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
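Note: the vbdev_passthru notices above show the TestPT bdev being assembled: creation is deferred until the base bdev Malloc3 arrives, after which the base is opened and claimed and the pt_bdev is registered. As a hedged illustration of how such a device is wired up by hand (the RPC name and flags below are an assumption about the SPDK RPC surface, not something shown in this log, which drives everything through the test's JSON config):

```bash
# Assumed direct RPC shape; this log only shows the resulting vbdev_passthru notices.
scripts/rpc.py bdev_malloc_create -b Malloc3 64 512      # hypothetical base bdev: 64 MiB, 512-byte blocks
scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT # passthru bdev over that base
```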
00:05:36.775 Running I/O for 1 seconds...
00:05:37.715
00:05:37.715 Latency(us)
00:05:37.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:37.715 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.715 Malloc0 : 1.01 37798.96 147.65 0.00 0.00 3385.62 143.70 6226.29
00:05:37.715 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.715 Malloc1p0 : 1.01 37795.16 147.64 0.00 0.00 3385.28 166.90 6083.48
00:05:37.715 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.715 Malloc1p1 : 1.01 37791.78 147.62 0.00 0.00 3384.15 184.75 5912.12
00:05:37.715 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.715 Malloc2p0 : 1.01 37787.61 147.61 0.00 0.00 3383.17 165.12 5797.87
00:05:37.715 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.715 Malloc2p1 : 1.01 37784.61 147.60 0.00 0.00 3382.46 156.19 5712.19
00:05:37.716 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p2 : 1.01 37778.82 147.57 0.00 0.00 3381.55 156.19 5626.51
00:05:37.716 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p3 : 1.01 37775.86 147.56 0.00 0.00 3380.85 155.30 5483.70
00:05:37.716 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p4 : 1.01 37772.89 147.55 0.00 0.00 3380.24 158.87 5369.46
00:05:37.716 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p5 : 1.01 37769.97 147.54 0.00 0.00 3379.09 153.52 5283.78
00:05:37.716 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p6 : 1.01 37767.15 147.53 0.00 0.00 3378.35 153.52 5169.53
00:05:37.716 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 Malloc2p7 : 1.01 37763.81 147.51 0.00 0.00 3377.82 152.62 5226.65
00:05:37.716 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 TestPT : 1.01 37760.36 147.50 0.00 0.00 3376.72 153.52 5140.97
00:05:37.716 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 raid0 : 1.01 37756.54 147.49 0.00 0.00 3375.65 199.93 5226.65
00:05:37.716 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 concat0 : 1.01 37752.79 147.47 0.00 0.00 3374.58 196.36 5112.41
00:05:37.716 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 raid1 : 1.01 37747.91 147.45 0.00 0.00 3373.37 373.08 5198.09
00:05:37.716 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:37.716 AIO0 : 1.09 1452.34 5.67 0.00 0.00 84143.77 642.62 336333.76
00:05:37.716 ===================================================================================================================
00:05:37.716 Total : 568056.58 2218.97 0.00 0.00 3604.35 143.70 336333.76
00:05:37.976
00:05:37.976 real 0m2.055s
00:05:37.976 user 0m1.490s
00:05:37.976 sys 0m0.454s
00:05:37.976 20:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:37.976 20:43:28 -- common/autotest_common.sh@10 -- # set +x
00:05:37.976 ************************************
00:05:37.976 END TEST bdev_write_zeroes
00:05:37.976 ************************************
00:05:37.976 20:43:28 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:37.976 20:43:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:05:37.976 20:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:37.976 20:43:29 -- common/autotest_common.sh@10 -- # set +x
00:05:37.976 ************************************
00:05:37.976 START TEST bdev_json_nonenclosed
00:05:37.976 ************************************
00:05:37.976 20:43:29 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:37.976 [2024-04-16 20:43:29.018382] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:05:37.976 [2024-04-16 20:43:29.018629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:38.545 EAL: TSC is not safe to use in SMP mode
00:05:38.545 EAL: TSC is not invariant
00:05:38.545 [2024-04-16 20:43:29.446711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.545 [2024-04-16 20:43:29.537795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.545 [2024-04-16 20:43:29.537883] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:05:38.545 [2024-04-16 20:43:29.537892] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:38.545
00:05:38.545 real 0m0.622s
00:05:38.545 user 0m0.148s
00:05:38.545 sys 0m0.472s
00:05:38.545 20:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:38.545 20:43:29 -- common/autotest_common.sh@10 -- # set +x
00:05:38.545 ************************************
00:05:38.545 END TEST bdev_json_nonenclosed
00:05:38.545 ************************************
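Note: the nonenclosed test feeds bdevperf a deliberately malformed JSON config and passes when it sees the "not enclosed in {}" error above; the companion nonarray test (next) expects the config to be rejected when "subsystems" is not an array. The actual contents of nonenclosed.json and nonarray.json are not shown in this log, so the following is a hypothetical illustration of the two failure shapes next to a valid skeleton:

```bash
# Hypothetical configs, written only to illustrate the two errors exercised here.
cat <<'EOF' > /tmp/nonenclosed.json    # top level is not an object -> "not enclosed in {}"
"subsystems": []
EOF
cat <<'EOF' > /tmp/nonarray.json       # "subsystems" is not an array -> "'subsystems' should be an array"
{ "subsystems": {} }
EOF
cat <<'EOF' > /tmp/valid.json          # minimal valid skeleton
{ "subsystems": [] }
EOF
```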
00:05:38.804 20:43:29 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:38.805 20:43:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:05:38.805 20:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:38.805 20:43:29 -- common/autotest_common.sh@10 -- # set +x
00:05:38.805 ************************************
00:05:38.805 START TEST bdev_json_nonarray
00:05:38.805 ************************************
00:05:38.805 20:43:29 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:38.805 [2024-04-16 20:43:29.688105] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:05:38.805 [2024-04-16 20:43:29.688421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:39.063 EAL: TSC is not safe to use in SMP mode
00:05:39.063 EAL: TSC is not invariant
00:05:39.063 [2024-04-16 20:43:30.117592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.323 [2024-04-16 20:43:30.208197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.323 [2024-04-16 20:43:30.208277] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:05:39.323 [2024-04-16 20:43:30.208286] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:39.323
00:05:39.323 real 0m0.623s
00:05:39.323 user 0m0.149s
00:05:39.323 sys 0m0.472s
00:05:39.323 20:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:39.323 20:43:30 -- common/autotest_common.sh@10 -- # set +x
00:05:39.323 ************************************
00:05:39.323 END TEST bdev_json_nonarray
00:05:39.323 ************************************
00:05:39.323 20:43:30 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]]
00:05:39.323 20:43:30 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite ''
00:05:39.323 20:43:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:05:39.323 20:43:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:39.323 20:43:30 -- common/autotest_common.sh@10 -- # set +x
00:05:39.323 ************************************
00:05:39.323 START TEST bdev_qos
00:05:39.323 ************************************
00:05:39.323 20:43:30 -- common/autotest_common.sh@1104 -- # qos_test_suite ''
00:05:39.323 20:43:30 -- bdev/blockdev.sh@444 -- # QOS_PID=47299
00:05:39.323 20:43:30 -- bdev/blockdev.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:05:39.323 20:43:30 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 47299'
00:05:39.323 Process qos testing pid: 47299
00:05:39.323 20:43:30 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:05:39.323 20:43:30 -- bdev/blockdev.sh@447 -- # waitforlisten 47299
00:05:39.323 20:43:30 -- common/autotest_common.sh@819 -- # '[' -z 47299 ']'
00:05:39.323 20:43:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.323 20:43:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:39.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.323 20:43:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:39.323 20:43:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:39.323 20:43:30 -- common/autotest_common.sh@10 -- # set +x
00:05:39.323 [2024-04-16 20:43:30.368588] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:05:39.323 [2024-04-16 20:43:30.368858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:39.893 EAL: TSC is not safe to use in SMP mode
00:05:39.893 EAL: TSC is not invariant
00:05:39.893 [2024-04-16 20:43:30.803702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.462 [2024-04-16 20:43:30.894213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.462 20:43:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:40.462 20:43:31 -- common/autotest_common.sh@852 -- # return 0
00:05:40.462 20:43:31 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:05:40.462 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.462 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.462 Malloc_0
00:05:40.462 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.462 20:43:31 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0
00:05:40.462 20:43:31 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0
00:05:40.462 20:43:31 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:05:40.462 20:43:31 -- common/autotest_common.sh@889 -- # local i
00:05:40.462 20:43:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:05:40.462 20:43:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:05:40.462 20:43:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:05:40.462 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.462 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.462 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.462 20:43:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:05:40.462 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.462 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.462 [
00:05:40.462 {
00:05:40.462 "name": "Malloc_0",
00:05:40.462 "aliases": [
00:05:40.462 "fc63bf69-fc31-11ee-80f8-ef3e42bb1492"
00:05:40.462 ],
00:05:40.462 "product_name": "Malloc disk",
00:05:40.462 "block_size": 512,
00:05:40.462 "num_blocks": 262144,
00:05:40.462 "uuid": "fc63bf69-fc31-11ee-80f8-ef3e42bb1492",
00:05:40.462 "assigned_rate_limits": {
00:05:40.462 "rw_ios_per_sec": 0,
00:05:40.462 "rw_mbytes_per_sec": 0,
00:05:40.462 "r_mbytes_per_sec": 0,
00:05:40.462 "w_mbytes_per_sec": 0
00:05:40.462 },
00:05:40.462 "claimed": false,
00:05:40.462 "zoned": false,
00:05:40.462 "supported_io_types": {
00:05:40.462 "read": true,
00:05:40.462 "write": true,
00:05:40.462 "unmap": true,
00:05:40.462 "write_zeroes": true,
00:05:40.462 "flush": true,
00:05:40.462 "reset": true,
00:05:40.462 "compare": false,
00:05:40.462 "compare_and_write": false,
00:05:40.462 "abort": true,
00:05:40.462 "nvme_admin": false,
00:05:40.462 "nvme_io": false
00:05:40.462 },
00:05:40.462 "memory_domains": [
00:05:40.462 {
00:05:40.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:40.462 "dma_device_type": 2
00:05:40.462 }
00:05:40.462 ],
00:05:40.462 "driver_specific": {}
00:05:40.462 }
00:05:40.462 ]
00:05:40.462 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.462 20:43:31 -- common/autotest_common.sh@895 -- # return 0
00:05:40.462 20:43:31 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512
00:05:40.462 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.462 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.463 Null_1
00:05:40.463 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.463 20:43:31 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1
00:05:40.463 20:43:31 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1
00:05:40.463 20:43:31 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:05:40.463 20:43:31 -- common/autotest_common.sh@889 -- # local i
00:05:40.463 20:43:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:05:40.463 20:43:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:05:40.463 20:43:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:05:40.463 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.463 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.463 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.463 20:43:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:05:40.463 20:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:40.463 20:43:31 -- common/autotest_common.sh@10 -- # set +x
00:05:40.463 [
00:05:40.463 {
00:05:40.463 "name": "Null_1",
00:05:40.463 "aliases": [
00:05:40.463 "fc69d955-fc31-11ee-80f8-ef3e42bb1492"
00:05:40.463 ],
00:05:40.463 "product_name": "Null disk",
00:05:40.463 "block_size": 512,
00:05:40.463 "num_blocks": 262144,
00:05:40.463 "uuid": "fc69d955-fc31-11ee-80f8-ef3e42bb1492",
00:05:40.463 "assigned_rate_limits": {
00:05:40.463 "rw_ios_per_sec": 0,
00:05:40.463 "rw_mbytes_per_sec": 0,
00:05:40.463 "r_mbytes_per_sec": 0,
00:05:40.463 "w_mbytes_per_sec": 0
00:05:40.463 },
00:05:40.463 "claimed": false,
00:05:40.463 "zoned": false,
00:05:40.463 "supported_io_types": {
00:05:40.463 "read": true,
00:05:40.463 "write": true,
00:05:40.463 "unmap": false,
00:05:40.463 "write_zeroes": true,
00:05:40.463 "flush": false,
00:05:40.463 "reset": true,
00:05:40.463 "compare": false,
00:05:40.463 "compare_and_write": false,
00:05:40.463 "abort": true,
00:05:40.463 "nvme_admin": false,
00:05:40.463 "nvme_io": false
00:05:40.463 },
00:05:40.463 "driver_specific": {}
00:05:40.463 }
00:05:40.463 ]
00:05:40.463 20:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:40.463 20:43:31 -- common/autotest_common.sh@895 -- # return 0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@455 -- # qos_function_test
00:05:40.463 20:43:31 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000
00:05:40.463 20:43:31 -- bdev/blockdev.sh@454 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:05:40.463 20:43:31 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2
00:05:40.463 20:43:31 -- bdev/blockdev.sh@410 -- # local io_result=0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@411 -- # local iops_limit=0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@412 -- # local bw_limit=0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:05:40.463 20:43:31 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:05:40.463 20:43:31 -- bdev/blockdev.sh@375 -- # local iostat_result
00:05:40.463 20:43:31 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:05:40.463 20:43:31 -- bdev/blockdev.sh@376 -- # tail -1
00:05:40.463 20:43:31 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:05:40.463 Running I/O for 60 seconds...
00:05:45.798 20:43:36 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 737366.24 2949464.95 0.00 0.00 3066880.00 0.00 0.00 '
00:05:45.798 20:43:36 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:05:45.798 20:43:36 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:05:45.798 20:43:36 -- bdev/blockdev.sh@378 -- # iostat_result=737366.24
00:05:45.798 20:43:36 -- bdev/blockdev.sh@383 -- # echo 737366
00:05:45.798 20:43:36 -- bdev/blockdev.sh@414 -- # io_result=737366
00:05:45.798 20:43:36 -- bdev/blockdev.sh@416 -- # iops_limit=184000
00:05:45.798 20:43:36 -- bdev/blockdev.sh@417 -- # '[' 184000 -gt 1000 ']'
00:05:45.798 20:43:36 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 184000 Malloc_0
00:05:45.798 20:43:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:45.798 20:43:36 -- common/autotest_common.sh@10 -- # set +x
00:05:45.798 20:43:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:45.798 20:43:36 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 184000 IOPS Malloc_0
00:05:45.798 20:43:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:05:45.798 20:43:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:45.798 20:43:36 -- common/autotest_common.sh@10 -- # set +x
00:05:45.798 ************************************
00:05:45.798 START TEST bdev_qos_iops
00:05:45.798 ************************************
00:05:45.798 20:43:36 -- common/autotest_common.sh@1104 -- # run_qos_test 184000 IOPS Malloc_0
00:05:45.798 20:43:36 -- bdev/blockdev.sh@387 -- # local qos_limit=184000
00:05:45.798 20:43:36 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:05:45.798 20:43:36 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0
00:05:45.798 20:43:36 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:05:45.798 20:43:36 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:05:45.798 20:43:36 -- bdev/blockdev.sh@375 -- # local iostat_result
00:05:45.798 20:43:36 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:05:45.798 20:43:36 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:05:45.798 20:43:36 -- bdev/blockdev.sh@376 -- # tail -1
00:05:52.374 20:43:42 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 184023.81 736095.26 0.00 0.00 793408.00 0.00 0.00 '
00:05:52.374 20:43:42 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:05:52.374 20:43:42 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:05:52.374 20:43:42 -- bdev/blockdev.sh@378 -- # iostat_result=184023.81
00:05:52.374 20:43:42 -- bdev/blockdev.sh@383 -- # echo 184023
00:05:52.374 20:43:42 -- bdev/blockdev.sh@390 -- # qos_result=184023
00:05:52.374 20:43:42 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']'
00:05:52.374 20:43:42 -- bdev/blockdev.sh@394 -- # lower_limit=165600
00:05:52.374 20:43:42 -- bdev/blockdev.sh@395 -- # upper_limit=202400
00:05:52.374 20:43:42 -- bdev/blockdev.sh@398 -- # '[' 184023 -lt 165600 ']'
00:05:52.374 20:43:42 -- bdev/blockdev.sh@398 -- # '[' 184023 -gt 202400 ']'
00:05:52.374
00:05:52.374 real 0m5.507s
00:05:52.374 user 0m0.108s
00:05:52.374 sys 0m0.036s
00:05:52.374 20:43:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.374 20:43:42 -- common/autotest_common.sh@10 -- # set +x
00:05:52.374 ************************************
00:05:52.374 END TEST bdev_qos_iops
00:05:52.374 ************************************
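Note: the trace above shows run_qos_test's pass criterion: the measured IOPS (184023, taken from column 2 of the iostat.py output) must land within ±10% of the configured limit (184000), i.e. between 165600 and 202400. The limit itself was derived from the unthrottled run (737366 IOPS); the exact derivation is not echoed in the trace, so the 25%-of-unthrottled step below is inferred from the numbers, not quoted from blockdev.sh. A standalone bash sketch of that arithmetic:

```bash
# Sketch of the bounds check traced above; derivation of iops_limit is inferred.
io_result=737366                              # unthrottled IOPS from iostat.py
iops_limit=$(( io_result / 4 / 1000 * 1000 )) # ~25%, rounded down to 1000s -> 184000
qos_result=184023                             # IOPS measured after bdev_set_qos_limit
lower_limit=$(( iops_limit * 9 / 10 ))        # 165600
upper_limit=$(( iops_limit * 11 / 10 ))       # 202400
if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "QoS IOPS limit not enforced within 10%" && exit 1
fi
```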
00:05:52.374 20:43:42 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1
00:05:52.374 20:43:42 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:05:52.374 20:43:42 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:05:52.374 20:43:42 -- bdev/blockdev.sh@375 -- # local iostat_result
00:05:52.374 20:43:42 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:05:52.374 20:43:42 -- bdev/blockdev.sh@376 -- # grep Null_1
00:05:52.374 20:43:42 -- bdev/blockdev.sh@376 -- # tail -1
00:05:57.685 20:43:47 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 713148.81 2852595.24 0.00 0.00 2980864.00 0.00 0.00 '
00:05:57.685 20:43:47 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:05:57.685 20:43:47 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:05:57.685 20:43:47 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:05:57.685 20:43:47 -- bdev/blockdev.sh@380 -- # iostat_result=2980864.00
00:05:57.685 20:43:47 -- bdev/blockdev.sh@383 -- # echo 2980864
00:05:57.685 20:43:47 -- bdev/blockdev.sh@425 -- # bw_limit=2980864
00:05:57.685 20:43:47 -- bdev/blockdev.sh@426 -- # bw_limit=291
00:05:57.685 20:43:47 -- bdev/blockdev.sh@427 -- # '[' 291 -lt 2 ']'
00:05:57.685 20:43:47 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 291 Null_1
00:05:57.685 20:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:57.685 20:43:47 -- common/autotest_common.sh@10 -- # set +x
00:05:57.685 20:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:57.685 20:43:47 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 291 BANDWIDTH Null_1
00:05:57.685 20:43:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:05:57.685 20:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:57.685 20:43:47 -- common/autotest_common.sh@10 -- # set +x
00:05:57.685 ************************************
00:05:57.685 START TEST bdev_qos_bw
00:05:57.685 ************************************
00:05:57.685 20:43:47 -- common/autotest_common.sh@1104 -- # run_qos_test 291 BANDWIDTH Null_1
00:05:57.685 20:43:47 -- bdev/blockdev.sh@387 -- # local qos_limit=291
00:05:57.685 20:43:47 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:05:57.685 20:43:47 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1
00:05:57.685 20:43:47 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:05:57.685 20:43:47 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:05:57.685 20:43:47 -- bdev/blockdev.sh@375 -- # local iostat_result
00:05:57.685 20:43:47 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:05:57.685 20:43:47 -- bdev/blockdev.sh@376 -- # grep Null_1
00:05:57.685 20:43:47 -- bdev/blockdev.sh@376 -- # tail -1
00:06:02.966 20:43:53 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 74498.21 297992.85 0.00 0.00 308116.00 0.00 0.00 '
00:06:02.966 20:43:53 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:06:02.966 20:43:53 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:02.966 20:43:53 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:06:02.966 20:43:53 -- bdev/blockdev.sh@380 -- # iostat_result=308116.00
00:06:02.966 20:43:53 -- bdev/blockdev.sh@383 -- # echo 308116
00:06:02.966 20:43:53 -- bdev/blockdev.sh@390 -- # qos_result=308116
00:06:02.966 20:43:53 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:02.966 20:43:53 -- bdev/blockdev.sh@392 -- # qos_limit=297984
00:06:02.966 20:43:53 -- bdev/blockdev.sh@394 -- # lower_limit=268185
00:06:02.966 20:43:53 -- bdev/blockdev.sh@395 -- # upper_limit=327782
00:06:02.966 20:43:53 -- bdev/blockdev.sh@398 -- # '[' 308116 -lt 268185 ']'
00:06:02.966 20:43:53 -- bdev/blockdev.sh@398 -- # '[' 308116 -gt 327782 ']'
00:06:02.966
00:06:02.966 real 0m5.411s
00:06:02.966 user 0m0.115s
00:06:02.966 sys 0m0.026s
00:06:02.966 20:43:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.966 20:43:53 -- common/autotest_common.sh@10 -- # set +x
00:06:02.966 ************************************
00:06:02.966 END TEST bdev_qos_bw
00:06:02.966 ************************************
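Note: get_io_result, traced twice above, pulls either IOPS (column 2) or bandwidth in KiB/s (column 6) from the last matching line of iostat.py output, then truncates the value to an integer; for BANDWIDTH the qos_limit in MiB/s is also rescaled to KiB/s (291 * 1024 = 297984) before the ±10% comparison. A condensed bash sketch of that selection logic, reconstructed from the trace rather than copied from blockdev.sh:

```bash
# Reconstructed from the @373-@383 trace lines; simplified, not verbatim.
get_io_result() {
    local limit_type=$1 qos_dev=$2 iostat_result
    iostat_result=$(scripts/iostat.py -d -i 1 -t 5 | grep "$qos_dev" | tail -1)
    if [ "$limit_type" = IOPS ]; then
        iostat_result=$(echo "$iostat_result" | awk '{print $2}')  # column 2: IOPS
    else
        iostat_result=$(echo "$iostat_result" | awk '{print $6}')  # column 6: KiB/s
    fi
    echo "${iostat_result%.*}"   # drop the fraction, e.g. 2980864.00 -> 2980864
}
```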
00:06:02.966 20:43:53 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:06:02.966 20:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:02.966 20:43:53 -- common/autotest_common.sh@10 -- # set +x
00:06:02.966 20:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:02.966 20:43:53 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:06:02.966 20:43:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:06:02.966 20:43:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:02.966 20:43:53 -- common/autotest_common.sh@10 -- # set +x
00:06:02.966 ************************************
00:06:02.966 START TEST bdev_qos_ro_bw
00:06:02.966 ************************************
00:06:02.966 20:43:53 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:06:02.966 20:43:53 -- bdev/blockdev.sh@387 -- # local qos_limit=2
00:06:02.966 20:43:53 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:06:02.966 20:43:53 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0
00:06:02.966 20:43:53 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:06:02.966 20:43:53 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:06:02.966 20:43:53 -- bdev/blockdev.sh@375 -- # local iostat_result
00:06:02.966 20:43:53 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:02.966 20:43:53 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:06:02.966 20:43:53 -- bdev/blockdev.sh@376 -- # tail -1
00:06:08.242 20:43:58 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.16 2048.64 0.00 0.00 2172.00 0.00 0.00 '
00:06:08.242 20:43:58 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:06:08.243 20:43:58 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:08.243 20:43:58 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:06:08.243 20:43:58 -- bdev/blockdev.sh@380 -- # iostat_result=2172.00
00:06:08.243 20:43:58 -- bdev/blockdev.sh@383 -- # echo 2172
00:06:08.243 20:43:58 -- bdev/blockdev.sh@390 -- # qos_result=2172
00:06:08.243 20:43:58 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:08.243 20:43:58 -- bdev/blockdev.sh@392 -- # qos_limit=2048
00:06:08.243 20:43:58 -- bdev/blockdev.sh@394 -- # lower_limit=1843
00:06:08.243 20:43:58 -- bdev/blockdev.sh@395 -- # upper_limit=2252
00:06:08.243 20:43:58 -- bdev/blockdev.sh@398 -- # '[' 2172 -lt 1843 ']'
00:06:08.243 20:43:58 -- bdev/blockdev.sh@398 -- # '[' 2172 -gt 2252 ']'
00:06:08.243
00:06:08.243 real 0m5.423s
00:06:08.243 user 0m0.117s
00:06:08.243 sys 0m0.023s
00:06:08.243 20:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:08.243 20:43:58 -- common/autotest_common.sh@10 -- # set +x
00:06:08.243 ************************************
00:06:08.243 END TEST bdev_qos_ro_bw
00:06:08.243 ************************************
00:06:08.243 20:43:58 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:06:08.243 20:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:08.243 20:43:58 -- common/autotest_common.sh@10 -- # set +x
00:06:08.243 20:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:08.243 20:43:59 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1
00:06:08.243 20:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:08.243 20:43:59 -- common/autotest_common.sh@10 -- # set +x
00:06:08.243
00:06:08.243 Latency(us)
00:06:08.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:08.243 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:08.243 Malloc_0 : 27.82 250817.62 979.76 0.00 0.00 1011.01 315.96 504500.64
00:06:08.243 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:08.243 Null_1 : 27.85 475492.55 1857.39 0.00 0.00 538.19 47.08 22049.05
00:06:08.243 ===================================================================================================================
00:06:08.243 Total : 726310.16 2837.15 0.00 0.00 701.37 47.08 504500.64
00:06:08.243 0
00:06:08.243 20:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:08.243 20:43:59 -- bdev/blockdev.sh@459 -- # killprocess 47299
00:06:08.243 20:43:59 -- common/autotest_common.sh@926 -- # '[' -z 47299 ']'
00:06:08.243 20:43:59 -- common/autotest_common.sh@930 -- # kill -0 47299
00:06:08.243 20:43:59 -- common/autotest_common.sh@931 -- # uname
00:06:08.243 20:43:59 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']'
00:06:08.243 20:43:59 -- common/autotest_common.sh@934 -- # ps -c -o command 47299
00:06:08.243 20:43:59 -- common/autotest_common.sh@934 -- # tail -1
00:06:08.243 20:43:59 -- common/autotest_common.sh@934 -- # process_name=bdevperf
00:06:08.243 20:43:59 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']'
00:06:08.243 killing process with pid 47299
00:06:08.243 20:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47299'
00:06:08.243 20:43:59 -- common/autotest_common.sh@945 -- # kill 47299
00:06:08.243 Received shutdown signal, test time was about 27.870236 seconds
00:06:08.243
00:06:08.243 Latency(us)
00:06:08.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:08.243 ===================================================================================================================
00:06:08.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:08.503 20:43:59 -- common/autotest_common.sh@950 -- # wait 47299
00:06:08.503 20:43:59 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT
00:06:08.503
00:06:08.503 real 0m29.126s
00:06:08.503 user 0m29.803s
00:06:08.503 sys 0m0.728s
00:06:08.503 20:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:08.503 20:43:59 -- common/autotest_common.sh@10 -- # set +x
00:06:08.503 ************************************
00:06:08.503 END TEST bdev_qos
00:06:08.503 ************************************
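Note: killprocess, traced above, is the harness's guard around kill: it checks the PID is still alive with kill -0, resolves the process name (on FreeBSD via ps -c -o command | tail -1, since this run is not Linux), refuses to kill a sudo wrapper, then signals and waits. A condensed bash sketch of that flow, reconstructed from the trace; the Linux branch here is a hypothetical stand-in, as this log only exercises the FreeBSD path:

```bash
# Reconstructed from the killprocess trace above; simplified, not verbatim.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 0                     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(cat "/proc/$pid/comm")      # hypothetical Linux branch
    else
        process_name=$(ps -c -o command "$pid" | tail -1)
    fi
    [ "$process_name" = sudo ] && return 1         # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
```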
00:06:08.503 20:43:59 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:06:08.503 20:43:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:06:08.503 20:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:08.503 20:43:59 -- common/autotest_common.sh@10 -- # set +x
00:06:08.503 ************************************
00:06:08.503 START TEST bdev_qd_sampling
00:06:08.503 ************************************
00:06:08.503 20:43:59 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite ''
00:06:08.503 20:43:59 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD
00:06:08.503 20:43:59 -- bdev/blockdev.sh@539 -- # QD_PID=47412
00:06:08.503 Process bdev QD sampling period testing pid: 47412
00:06:08.503 20:43:59 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 47412'
00:06:08.503 20:43:59 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:06:08.503 20:43:59 -- bdev/blockdev.sh@538 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:06:08.503 20:43:59 -- bdev/blockdev.sh@542 -- # waitforlisten 47412
00:06:08.503 20:43:59 -- common/autotest_common.sh@819 -- # '[' -z 47412 ']'
00:06:08.503 20:43:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.503 20:43:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.503 20:43:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.503 20:43:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:08.503 20:43:59 -- common/autotest_common.sh@10 -- # set +x
00:06:08.503 [2024-04-16 20:43:59.547750] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:06:08.503 [2024-04-16 20:43:59.548099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:06:09.077 EAL: TSC is not safe to use in SMP mode
00:06:09.077 EAL: TSC is not invariant
00:06:09.077 [2024-04-16 20:43:59.981652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:09.077 [2024-04-16 20:44:00.074121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.077 [2024-04-16 20:44:00.074122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.649 20:44:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:09.649 20:44:00 -- common/autotest_common.sh@852 -- # return 0
00:06:09.649 20:44:00 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:06:09.649 20:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:09.649 20:44:00 -- common/autotest_common.sh@10 -- # set +x
00:06:09.649 Malloc_QD
00:06:09.649 20:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:09.649 20:44:00 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD
00:06:09.649 20:44:00 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD
00:06:09.649 20:44:00 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:06:09.649 20:44:00 -- common/autotest_common.sh@889 -- # local i
00:06:09.649 20:44:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:06:09.649 20:44:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:06:09.649 20:44:00 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:06:09.649 20:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:09.649 20:44:00 -- common/autotest_common.sh@10 -- # set +x
00:06:09.649 20:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:09.649 20:44:00 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:06:09.649 20:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:09.649 20:44:00 -- common/autotest_common.sh@10 -- # set +x
00:06:09.649 [
00:06:09.649 {
00:06:09.649 "name": "Malloc_QD",
00:06:09.649 "aliases": [
00:06:09.649 "0dc505fe-fc32-11ee-80f8-ef3e42bb1492"
00:06:09.649 ],
00:06:09.649 "product_name": "Malloc disk",
00:06:09.649 "block_size": 512,
00:06:09.649 "num_blocks": 262144,
00:06:09.649 "uuid": "0dc505fe-fc32-11ee-80f8-ef3e42bb1492",
00:06:09.649 "assigned_rate_limits": {
00:06:09.649 "rw_ios_per_sec": 0,
00:06:09.649 "rw_mbytes_per_sec": 0,
00:06:09.649 "r_mbytes_per_sec": 0,
00:06:09.649 "w_mbytes_per_sec": 0
00:06:09.649 },
00:06:09.649 "claimed": false,
00:06:09.649 "zoned": false,
00:06:09.649 "supported_io_types": {
00:06:09.649 "read": true,
00:06:09.649 "write": true,
00:06:09.649 "unmap": true,
00:06:09.649 "write_zeroes": true,
00:06:09.649 "flush": true,
00:06:09.649 "reset": true,
00:06:09.649 "compare": false,
00:06:09.649 "compare_and_write": false,
00:06:09.649 "abort": true,
00:06:09.649 "nvme_admin": false,
00:06:09.649 "nvme_io": false
00:06:09.649 },
00:06:09.650 "memory_domains": [
00:06:09.650 {
00:06:09.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.650 "dma_device_type": 2
00:06:09.650 }
00:06:09.650 ],
00:06:09.650 "driver_specific": {}
00:06:09.650 }
00:06:09.650 ]
00:06:09.650 20:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:09.650 20:44:00 -- common/autotest_common.sh@895 -- # return 0
00:06:09.650 20:44:00 -- bdev/blockdev.sh@548 -- # sleep 2
00:06:09.650 20:44:00 -- bdev/blockdev.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:09.650 Running I/O for 5 seconds...
00:06:11.557 20:44:02 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD
00:06:11.557 20:44:02 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD
00:06:11.557 20:44:02 -- bdev/blockdev.sh@518 -- # local sampling_period=10
00:06:11.557 20:44:02 -- bdev/blockdev.sh@519 -- # local iostats
00:06:11.557 20:44:02 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:06:11.557 20:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:11.557 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:11.557 20:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:11.557 20:44:02 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:06:11.557 20:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:11.557 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:11.817 20:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:11.817 20:44:02 -- bdev/blockdev.sh@523 -- # iostats='{
00:06:11.817 "tick_rate": 2294601473,
00:06:11.817 "ticks": 714494684092,
00:06:11.817 "bdevs": [
00:06:11.817 {
00:06:11.817 "name": "Malloc_QD",
00:06:11.817 "bytes_read": 14622429696,
00:06:11.817 "num_read_ops": 3569923,
00:06:11.817 "bytes_written": 0,
00:06:11.817 "num_write_ops": 0,
00:06:11.817 "bytes_unmapped": 0,
00:06:11.817 "num_unmap_ops": 0,
00:06:11.817 "bytes_copied": 0,
00:06:11.817 "num_copy_ops": 0,
00:06:11.817 "read_latency_ticks": 2407427591542,
00:06:11.817 "max_read_latency_ticks": 964360,
00:06:11.817 "min_read_latency_ticks": 35164,
00:06:11.817 "write_latency_ticks": 0,
00:06:11.817 "max_write_latency_ticks": 0,
00:06:11.817 "min_write_latency_ticks": 0,
00:06:11.817 "unmap_latency_ticks": 0,
00:06:11.817 "max_unmap_latency_ticks": 0,
00:06:11.817 "min_unmap_latency_ticks": 0,
00:06:11.817 "copy_latency_ticks": 0,
00:06:11.817 "max_copy_latency_ticks": 0,
00:06:11.817 "min_copy_latency_ticks": 0,
00:06:11.817 "io_error": {},
00:06:11.817 "queue_depth_polling_period": 10,
00:06:11.817 "queue_depth": 512,
00:06:11.817 "io_time": 410,
00:06:11.817 "weighted_io_time": 215040
00:06:11.817 }
00:06:11.817 ]
00:06:11.817 }'
00:06:11.817 20:44:02 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:06:11.817 20:44:02 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10
00:06:11.817 20:44:02 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']'
00:06:11.817 20:44:02 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']'
00:06:11.817 20:44:02 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:06:11.817 20:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:11.817 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:11.817
00:06:11.817 Latency(us)
00:06:11.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:11.817 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:06:11.817 Malloc_QD : 2.08 861307.68 3364.48 0.00 0.00 297.02 41.73 419.49
00:06:11.817 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:11.817 Malloc_QD : 2.08 879546.13 3435.73 0.00 0.00 290.86 45.30 421.27
00:06:11.817 ===================================================================================================================
00:06:11.817 Total : 1740853.80 6800.21 0.00 0.00 293.91 41.73 421.27
00:06:11.817 0
00:06:11.817 20:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:11.817 20:44:02 -- bdev/blockdev.sh@552 -- # killprocess 47412
00:06:11.817 20:44:02 -- common/autotest_common.sh@926 -- # '[' -z 47412 ']'
00:06:11.817 20:44:02 -- common/autotest_common.sh@930 -- # kill -0 47412
00:06:11.817 20:44:02 -- common/autotest_common.sh@931 -- # uname
00:06:11.817 20:44:02 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']'
00:06:11.817 20:44:02 -- common/autotest_common.sh@934 -- # ps -c -o command 47412
00:06:11.817 20:44:02 -- common/autotest_common.sh@934 -- # tail -1
00:06:11.817 20:44:02 -- common/autotest_common.sh@934 -- # process_name=bdevperf
00:06:11.817 20:44:02 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']'
00:06:11.817 killing process with pid 47412
00:06:11.817 20:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47412'
00:06:11.817 20:44:02 -- common/autotest_common.sh@945 -- # kill 47412
00:06:11.817 Received shutdown signal, test time was about 2.119870 seconds
00:06:11.817
00:06:11.817 Latency(us)
00:06:11.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:11.817 ===================================================================================================================
00:06:11.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:11.817 20:44:02 -- common/autotest_common.sh@950 -- # wait 47412
00:06:11.817 20:44:02 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT
00:06:11.817
00:06:11.817 real 0m3.332s
00:06:11.817 user 0m6.076s
00:06:11.817 sys 0m0.553s
00:06:11.817 20:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:11.817 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:11.817 ************************************
00:06:11.817 END TEST bdev_qd_sampling
00:06:11.817 ************************************
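Note: qd_sampling_function_test, traced above, enables a 10 ms queue-depth polling period and then reads it back out of the bdev_get_iostat JSON with jq, failing if the value is null or differs from what was set. A condensed bash sketch reconstructed from the @517-@527 trace lines:

```bash
# Reconstructed from the trace above; simplified, not a verbatim copy of blockdev.sh.
rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
iostats=$(rpc_cmd bdev_get_iostat -b Malloc_QD)
qd_sampling_period=$(jq -r '.bdevs[0].queue_depth_polling_period' <<< "$iostats")
if [ "$qd_sampling_period" = null ] || [ "$qd_sampling_period" -ne 10 ]; then
    echo "queue depth sampling period not set as expected" && exit 1
fi
```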
00:06:11.817 20:44:02 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite ''
00:06:11.817 20:44:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:06:11.817 20:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:11.817 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:11.817 ************************************
00:06:11.817 START TEST bdev_error
00:06:11.817 ************************************
00:06:11.817 20:44:02 -- common/autotest_common.sh@1104 -- # error_test_suite ''
00:06:11.817 20:44:02 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1
00:06:11.817 20:44:02 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2
00:06:11.817 20:44:02 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1
00:06:11.817 20:44:02 -- bdev/blockdev.sh@470 -- # ERR_PID=47443
00:06:11.817 Process error testing pid: 47443
00:06:11.817 20:44:02 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 47443'
00:06:11.817 20:44:02 -- bdev/blockdev.sh@472 -- # waitforlisten 47443
00:06:11.817 20:44:02 -- bdev/blockdev.sh@469 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:06:11.817 20:44:02 -- common/autotest_common.sh@819 -- # '[' -z 47443 ']'
00:06:11.817 20:44:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:11.817 20:44:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:11.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:11.817 20:44:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:11.817 20:44:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:11.817 20:44:02 -- common/autotest_common.sh@10 -- # set +x
00:06:12.077 [2024-04-16 20:44:02.934176] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:06:12.077 [2024-04-16 20:44:02.934540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:06:12.337 EAL: TSC is not safe to use in SMP mode
00:06:12.337 EAL: TSC is not invariant
00:06:12.337 [2024-04-16 20:44:03.358646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.908 [2024-04-16 20:44:03.437077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.908 20:44:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:12.908 20:44:03 -- common/autotest_common.sh@852 -- # return 0
00:06:12.908 20:44:03 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 Dev_1
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1
00:06:12.908 20:44:03 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1
00:06:12.908 20:44:03 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:06:12.908 20:44:03 -- common/autotest_common.sh@889 -- # local i
00:06:12.908 20:44:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:06:12.908 20:44:03 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 [
00:06:12.908 {
00:06:12.908 "name": "Dev_1",
00:06:12.908 "aliases": [
00:06:12.908 "0fc83099-fc32-11ee-80f8-ef3e42bb1492"
00:06:12.908 ],
00:06:12.908 "product_name": "Malloc disk",
00:06:12.908 "block_size": 512,
00:06:12.908 "num_blocks": 262144,
00:06:12.908 "uuid": "0fc83099-fc32-11ee-80f8-ef3e42bb1492",
00:06:12.908 "assigned_rate_limits": {
00:06:12.908 "rw_ios_per_sec": 0,
00:06:12.908 "rw_mbytes_per_sec": 0,
00:06:12.908 "r_mbytes_per_sec": 0,
00:06:12.908 "w_mbytes_per_sec": 0
00:06:12.908 },
00:06:12.908 "claimed": false,
00:06:12.908 "zoned": false,
00:06:12.908 "supported_io_types": {
00:06:12.908 "read": true,
00:06:12.908 "write": true,
00:06:12.908 "unmap": true,
00:06:12.908 "write_zeroes": true,
00:06:12.908 "flush": true,
00:06:12.908 "reset": true,
00:06:12.908 "compare": false,
00:06:12.908 "compare_and_write": false,
00:06:12.908 "abort": true,
00:06:12.908 "nvme_admin": false,
00:06:12.908 "nvme_io": false
00:06:12.908 },
00:06:12.908 "memory_domains": [
00:06:12.908 {
00:06:12.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:12.908 "dma_device_type": 2
00:06:12.908 }
00:06:12.908 ],
00:06:12.908 "driver_specific": {}
00:06:12.908 }
00:06:12.908 ]
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@895 -- # return 0
00:06:12.908 20:44:03 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 true
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 Dev_2
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2
00:06:12.908 20:44:03 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2
00:06:12.908 20:44:03 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:06:12.908 20:44:03 -- common/autotest_common.sh@889 -- # local i
00:06:12.908 20:44:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:06:12.908 20:44:03 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 [
00:06:12.908 {
00:06:12.908 "name": "Dev_2",
00:06:12.908 "aliases": [
00:06:12.908 "0fd0bc21-fc32-11ee-80f8-ef3e42bb1492"
00:06:12.908 ],
00:06:12.908 "product_name": "Malloc disk",
00:06:12.908 "block_size": 512,
00:06:12.908 "num_blocks": 262144,
00:06:12.908 "uuid": "0fd0bc21-fc32-11ee-80f8-ef3e42bb1492",
00:06:12.908 "assigned_rate_limits": {
00:06:12.908 "rw_ios_per_sec": 0,
00:06:12.908 "rw_mbytes_per_sec": 0,
00:06:12.908 "r_mbytes_per_sec": 0,
00:06:12.908 "w_mbytes_per_sec": 0
00:06:12.908 },
00:06:12.908 "claimed": false,
00:06:12.908 "zoned": false,
00:06:12.908 "supported_io_types": {
00:06:12.908 "read": true,
00:06:12.908 "write": true,
00:06:12.908 "unmap": true,
00:06:12.908 "write_zeroes": true,
00:06:12.908 "flush": true,
00:06:12.908 "reset": true,
00:06:12.908 "compare": false,
00:06:12.908 "compare_and_write": false,
00:06:12.908 "abort": true,
00:06:12.908 "nvme_admin": false,
00:06:12.908 "nvme_io": false
00:06:12.908 },
00:06:12.908 "memory_domains": [
00:06:12.908 {
00:06:12.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:12.908 "dma_device_type": 2
00:06:12.908 }
00:06:12.908 ],
00:06:12.908 "driver_specific": {}
00:06:12.908 }
00:06:12.908 ]
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- common/autotest_common.sh@895 -- # return 0
00:06:12.908 20:44:03 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:06:12.908 20:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.908 20:44:03 -- common/autotest_common.sh@10 -- # set +x
00:06:12.908 20:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.908 20:44:03 -- bdev/blockdev.sh@482 -- # sleep 1
00:06:12.908 20:44:03 -- bdev/blockdev.sh@481 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:06:13.168 Running I/O for 5 seconds...
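Note: bdev_error_create wraps the Dev_1 malloc bdev in an error-injection bdev named EE_Dev_1, and bdev_error_inject_error EE_Dev_1 all failure -n 5 arms that wrapper to fail IOs; the run below accordingly records roughly 5 failed IOs per second on EE_Dev_1 before the error bdev is torn down and Dev_2 continues cleanly. A hedged sketch of driving the same sequence by hand with rpc.py; the test itself issues these through its rpc_cmd helper, so the direct invocation form is an assumption:

```bash
# Assumed direct rpc.py invocations mirroring the rpc_cmd calls traced above.
scripts/rpc.py bdev_malloc_create -b Dev_1 128 512                # base bdev
scripts/rpc.py bdev_error_create Dev_1                            # wrapper bdev EE_Dev_1
scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5  # inject failures on any IO type
```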
-- # '[' bdevperf = sudo ']' 00:06:19.324 killing process with pid 47443 00:06:19.324 20:44:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47443' 00:06:19.324 20:44:10 -- common/autotest_common.sh@945 -- # kill 47443 00:06:19.324 Received shutdown signal, test time was about 5.000000 seconds 00:06:19.324 00:06:19.324 Latency(us) 00:06:19.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.324 =================================================================================================================== 00:06:19.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:19.324 20:44:10 -- common/autotest_common.sh@950 -- # wait 47443 00:06:19.324 20:44:10 -- bdev/blockdev.sh@501 -- # ERR_PID=47455 00:06:19.324 Process error testing pid: 47455 00:06:19.324 20:44:10 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 47455' 00:06:19.324 20:44:10 -- bdev/blockdev.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:19.324 20:44:10 -- bdev/blockdev.sh@503 -- # waitforlisten 47455 00:06:19.324 20:44:10 -- common/autotest_common.sh@819 -- # '[' -z 47455 ']' 00:06:19.324 20:44:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.324 20:44:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.324 20:44:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.324 20:44:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.324 20:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:19.324 [2024-04-16 20:44:10.337274] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
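With -z, bdevperf comes up idle and only listens on its RPC socket; the randread workload configured by -q 16 -o 4096 -w randread -t 5 does not run until perform_tests is sent, which is what gives the harness room to stack error bdevs first. A minimal sketch of that driving pattern, with paths relative to the SPDK tree as in this run (the backgrounding and empty config argument are illustrative):

    ./build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' &
    # fire the preconfigured job over JSON-RPC once setup is done
    ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests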
00:06:19.324 [2024-04-16 20:44:10.337510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:19.893 EAL: TSC is not safe to use in SMP mode 00:06:19.893 EAL: TSC is not invariant 00:06:19.893 [2024-04-16 20:44:10.769197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.893 [2024-04-16 20:44:10.856064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.153 20:44:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.153 20:44:11 -- common/autotest_common.sh@852 -- # return 0 00:06:20.153 20:44:11 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:20.153 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.153 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.413 Dev_1 00:06:20.413 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.413 20:44:11 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:06:20.413 20:44:11 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:06:20.413 20:44:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:20.413 20:44:11 -- common/autotest_common.sh@889 -- # local i 00:06:20.413 20:44:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:20.413 20:44:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:20.413 20:44:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:20.413 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.413 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.413 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.413 20:44:11 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:20.413 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.413 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.413 [ 00:06:20.413 { 00:06:20.413 "name": "Dev_1", 00:06:20.413 "aliases": [ 00:06:20.413 "14343bf8-fc32-11ee-80f8-ef3e42bb1492" 00:06:20.413 ], 00:06:20.414 "product_name": "Malloc disk", 00:06:20.414 "block_size": 512, 00:06:20.414 "num_blocks": 262144, 00:06:20.414 "uuid": "14343bf8-fc32-11ee-80f8-ef3e42bb1492", 00:06:20.414 "assigned_rate_limits": { 00:06:20.414 "rw_ios_per_sec": 0, 00:06:20.414 "rw_mbytes_per_sec": 0, 00:06:20.414 "r_mbytes_per_sec": 0, 00:06:20.414 "w_mbytes_per_sec": 0 00:06:20.414 }, 00:06:20.414 "claimed": false, 00:06:20.414 "zoned": false, 00:06:20.414 "supported_io_types": { 00:06:20.414 "read": true, 00:06:20.414 "write": true, 00:06:20.414 "unmap": true, 00:06:20.414 "write_zeroes": true, 00:06:20.414 "flush": true, 00:06:20.414 "reset": true, 00:06:20.414 "compare": false, 00:06:20.414 "compare_and_write": false, 00:06:20.414 "abort": true, 00:06:20.414 "nvme_admin": false, 00:06:20.414 "nvme_io": false 00:06:20.414 }, 00:06:20.414 "memory_domains": [ 00:06:20.414 { 00:06:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.414 "dma_device_type": 2 00:06:20.414 } 00:06:20.414 ], 00:06:20.414 "driver_specific": {} 00:06:20.414 } 00:06:20.414 ] 00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- common/autotest_common.sh@895 -- # return 0 00:06:20.414 20:44:11 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:06:20.414 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.414 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.414 true 
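bdev_error_create Dev_1 returning true registers an error-injection bdev named EE_Dev_1 on top of Dev_1; faults are then armed on the wrapper rather than on the base disk. The harness's three RPCs, reproduced as a stand-alone sketch (socket path as used for this suite):

    # 128 MiB malloc base with 512 B blocks, wrapped by an error bdev named EE_Dev_1
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Dev_1 128 512
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_error_create Dev_1
    # fail the next 5 I/Os of any type submitted to EE_Dev_1
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_error_inject_error EE_Dev_1 all failure -n 5

Dev_2 stays unwrapped, which is why only the EE_Dev_1 job shows a non-zero Fail/s column in the latency tables above and below.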
00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:20.414 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.414 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.414 Dev_2 00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:06:20.414 20:44:11 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:06:20.414 20:44:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:20.414 20:44:11 -- common/autotest_common.sh@889 -- # local i 00:06:20.414 20:44:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:20.414 20:44:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:20.414 20:44:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:20.414 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.414 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:20.414 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.414 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.414 [ 00:06:20.414 { 00:06:20.414 "name": "Dev_2", 00:06:20.414 "aliases": [ 00:06:20.414 "143c2ada-fc32-11ee-80f8-ef3e42bb1492" 00:06:20.414 ], 00:06:20.414 "product_name": "Malloc disk", 00:06:20.414 "block_size": 512, 00:06:20.414 "num_blocks": 262144, 00:06:20.414 "uuid": "143c2ada-fc32-11ee-80f8-ef3e42bb1492", 00:06:20.414 "assigned_rate_limits": { 00:06:20.414 "rw_ios_per_sec": 0, 00:06:20.414 "rw_mbytes_per_sec": 0, 00:06:20.414 "r_mbytes_per_sec": 0, 00:06:20.414 "w_mbytes_per_sec": 0 00:06:20.414 }, 00:06:20.414 "claimed": false, 00:06:20.414 "zoned": false, 00:06:20.414 "supported_io_types": { 00:06:20.414 "read": true, 00:06:20.414 "write": true, 00:06:20.414 "unmap": true, 00:06:20.414 "write_zeroes": true, 00:06:20.414 "flush": true, 00:06:20.414 "reset": true, 00:06:20.414 "compare": false, 00:06:20.414 "compare_and_write": false, 00:06:20.414 "abort": true, 00:06:20.414 "nvme_admin": false, 00:06:20.414 "nvme_io": false 00:06:20.414 }, 00:06:20.414 "memory_domains": [ 00:06:20.414 { 00:06:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.414 "dma_device_type": 2 00:06:20.414 } 00:06:20.414 ], 00:06:20.414 "driver_specific": {} 00:06:20.414 } 00:06:20.414 ] 00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- common/autotest_common.sh@895 -- # return 0 00:06:20.414 20:44:11 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:20.414 20:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.414 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.414 20:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.414 20:44:11 -- bdev/blockdev.sh@513 -- # NOT wait 47455 00:06:20.414 20:44:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:20.414 20:44:11 -- bdev/blockdev.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:20.414 20:44:11 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 47455 00:06:20.414 20:44:11 -- common/autotest_common.sh@628 -- # local arg=wait 00:06:20.414 20:44:11 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:20.414 20:44:11 -- common/autotest_common.sh@632 -- # type -t wait 00:06:20.414 20:44:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:20.414 20:44:11 -- common/autotest_common.sh@643 -- # wait 47455 00:06:20.414 Running I/O for 5 seconds... 00:06:20.414 task offset: 168960 on job bdev=EE_Dev_1 fails 00:06:20.414 00:06:20.414 Latency(us) 00:06:20.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.414 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:20.414 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:20.414 EE_Dev_1 : 0.00 200000.00 781.25 45454.55 0.00 49.49 18.85 86.13 00:06:20.414 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:20.414 Dev_2 : 0.00 242424.24 946.97 0.00 0.00 36.45 19.30 59.80 00:06:20.414 =================================================================================================================== 00:06:20.414 Total : 442424.24 1728.22 45454.55 0.00 42.42 18.85 86.13 00:06:20.414 [2024-04-16 20:44:11.449156] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.414 request: 00:06:20.414 { 00:06:20.414 "method": "perform_tests", 00:06:20.414 "req_id": 1 00:06:20.414 } 00:06:20.414 Got JSON-RPC error response 00:06:20.414 response: 00:06:20.414 { 00:06:20.414 "code": -32603, 00:06:20.414 "message": "bdevperf failed with error Operation not permitted" 00:06:20.414 } 00:06:20.674 20:44:11 -- common/autotest_common.sh@643 -- # es=255 00:06:20.674 20:44:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:20.674 20:44:11 -- common/autotest_common.sh@652 -- # es=127 00:06:20.674 20:44:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:20.674 20:44:11 -- common/autotest_common.sh@660 -- # es=1 00:06:20.675 20:44:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:20.675 00:06:20.675 real 0m8.704s 00:06:20.675 user 0m8.734s 00:06:20.675 sys 0m1.018s 00:06:20.675 20:44:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.675 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 ************************************ 00:06:20.675 END TEST bdev_error 00:06:20.675 ************************************ 00:06:20.675 20:44:11 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:06:20.675 20:44:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:20.675 20:44:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.675 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 ************************************ 00:06:20.675 START TEST bdev_stat 00:06:20.675 ************************************ 00:06:20.675 20:44:11 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:06:20.675 20:44:11 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:06:20.675 20:44:11 -- bdev/blockdev.sh@594 -- # STAT_PID=47478 00:06:20.675 Process Bdev IO statistics testing pid: 47478 00:06:20.675 20:44:11 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 47478' 00:06:20.675 20:44:11 -- bdev/blockdev.sh@593 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:20.675 20:44:11 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:20.675 20:44:11 -- bdev/blockdev.sh@597 -- # waitforlisten 47478 00:06:20.675 20:44:11 -- common/autotest_common.sh@819 -- # 
'[' -z 47478 ']' 00:06:20.675 20:44:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.675 20:44:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.675 20:44:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.675 20:44:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.675 20:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 [2024-04-16 20:44:11.687698] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:20.675 [2024-04-16 20:44:11.688013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:21.244 EAL: TSC is not safe to use in SMP mode 00:06:21.244 EAL: TSC is not invariant 00:06:21.244 [2024-04-16 20:44:12.112465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.244 [2024-04-16 20:44:12.193493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.244 [2024-04-16 20:44:12.193493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.504 20:44:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.504 20:44:12 -- common/autotest_common.sh@852 -- # return 0 00:06:21.504 20:44:12 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:06:21.504 20:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:21.504 20:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:21.504 Malloc_STAT 00:06:21.504 20:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:21.504 20:44:12 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:06:21.504 20:44:12 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:06:21.504 20:44:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:21.504 20:44:12 -- common/autotest_common.sh@889 -- # local i 00:06:21.504 20:44:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:21.504 20:44:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:21.504 20:44:12 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:21.504 20:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:21.504 20:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:21.504 20:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:21.504 20:44:12 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:06:21.504 20:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:21.504 20:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:21.763 [ 00:06:21.763 { 00:06:21.763 "name": "Malloc_STAT", 00:06:21.763 "aliases": [ 00:06:21.763 "14ff89c3-fc32-11ee-80f8-ef3e42bb1492" 00:06:21.763 ], 00:06:21.763 "product_name": "Malloc disk", 00:06:21.763 "block_size": 512, 00:06:21.763 "num_blocks": 262144, 00:06:21.763 "uuid": "14ff89c3-fc32-11ee-80f8-ef3e42bb1492", 00:06:21.763 "assigned_rate_limits": { 00:06:21.763 "rw_ios_per_sec": 0, 00:06:21.763 "rw_mbytes_per_sec": 0, 00:06:21.763 "r_mbytes_per_sec": 0, 00:06:21.763 "w_mbytes_per_sec": 0 00:06:21.763 }, 00:06:21.763 "claimed": false, 00:06:21.763 "zoned": false, 00:06:21.763 "supported_io_types": { 00:06:21.763 "read": true, 00:06:21.763 "write": true, 00:06:21.763 "unmap": true, 00:06:21.763 "write_zeroes": true, 
00:06:21.763 "flush": true, 00:06:21.763 "reset": true, 00:06:21.763 "compare": false, 00:06:21.763 "compare_and_write": false, 00:06:21.763 "abort": true, 00:06:21.763 "nvme_admin": false, 00:06:21.763 "nvme_io": false 00:06:21.763 }, 00:06:21.763 "memory_domains": [ 00:06:21.763 { 00:06:21.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.763 "dma_device_type": 2 00:06:21.763 } 00:06:21.763 ], 00:06:21.763 "driver_specific": {} 00:06:21.763 } 00:06:21.764 ] 00:06:21.764 20:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:21.764 20:44:12 -- common/autotest_common.sh@895 -- # return 0 00:06:21.764 20:44:12 -- bdev/blockdev.sh@603 -- # sleep 2 00:06:21.764 20:44:12 -- bdev/blockdev.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:21.764 Running I/O for 10 seconds... 00:06:23.700 20:44:14 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:06:23.700 20:44:14 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:06:23.700 20:44:14 -- bdev/blockdev.sh@558 -- # local iostats 00:06:23.700 20:44:14 -- bdev/blockdev.sh@559 -- # local io_count1 00:06:23.700 20:44:14 -- bdev/blockdev.sh@560 -- # local io_count2 00:06:23.700 20:44:14 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:06:23.700 20:44:14 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:06:23.700 20:44:14 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:06:23.700 20:44:14 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:06:23.700 20:44:14 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:23.700 20:44:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.700 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 20:44:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.700 20:44:14 -- bdev/blockdev.sh@566 -- # iostats='{ 00:06:23.700 "tick_rate": 2294601473, 00:06:23.700 "ticks": 742074704334, 00:06:23.700 "bdevs": [ 00:06:23.700 { 00:06:23.700 "name": "Malloc_STAT", 00:06:23.700 "bytes_read": 14110724608, 00:06:23.700 "num_read_ops": 3444995, 00:06:23.700 "bytes_written": 0, 00:06:23.700 "num_write_ops": 0, 00:06:23.700 "bytes_unmapped": 0, 00:06:23.700 "num_unmap_ops": 0, 00:06:23.700 "bytes_copied": 0, 00:06:23.700 "num_copy_ops": 0, 00:06:23.700 "read_latency_ticks": 2294785676852, 00:06:23.700 "max_read_latency_ticks": 1178092, 00:06:23.700 "min_read_latency_ticks": 35482, 00:06:23.700 "write_latency_ticks": 0, 00:06:23.700 "max_write_latency_ticks": 0, 00:06:23.700 "min_write_latency_ticks": 0, 00:06:23.700 "unmap_latency_ticks": 0, 00:06:23.700 "max_unmap_latency_ticks": 0, 00:06:23.700 "min_unmap_latency_ticks": 0, 00:06:23.700 "copy_latency_ticks": 0, 00:06:23.700 "max_copy_latency_ticks": 0, 00:06:23.700 "min_copy_latency_ticks": 0, 00:06:23.700 "io_error": {} 00:06:23.700 } 00:06:23.700 ] 00:06:23.700 }' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@567 -- # io_count1=3444995 00:06:23.700 20:44:14 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:06:23.700 20:44:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.700 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 20:44:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.700 20:44:14 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:06:23.700 "tick_rate": 2294601473, 00:06:23.700 "ticks": 742151008050, 00:06:23.700 "name": "Malloc_STAT", 
00:06:23.700 "channels": [ 00:06:23.700 { 00:06:23.700 "thread_id": 2, 00:06:23.700 "bytes_read": 7165968384, 00:06:23.700 "num_read_ops": 1749504, 00:06:23.700 "bytes_written": 0, 00:06:23.700 "num_write_ops": 0, 00:06:23.700 "bytes_unmapped": 0, 00:06:23.700 "num_unmap_ops": 0, 00:06:23.700 "bytes_copied": 0, 00:06:23.700 "num_copy_ops": 0, 00:06:23.700 "read_latency_ticks": 1166832327068, 00:06:23.700 "max_read_latency_ticks": 1127004, 00:06:23.700 "min_read_latency_ticks": 608924, 00:06:23.700 "write_latency_ticks": 0, 00:06:23.700 "max_write_latency_ticks": 0, 00:06:23.700 "min_write_latency_ticks": 0, 00:06:23.700 "unmap_latency_ticks": 0, 00:06:23.700 "max_unmap_latency_ticks": 0, 00:06:23.700 "min_unmap_latency_ticks": 0, 00:06:23.700 "copy_latency_ticks": 0, 00:06:23.700 "max_copy_latency_ticks": 0, 00:06:23.700 "min_copy_latency_ticks": 0 00:06:23.700 }, 00:06:23.700 { 00:06:23.700 "thread_id": 3, 00:06:23.700 "bytes_read": 7173308416, 00:06:23.700 "num_read_ops": 1751296, 00:06:23.700 "bytes_written": 0, 00:06:23.700 "num_write_ops": 0, 00:06:23.700 "bytes_unmapped": 0, 00:06:23.700 "num_unmap_ops": 0, 00:06:23.700 "bytes_copied": 0, 00:06:23.700 "num_copy_ops": 0, 00:06:23.700 "read_latency_ticks": 1166981914842, 00:06:23.700 "max_read_latency_ticks": 1178092, 00:06:23.700 "min_read_latency_ticks": 611920, 00:06:23.700 "write_latency_ticks": 0, 00:06:23.700 "max_write_latency_ticks": 0, 00:06:23.700 "min_write_latency_ticks": 0, 00:06:23.700 "unmap_latency_ticks": 0, 00:06:23.700 "max_unmap_latency_ticks": 0, 00:06:23.700 "min_unmap_latency_ticks": 0, 00:06:23.700 "copy_latency_ticks": 0, 00:06:23.700 "max_copy_latency_ticks": 0, 00:06:23.700 "min_copy_latency_ticks": 0 00:06:23.700 } 00:06:23.700 ] 00:06:23.700 }' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=1749504 00:06:23.700 20:44:14 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=1749504 00:06:23.700 20:44:14 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=1751296 00:06:23.700 20:44:14 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=3500800 00:06:23.700 20:44:14 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:23.700 20:44:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.700 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 20:44:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.700 20:44:14 -- bdev/blockdev.sh@575 -- # iostats='{ 00:06:23.700 "tick_rate": 2294601473, 00:06:23.700 "ticks": 742265421912, 00:06:23.700 "bdevs": [ 00:06:23.700 { 00:06:23.700 "name": "Malloc_STAT", 00:06:23.700 "bytes_read": 14691635712, 00:06:23.700 "num_read_ops": 3586819, 00:06:23.700 "bytes_written": 0, 00:06:23.700 "num_write_ops": 0, 00:06:23.700 "bytes_unmapped": 0, 00:06:23.700 "num_unmap_ops": 0, 00:06:23.700 "bytes_copied": 0, 00:06:23.700 "num_copy_ops": 0, 00:06:23.700 "read_latency_ticks": 2392302996496, 00:06:23.700 "max_read_latency_ticks": 1178092, 00:06:23.700 "min_read_latency_ticks": 35482, 00:06:23.700 "write_latency_ticks": 0, 00:06:23.700 "max_write_latency_ticks": 0, 00:06:23.700 "min_write_latency_ticks": 0, 00:06:23.700 "unmap_latency_ticks": 0, 00:06:23.700 "max_unmap_latency_ticks": 0, 00:06:23.700 "min_unmap_latency_ticks": 0, 00:06:23.700 "copy_latency_ticks": 0, 00:06:23.700 "max_copy_latency_ticks": 0, 00:06:23.700 
"min_copy_latency_ticks": 0, 00:06:23.700 "io_error": {} 00:06:23.700 } 00:06:23.700 ] 00:06:23.700 }' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@576 -- # io_count2=3586819 00:06:23.700 20:44:14 -- bdev/blockdev.sh@581 -- # '[' 3500800 -lt 3444995 ']' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@581 -- # '[' 3500800 -gt 3586819 ']' 00:06:23.700 20:44:14 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:06:23.700 20:44:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.700 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 00:06:23.700 Latency(us) 00:06:23.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.700 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:23.700 Malloc_STAT : 2.07 879688.13 3436.28 0.00 0.00 290.81 46.41 492.68 00:06:23.700 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:23.700 Malloc_STAT : 2.07 880531.36 3439.58 0.00 0.00 290.53 51.54 514.10 00:06:23.701 =================================================================================================================== 00:06:23.701 Total : 1760219.49 6875.86 0.00 0.00 290.67 46.41 514.10 00:06:23.960 0 00:06:23.960 20:44:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.960 20:44:14 -- bdev/blockdev.sh@607 -- # killprocess 47478 00:06:23.960 20:44:14 -- common/autotest_common.sh@926 -- # '[' -z 47478 ']' 00:06:23.960 20:44:14 -- common/autotest_common.sh@930 -- # kill -0 47478 00:06:23.960 20:44:14 -- common/autotest_common.sh@931 -- # uname 00:06:23.960 20:44:14 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:23.960 20:44:14 -- common/autotest_common.sh@934 -- # ps -c -o command 47478 00:06:23.960 20:44:14 -- common/autotest_common.sh@934 -- # tail -1 00:06:23.960 20:44:14 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:06:23.960 20:44:14 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:06:23.960 killing process with pid 47478 00:06:23.960 20:44:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47478' 00:06:23.960 20:44:14 -- common/autotest_common.sh@945 -- # kill 47478 00:06:23.960 Received shutdown signal, test time was about 2.106478 seconds 00:06:23.960 00:06:23.960 Latency(us) 00:06:23.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.960 =================================================================================================================== 00:06:23.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:23.960 20:44:14 -- common/autotest_common.sh@950 -- # wait 47478 00:06:23.960 20:44:14 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:06:23.960 00:06:23.960 real 0m3.294s 00:06:23.960 user 0m6.057s 00:06:23.960 sys 0m0.524s 00:06:23.960 20:44:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.960 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.960 ************************************ 00:06:23.960 END TEST bdev_stat 00:06:23.960 ************************************ 00:06:23.960 20:44:15 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:06:23.960 20:44:15 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:06:23.960 20:44:15 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:06:23.960 20:44:15 -- bdev/blockdev.sh@809 -- # cleanup 00:06:23.960 20:44:15 -- bdev/blockdev.sh@21 -- # rm -f 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:23.960 20:44:15 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:23.960 20:44:15 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:06:23.960 20:44:15 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:06:23.960 20:44:15 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:06:23.960 20:44:15 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:06:23.960 00:06:23.960 real 1m29.058s 00:06:23.960 user 4m25.443s 00:06:23.960 sys 0m25.111s 00:06:23.960 20:44:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.960 20:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:23.960 ************************************ 00:06:23.960 END TEST blockdev_general 00:06:23.960 ************************************ 00:06:23.960 20:44:15 -- spdk/autotest.sh@196 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:23.960 20:44:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.960 20:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.960 20:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 ************************************ 00:06:24.220 START TEST bdev_raid 00:06:24.220 ************************************ 00:06:24.220 20:44:15 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:24.220 * Looking for test storage... 00:06:24.220 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.220 20:44:15 -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@716 -- # uname -s 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@716 -- # '[' FreeBSD = Linux ']' 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:06:24.220 20:44:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.220 20:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.220 20:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 ************************************ 00:06:24.220 START TEST raid0_resize_test 00:06:24.220 ************************************ 00:06:24.220 20:44:15 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@301 -- # raid_pid=47565 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 47565' 00:06:24.220 Process raid pid: 47565 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@300 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:24.220 20:44:15 -- bdev/bdev_raid.sh@303 -- # waitforlisten 47565 /var/tmp/spdk-raid.sock 00:06:24.220 20:44:15 -- common/autotest_common.sh@819 -- # '[' -z 47565 ']' 
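The raid0_resize_test launching here (bdev_svc, pid 47565) builds a raid0 out of two null bdevs and then grows the legs one at a time; the trace of the actual commands follows. As a condensed sketch of that flow (jq use is illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    # growing one leg does not grow the array: raid0 capacity follows its smallest member
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072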
00:06:24.220 20:44:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:24.220 20:44:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:24.220 20:44:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:24.220 20:44:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.220 20:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 [2024-04-16 20:44:15.292476] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:24.220 [2024-04-16 20:44:15.292799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:24.790 EAL: TSC is not safe to use in SMP mode 00:06:24.790 EAL: TSC is not invariant 00:06:24.790 [2024-04-16 20:44:15.723863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.790 [2024-04-16 20:44:15.815223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.790 [2024-04-16 20:44:15.815658] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.790 [2024-04-16 20:44:15.815669] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.358 20:44:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.358 20:44:16 -- common/autotest_common.sh@852 -- # return 0 00:06:25.358 20:44:16 -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:25.358 Base_1 00:06:25.359 20:44:16 -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:25.618 Base_2 00:06:25.618 20:44:16 -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:25.877 [2024-04-16 20:44:16.766803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:25.877 [2024-04-16 20:44:16.767258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:25.877 [2024-04-16 20:44:16.767279] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ab0ea00 00:06:25.877 [2024-04-16 20:44:16.767283] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:25.877 [2024-04-16 20:44:16.767316] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab71e20 00:06:25.877 [2024-04-16 20:44:16.767370] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ab0ea00 00:06:25.877 [2024-04-16 20:44:16.767373] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82ab0ea00 00:06:25.877 [2024-04-16 20:44:16.767400] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.877 20:44:16 -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:25.877 [2024-04-16 20:44:16.966803] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:25.877 [2024-04-16 20:44:16.966824] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:25.877 true 
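The size checks that follow convert the raid's block count back to mebibytes before comparing: with 512 B blocks, 131072 blocks * 512 B = 64 MiB after the first resize, and 262144 blocks = 128 MiB once Base_2 has been grown as well. Mirroring that arithmetic in shell:

    blkcnt=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq -r '.[].num_blocks')
    raid_size_mb=$(( blkcnt * 512 / 1048576 ))   # 131072 * 512 / 1048576 = 64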
00:06:25.877 20:44:16 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:06:25.877 20:44:16 -- bdev/bdev_raid.sh@314 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:26.136 [2024-04-16 20:44:17.178823] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:26.136 20:44:17 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:06:26.136 20:44:17 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:06:26.136 20:44:17 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:06:26.136 20:44:17 -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:26.395 [2024-04-16 20:44:17.350814] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:26.395 [2024-04-16 20:44:17.350838] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:26.395 [2024-04-16 20:44:17.350864] raid0.c: 405:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:06:26.395 [2024-04-16 20:44:17.350873] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:26.395 true 00:06:26.395 20:44:17 -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:26.395 20:44:17 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:06:26.655 [2024-04-16 20:44:17.542839] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@332 -- # killprocess 47565 00:06:26.655 20:44:17 -- common/autotest_common.sh@926 -- # '[' -z 47565 ']' 00:06:26.655 20:44:17 -- common/autotest_common.sh@930 -- # kill -0 47565 00:06:26.655 20:44:17 -- common/autotest_common.sh@931 -- # uname 00:06:26.655 20:44:17 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:26.655 20:44:17 -- common/autotest_common.sh@934 -- # ps -c -o command 47565 00:06:26.655 20:44:17 -- common/autotest_common.sh@934 -- # tail -1 00:06:26.655 20:44:17 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:26.655 20:44:17 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:26.655 killing process with pid 47565 00:06:26.655 20:44:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47565' 00:06:26.655 20:44:17 -- common/autotest_common.sh@945 -- # kill 47565 00:06:26.655 [2024-04-16 20:44:17.574089] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:26.655 [2024-04-16 20:44:17.574123] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:26.655 [2024-04-16 20:44:17.574135] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:26.655 [2024-04-16 20:44:17.574139] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ab0ea00 name Raid, state offline 00:06:26.655 20:44:17 -- common/autotest_common.sh@950 -- # wait 47565 00:06:26.655 [2024-04-16 20:44:17.574273] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@334 -- # return 0 00:06:26.655 00:06:26.655 real 0m2.435s 00:06:26.655 user 0m3.517s 00:06:26.655 sys 
0m0.643s 00:06:26.655 20:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.655 20:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:26.655 ************************************ 00:06:26.655 END TEST raid0_resize_test 00:06:26.655 ************************************ 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:06:26.655 20:44:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:26.655 20:44:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:26.655 20:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.655 20:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:26.915 ************************************ 00:06:26.915 START TEST raid_state_function_test 00:06:26.915 ************************************ 00:06:26.915 20:44:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=47603 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47603' 00:06:26.915 Process raid pid: 47603 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47603 /var/tmp/spdk-raid.sock 00:06:26.915 20:44:17 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:26.915 20:44:17 -- common/autotest_common.sh@819 -- # '[' -z 47603 ']' 00:06:26.915 20:44:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:26.915 20:44:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:06:26.915 20:44:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:26.915 20:44:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.915 20:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:26.915 [2024-04-16 20:44:17.788955] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:26.915 [2024-04-16 20:44:17.789327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:27.174 EAL: TSC is not safe to use in SMP mode 00:06:27.174 EAL: TSC is not invariant 00:06:27.174 [2024-04-16 20:44:18.226381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.434 [2024-04-16 20:44:18.307283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.434 [2024-04-16 20:44:18.307692] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.434 [2024-04-16 20:44:18.307697] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.693 20:44:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.693 20:44:18 -- common/autotest_common.sh@852 -- # return 0 00:06:27.693 20:44:18 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:27.952 [2024-04-16 20:44:18.910915] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:27.952 [2024-04-16 20:44:18.910961] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:27.952 [2024-04-16 20:44:18.910965] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:27.952 [2024-04-16 20:44:18.910972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:27.952 20:44:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:28.211 20:44:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:28.212 "name": "Existed_Raid", 00:06:28.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.212 "strip_size_kb": 64, 00:06:28.212 "state": "configuring", 00:06:28.212 "raid_level": "raid0", 00:06:28.212 "superblock": false, 00:06:28.212 "num_base_bdevs": 2, 00:06:28.212 "num_base_bdevs_discovered": 0, 00:06:28.212 "num_base_bdevs_operational": 2, 00:06:28.212 "base_bdevs_list": [ 00:06:28.212 { 00:06:28.212 "name": 
"BaseBdev1", 00:06:28.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.212 "is_configured": false, 00:06:28.212 "data_offset": 0, 00:06:28.212 "data_size": 0 00:06:28.212 }, 00:06:28.212 { 00:06:28.212 "name": "BaseBdev2", 00:06:28.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.212 "is_configured": false, 00:06:28.212 "data_offset": 0, 00:06:28.212 "data_size": 0 00:06:28.212 } 00:06:28.212 ] 00:06:28.212 }' 00:06:28.212 20:44:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:28.212 20:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:28.471 20:44:19 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:28.471 [2024-04-16 20:44:19.578943] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:28.471 [2024-04-16 20:44:19.578974] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cd500 name Existed_Raid, state configuring 00:06:28.730 20:44:19 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:28.730 [2024-04-16 20:44:19.774961] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:28.730 [2024-04-16 20:44:19.775010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:28.730 [2024-04-16 20:44:19.775015] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.730 [2024-04-16 20:44:19.775021] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.730 20:44:19 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:28.989 [2024-04-16 20:44:19.971776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:28.989 BaseBdev1 00:06:28.989 20:44:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:28.989 20:44:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:28.989 20:44:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:28.989 20:44:19 -- common/autotest_common.sh@889 -- # local i 00:06:28.989 20:44:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:28.989 20:44:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:28.989 20:44:19 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:29.248 20:44:20 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:29.507 [ 00:06:29.507 { 00:06:29.507 "name": "BaseBdev1", 00:06:29.507 "aliases": [ 00:06:29.507 "19657c16-fc32-11ee-80f8-ef3e42bb1492" 00:06:29.507 ], 00:06:29.507 "product_name": "Malloc disk", 00:06:29.507 "block_size": 512, 00:06:29.507 "num_blocks": 65536, 00:06:29.507 "uuid": "19657c16-fc32-11ee-80f8-ef3e42bb1492", 00:06:29.507 "assigned_rate_limits": { 00:06:29.507 "rw_ios_per_sec": 0, 00:06:29.507 "rw_mbytes_per_sec": 0, 00:06:29.507 "r_mbytes_per_sec": 0, 00:06:29.507 "w_mbytes_per_sec": 0 00:06:29.507 }, 00:06:29.507 "claimed": true, 00:06:29.507 "claim_type": "exclusive_write", 00:06:29.507 "zoned": false, 00:06:29.507 "supported_io_types": { 00:06:29.507 "read": true, 00:06:29.507 "write": true, 
00:06:29.507 "unmap": true, 00:06:29.507 "write_zeroes": true, 00:06:29.507 "flush": true, 00:06:29.507 "reset": true, 00:06:29.507 "compare": false, 00:06:29.507 "compare_and_write": false, 00:06:29.507 "abort": true, 00:06:29.507 "nvme_admin": false, 00:06:29.507 "nvme_io": false 00:06:29.507 }, 00:06:29.507 "memory_domains": [ 00:06:29.507 { 00:06:29.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.507 "dma_device_type": 2 00:06:29.507 } 00:06:29.507 ], 00:06:29.507 "driver_specific": {} 00:06:29.507 } 00:06:29.507 ] 00:06:29.507 20:44:20 -- common/autotest_common.sh@895 -- # return 0 00:06:29.507 20:44:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:29.507 20:44:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:29.507 20:44:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:29.507 20:44:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:29.507 20:44:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:29.508 "name": "Existed_Raid", 00:06:29.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.508 "strip_size_kb": 64, 00:06:29.508 "state": "configuring", 00:06:29.508 "raid_level": "raid0", 00:06:29.508 "superblock": false, 00:06:29.508 "num_base_bdevs": 2, 00:06:29.508 "num_base_bdevs_discovered": 1, 00:06:29.508 "num_base_bdevs_operational": 2, 00:06:29.508 "base_bdevs_list": [ 00:06:29.508 { 00:06:29.508 "name": "BaseBdev1", 00:06:29.508 "uuid": "19657c16-fc32-11ee-80f8-ef3e42bb1492", 00:06:29.508 "is_configured": true, 00:06:29.508 "data_offset": 0, 00:06:29.508 "data_size": 65536 00:06:29.508 }, 00:06:29.508 { 00:06:29.508 "name": "BaseBdev2", 00:06:29.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.508 "is_configured": false, 00:06:29.508 "data_offset": 0, 00:06:29.508 "data_size": 0 00:06:29.508 } 00:06:29.508 ] 00:06:29.508 }' 00:06:29.508 20:44:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:29.508 20:44:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.077 20:44:20 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:30.077 [2024-04-16 20:44:21.051025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:30.077 [2024-04-16 20:44:21.051054] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cd500 name Existed_Raid, state configuring 00:06:30.077 20:44:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:06:30.077 20:44:21 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:30.336 [2024-04-16 20:44:21.243041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:06:30.336 [2024-04-16 20:44:21.243677] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:30.336 [2024-04-16 20:44:21.243718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.336 20:44:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:30.594 20:44:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:30.594 "name": "Existed_Raid", 00:06:30.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.594 "strip_size_kb": 64, 00:06:30.594 "state": "configuring", 00:06:30.595 "raid_level": "raid0", 00:06:30.595 "superblock": false, 00:06:30.595 "num_base_bdevs": 2, 00:06:30.595 "num_base_bdevs_discovered": 1, 00:06:30.595 "num_base_bdevs_operational": 2, 00:06:30.595 "base_bdevs_list": [ 00:06:30.595 { 00:06:30.595 "name": "BaseBdev1", 00:06:30.595 "uuid": "19657c16-fc32-11ee-80f8-ef3e42bb1492", 00:06:30.595 "is_configured": true, 00:06:30.595 "data_offset": 0, 00:06:30.595 "data_size": 65536 00:06:30.595 }, 00:06:30.595 { 00:06:30.595 "name": "BaseBdev2", 00:06:30.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.595 "is_configured": false, 00:06:30.595 "data_offset": 0, 00:06:30.595 "data_size": 0 00:06:30.595 } 00:06:30.595 ] 00:06:30.595 }' 00:06:30.595 20:44:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:30.595 20:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:30.853 20:44:21 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:30.853 [2024-04-16 20:44:21.911153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:30.853 [2024-04-16 20:44:21.911179] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c0cda00 00:06:30.853 [2024-04-16 20:44:21.911182] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:30.853 [2024-04-16 20:44:21.911199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c130ec0 00:06:30.853 [2024-04-16 20:44:21.911269] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c0cda00 00:06:30.853 [2024-04-16 20:44:21.911273] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c0cda00 00:06:30.853 [2024-04-16 20:44:21.911298] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.853 BaseBdev2 00:06:30.853 20:44:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:30.853 20:44:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:30.853 20:44:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:30.853 20:44:21 -- common/autotest_common.sh@889 -- # local i 00:06:30.853 20:44:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:30.853 20:44:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:30.853 20:44:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:31.111 20:44:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:31.369 [ 00:06:31.369 { 00:06:31.369 "name": "BaseBdev2", 00:06:31.369 "aliases": [ 00:06:31.369 "1a8d8444-fc32-11ee-80f8-ef3e42bb1492" 00:06:31.369 ], 00:06:31.369 "product_name": "Malloc disk", 00:06:31.369 "block_size": 512, 00:06:31.369 "num_blocks": 65536, 00:06:31.369 "uuid": "1a8d8444-fc32-11ee-80f8-ef3e42bb1492", 00:06:31.369 "assigned_rate_limits": { 00:06:31.369 "rw_ios_per_sec": 0, 00:06:31.369 "rw_mbytes_per_sec": 0, 00:06:31.369 "r_mbytes_per_sec": 0, 00:06:31.369 "w_mbytes_per_sec": 0 00:06:31.369 }, 00:06:31.369 "claimed": true, 00:06:31.369 "claim_type": "exclusive_write", 00:06:31.369 "zoned": false, 00:06:31.369 "supported_io_types": { 00:06:31.369 "read": true, 00:06:31.369 "write": true, 00:06:31.369 "unmap": true, 00:06:31.369 "write_zeroes": true, 00:06:31.369 "flush": true, 00:06:31.369 "reset": true, 00:06:31.369 "compare": false, 00:06:31.369 "compare_and_write": false, 00:06:31.369 "abort": true, 00:06:31.369 "nvme_admin": false, 00:06:31.369 "nvme_io": false 00:06:31.369 }, 00:06:31.369 "memory_domains": [ 00:06:31.369 { 00:06:31.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.369 "dma_device_type": 2 00:06:31.369 } 00:06:31.369 ], 00:06:31.369 "driver_specific": {} 00:06:31.369 } 00:06:31.369 ] 00:06:31.369 20:44:22 -- common/autotest_common.sh@895 -- # return 0 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:31.369 20:44:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.628 20:44:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:31.628 "name": "Existed_Raid", 00:06:31.628 "uuid": "1a8d89bf-fc32-11ee-80f8-ef3e42bb1492", 
00:06:31.628 "strip_size_kb": 64, 00:06:31.628 "state": "online", 00:06:31.628 "raid_level": "raid0", 00:06:31.628 "superblock": false, 00:06:31.628 "num_base_bdevs": 2, 00:06:31.628 "num_base_bdevs_discovered": 2, 00:06:31.628 "num_base_bdevs_operational": 2, 00:06:31.628 "base_bdevs_list": [ 00:06:31.628 { 00:06:31.628 "name": "BaseBdev1", 00:06:31.628 "uuid": "19657c16-fc32-11ee-80f8-ef3e42bb1492", 00:06:31.628 "is_configured": true, 00:06:31.628 "data_offset": 0, 00:06:31.628 "data_size": 65536 00:06:31.628 }, 00:06:31.628 { 00:06:31.628 "name": "BaseBdev2", 00:06:31.628 "uuid": "1a8d8444-fc32-11ee-80f8-ef3e42bb1492", 00:06:31.628 "is_configured": true, 00:06:31.628 "data_offset": 0, 00:06:31.628 "data_size": 65536 00:06:31.628 } 00:06:31.628 ] 00:06:31.628 }' 00:06:31.628 20:44:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:31.628 20:44:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:31.887 [2024-04-16 20:44:22.935073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:31.887 [2024-04-16 20:44:22.935095] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:31.887 [2024-04-16 20:44:22.935108] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:31.887 20:44:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.145 20:44:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:32.145 "name": "Existed_Raid", 00:06:32.145 "uuid": "1a8d89bf-fc32-11ee-80f8-ef3e42bb1492", 00:06:32.145 "strip_size_kb": 64, 00:06:32.145 "state": "offline", 00:06:32.145 "raid_level": "raid0", 00:06:32.145 "superblock": false, 00:06:32.145 "num_base_bdevs": 2, 00:06:32.145 "num_base_bdevs_discovered": 1, 00:06:32.145 "num_base_bdevs_operational": 1, 00:06:32.145 "base_bdevs_list": [ 00:06:32.145 { 00:06:32.145 "name": null, 00:06:32.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.146 "is_configured": false, 00:06:32.146 "data_offset": 0, 00:06:32.146 "data_size": 65536 00:06:32.146 }, 00:06:32.146 { 00:06:32.146 "name": "BaseBdev2", 
00:06:32.146 "uuid": "1a8d8444-fc32-11ee-80f8-ef3e42bb1492", 00:06:32.146 "is_configured": true, 00:06:32.146 "data_offset": 0, 00:06:32.146 "data_size": 65536 00:06:32.146 } 00:06:32.146 ] 00:06:32.146 }' 00:06:32.146 20:44:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:32.146 20:44:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.405 20:44:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:32.405 20:44:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:32.405 20:44:23 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:32.405 20:44:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:32.664 20:44:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:32.664 20:44:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:32.664 20:44:23 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:32.664 [2024-04-16 20:44:23.767645] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:32.664 [2024-04-16 20:44:23.767665] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0cda00 name Existed_Raid, state offline 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:32.922 20:44:23 -- bdev/bdev_raid.sh@287 -- # killprocess 47603 00:06:32.922 20:44:23 -- common/autotest_common.sh@926 -- # '[' -z 47603 ']' 00:06:32.922 20:44:23 -- common/autotest_common.sh@930 -- # kill -0 47603 00:06:32.922 20:44:23 -- common/autotest_common.sh@931 -- # uname 00:06:32.922 20:44:23 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:32.922 20:44:23 -- common/autotest_common.sh@934 -- # ps -c -o command 47603 00:06:32.922 20:44:23 -- common/autotest_common.sh@934 -- # tail -1 00:06:32.922 20:44:23 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:32.922 20:44:23 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:32.922 killing process with pid 47603 00:06:32.922 20:44:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47603' 00:06:32.922 20:44:23 -- common/autotest_common.sh@945 -- # kill 47603 00:06:32.922 [2024-04-16 20:44:23.991882] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.922 [2024-04-16 20:44:23.991916] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.922 20:44:23 -- common/autotest_common.sh@950 -- # wait 47603 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:33.181 00:06:33.181 real 0m6.360s 00:06:33.181 user 0m10.795s 00:06:33.181 sys 0m1.239s 00:06:33.181 20:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.181 20:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:33.181 ************************************ 00:06:33.181 END TEST raid_state_function_test 00:06:33.181 ************************************ 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:33.181 
20:44:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:33.181 20:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.181 20:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:33.181 ************************************ 00:06:33.181 START TEST raid_state_function_test_sb 00:06:33.181 ************************************ 00:06:33.181 20:44:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=47799 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47799' 00:06:33.181 Process raid pid: 47799 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:33.181 20:44:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47799 /var/tmp/spdk-raid.sock 00:06:33.181 20:44:24 -- common/autotest_common.sh@819 -- # '[' -z 47799 ']' 00:06:33.181 20:44:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:33.181 20:44:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:33.181 20:44:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:33.181 20:44:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.181 20:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:33.181 [2024-04-16 20:44:24.198683] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:06:33.181 [2024-04-16 20:44:24.198951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:33.748 EAL: TSC is not safe to use in SMP mode 00:06:33.748 EAL: TSC is not invariant 00:06:33.748 [2024-04-16 20:44:24.632965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.748 [2024-04-16 20:44:24.725356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.748 [2024-04-16 20:44:24.725781] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.748 [2024-04-16 20:44:24.725791] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.317 20:44:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.317 20:44:25 -- common/autotest_common.sh@852 -- # return 0 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:34.317 [2024-04-16 20:44:25.300792] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:34.317 [2024-04-16 20:44:25.300851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:34.317 [2024-04-16 20:44:25.300855] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.317 [2024-04-16 20:44:25.300862] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.317 20:44:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:34.576 20:44:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:34.576 "name": "Existed_Raid", 00:06:34.576 "uuid": "1c92c009-fc32-11ee-80f8-ef3e42bb1492", 00:06:34.576 "strip_size_kb": 64, 00:06:34.576 "state": "configuring", 00:06:34.576 "raid_level": "raid0", 00:06:34.576 "superblock": true, 00:06:34.577 "num_base_bdevs": 2, 00:06:34.577 "num_base_bdevs_discovered": 0, 00:06:34.577 "num_base_bdevs_operational": 2, 00:06:34.577 "base_bdevs_list": [ 00:06:34.577 { 00:06:34.577 "name": "BaseBdev1", 00:06:34.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.577 "is_configured": false, 00:06:34.577 "data_offset": 0, 00:06:34.577 "data_size": 0 00:06:34.577 }, 00:06:34.577 { 00:06:34.577 "name": "BaseBdev2", 00:06:34.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.577 "is_configured": false, 00:06:34.577 "data_offset": 0, 00:06:34.577 "data_size": 0 00:06:34.577 } 00:06:34.577 ] 
00:06:34.577 }' 00:06:34.577 20:44:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:34.577 20:44:25 -- common/autotest_common.sh@10 -- # set +x 00:06:34.835 20:44:25 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:35.094 [2024-04-16 20:44:25.956771] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:35.094 [2024-04-16 20:44:25.956791] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aac7500 name Existed_Raid, state configuring 00:06:35.094 20:44:25 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:35.094 [2024-04-16 20:44:26.124777] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:35.094 [2024-04-16 20:44:26.124810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:35.094 [2024-04-16 20:44:26.124813] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:35.094 [2024-04-16 20:44:26.124819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:35.094 20:44:26 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:35.354 [2024-04-16 20:44:26.309542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:35.354 BaseBdev1 00:06:35.354 20:44:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:35.354 20:44:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:35.354 20:44:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:35.354 20:44:26 -- common/autotest_common.sh@889 -- # local i 00:06:35.354 20:44:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:35.354 20:44:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:35.354 20:44:26 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:35.614 20:44:26 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:35.614 [ 00:06:35.614 { 00:06:35.614 "name": "BaseBdev1", 00:06:35.614 "aliases": [ 00:06:35.614 "1d2c8ea3-fc32-11ee-80f8-ef3e42bb1492" 00:06:35.614 ], 00:06:35.614 "product_name": "Malloc disk", 00:06:35.614 "block_size": 512, 00:06:35.614 "num_blocks": 65536, 00:06:35.614 "uuid": "1d2c8ea3-fc32-11ee-80f8-ef3e42bb1492", 00:06:35.614 "assigned_rate_limits": { 00:06:35.614 "rw_ios_per_sec": 0, 00:06:35.614 "rw_mbytes_per_sec": 0, 00:06:35.614 "r_mbytes_per_sec": 0, 00:06:35.614 "w_mbytes_per_sec": 0 00:06:35.614 }, 00:06:35.614 "claimed": true, 00:06:35.614 "claim_type": "exclusive_write", 00:06:35.614 "zoned": false, 00:06:35.614 "supported_io_types": { 00:06:35.614 "read": true, 00:06:35.614 "write": true, 00:06:35.614 "unmap": true, 00:06:35.614 "write_zeroes": true, 00:06:35.614 "flush": true, 00:06:35.614 "reset": true, 00:06:35.614 "compare": false, 00:06:35.614 "compare_and_write": false, 00:06:35.614 "abort": true, 00:06:35.614 "nvme_admin": false, 00:06:35.614 "nvme_io": false 00:06:35.614 }, 00:06:35.614 "memory_domains": [ 00:06:35.614 { 00:06:35.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.615 
"dma_device_type": 2 00:06:35.615 } 00:06:35.615 ], 00:06:35.615 "driver_specific": {} 00:06:35.615 } 00:06:35.615 ] 00:06:35.615 20:44:26 -- common/autotest_common.sh@895 -- # return 0 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:35.615 20:44:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.874 20:44:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:35.874 "name": "Existed_Raid", 00:06:35.874 "uuid": "1d107b09-fc32-11ee-80f8-ef3e42bb1492", 00:06:35.874 "strip_size_kb": 64, 00:06:35.874 "state": "configuring", 00:06:35.874 "raid_level": "raid0", 00:06:35.874 "superblock": true, 00:06:35.874 "num_base_bdevs": 2, 00:06:35.874 "num_base_bdevs_discovered": 1, 00:06:35.874 "num_base_bdevs_operational": 2, 00:06:35.874 "base_bdevs_list": [ 00:06:35.874 { 00:06:35.874 "name": "BaseBdev1", 00:06:35.874 "uuid": "1d2c8ea3-fc32-11ee-80f8-ef3e42bb1492", 00:06:35.874 "is_configured": true, 00:06:35.874 "data_offset": 2048, 00:06:35.874 "data_size": 63488 00:06:35.874 }, 00:06:35.874 { 00:06:35.874 "name": "BaseBdev2", 00:06:35.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.874 "is_configured": false, 00:06:35.874 "data_offset": 0, 00:06:35.874 "data_size": 0 00:06:35.874 } 00:06:35.874 ] 00:06:35.874 }' 00:06:35.874 20:44:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:35.874 20:44:26 -- common/autotest_common.sh@10 -- # set +x 00:06:36.134 20:44:27 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:36.394 [2024-04-16 20:44:27.320806] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:36.394 [2024-04-16 20:44:27.320826] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aac7500 name Existed_Raid, state configuring 00:06:36.394 20:44:27 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:06:36.394 20:44:27 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:36.653 20:44:27 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:36.653 BaseBdev1 00:06:36.653 20:44:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:06:36.653 20:44:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:36.653 20:44:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:36.653 20:44:27 -- common/autotest_common.sh@889 -- # local i 00:06:36.653 20:44:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:06:36.653 20:44:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:36.653 20:44:27 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:36.912 20:44:27 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:37.171 [ 00:06:37.171 { 00:06:37.171 "name": "BaseBdev1", 00:06:37.171 "aliases": [ 00:06:37.171 "1e024461-fc32-11ee-80f8-ef3e42bb1492" 00:06:37.171 ], 00:06:37.171 "product_name": "Malloc disk", 00:06:37.171 "block_size": 512, 00:06:37.171 "num_blocks": 65536, 00:06:37.171 "uuid": "1e024461-fc32-11ee-80f8-ef3e42bb1492", 00:06:37.171 "assigned_rate_limits": { 00:06:37.171 "rw_ios_per_sec": 0, 00:06:37.171 "rw_mbytes_per_sec": 0, 00:06:37.171 "r_mbytes_per_sec": 0, 00:06:37.171 "w_mbytes_per_sec": 0 00:06:37.171 }, 00:06:37.171 "claimed": false, 00:06:37.171 "zoned": false, 00:06:37.171 "supported_io_types": { 00:06:37.171 "read": true, 00:06:37.171 "write": true, 00:06:37.171 "unmap": true, 00:06:37.171 "write_zeroes": true, 00:06:37.171 "flush": true, 00:06:37.171 "reset": true, 00:06:37.171 "compare": false, 00:06:37.171 "compare_and_write": false, 00:06:37.171 "abort": true, 00:06:37.171 "nvme_admin": false, 00:06:37.171 "nvme_io": false 00:06:37.171 }, 00:06:37.171 "memory_domains": [ 00:06:37.171 { 00:06:37.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.171 "dma_device_type": 2 00:06:37.171 } 00:06:37.171 ], 00:06:37.171 "driver_specific": {} 00:06:37.171 } 00:06:37.171 ] 00:06:37.171 20:44:28 -- common/autotest_common.sh@895 -- # return 0 00:06:37.171 20:44:28 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:37.171 [2024-04-16 20:44:28.269363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.171 [2024-04-16 20:44:28.269745] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.171 [2024-04-16 20:44:28.269781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:06:37.430 "name": "Existed_Raid", 00:06:37.430 "uuid": "1e57b7db-fc32-11ee-80f8-ef3e42bb1492", 00:06:37.430 "strip_size_kb": 64, 00:06:37.430 "state": "configuring", 00:06:37.430 "raid_level": "raid0", 00:06:37.430 "superblock": true, 00:06:37.430 "num_base_bdevs": 2, 00:06:37.430 "num_base_bdevs_discovered": 1, 00:06:37.430 "num_base_bdevs_operational": 2, 00:06:37.430 "base_bdevs_list": [ 00:06:37.430 { 00:06:37.430 "name": "BaseBdev1", 00:06:37.430 "uuid": "1e024461-fc32-11ee-80f8-ef3e42bb1492", 00:06:37.430 "is_configured": true, 00:06:37.430 "data_offset": 2048, 00:06:37.430 "data_size": 63488 00:06:37.430 }, 00:06:37.430 { 00:06:37.430 "name": "BaseBdev2", 00:06:37.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.430 "is_configured": false, 00:06:37.430 "data_offset": 0, 00:06:37.430 "data_size": 0 00:06:37.430 } 00:06:37.430 ] 00:06:37.430 }' 00:06:37.430 20:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:37.430 20:44:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.690 20:44:28 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:37.949 [2024-04-16 20:44:28.929454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:37.949 [2024-04-16 20:44:28.929527] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aac7a00 00:06:37.949 [2024-04-16 20:44:28.929532] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:37.949 [2024-04-16 20:44:28.929548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab2aec0 00:06:37.949 [2024-04-16 20:44:28.929575] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aac7a00 00:06:37.949 [2024-04-16 20:44:28.929578] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aac7a00 00:06:37.949 [2024-04-16 20:44:28.929591] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.949 BaseBdev2 00:06:37.949 20:44:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:37.949 20:44:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:37.949 20:44:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:37.949 20:44:28 -- common/autotest_common.sh@889 -- # local i 00:06:37.949 20:44:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:37.949 20:44:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:37.949 20:44:28 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:38.208 20:44:29 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:38.470 [ 00:06:38.470 { 00:06:38.470 "name": "BaseBdev2", 00:06:38.470 "aliases": [ 00:06:38.470 "1ebc6d7d-fc32-11ee-80f8-ef3e42bb1492" 00:06:38.470 ], 00:06:38.470 "product_name": "Malloc disk", 00:06:38.470 "block_size": 512, 00:06:38.470 "num_blocks": 65536, 00:06:38.470 "uuid": "1ebc6d7d-fc32-11ee-80f8-ef3e42bb1492", 00:06:38.470 "assigned_rate_limits": { 00:06:38.470 "rw_ios_per_sec": 0, 00:06:38.470 "rw_mbytes_per_sec": 0, 00:06:38.470 "r_mbytes_per_sec": 0, 00:06:38.470 "w_mbytes_per_sec": 0 00:06:38.470 }, 00:06:38.470 "claimed": true, 00:06:38.470 "claim_type": "exclusive_write", 00:06:38.470 "zoned": false, 00:06:38.470 "supported_io_types": { 
00:06:38.470 "read": true, 00:06:38.470 "write": true, 00:06:38.470 "unmap": true, 00:06:38.470 "write_zeroes": true, 00:06:38.470 "flush": true, 00:06:38.470 "reset": true, 00:06:38.470 "compare": false, 00:06:38.470 "compare_and_write": false, 00:06:38.470 "abort": true, 00:06:38.470 "nvme_admin": false, 00:06:38.470 "nvme_io": false 00:06:38.470 }, 00:06:38.470 "memory_domains": [ 00:06:38.470 { 00:06:38.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.470 "dma_device_type": 2 00:06:38.470 } 00:06:38.470 ], 00:06:38.470 "driver_specific": {} 00:06:38.470 } 00:06:38.470 ] 00:06:38.470 20:44:29 -- common/autotest_common.sh@895 -- # return 0 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:38.470 "name": "Existed_Raid", 00:06:38.470 "uuid": "1e57b7db-fc32-11ee-80f8-ef3e42bb1492", 00:06:38.470 "strip_size_kb": 64, 00:06:38.470 "state": "online", 00:06:38.470 "raid_level": "raid0", 00:06:38.470 "superblock": true, 00:06:38.470 "num_base_bdevs": 2, 00:06:38.470 "num_base_bdevs_discovered": 2, 00:06:38.470 "num_base_bdevs_operational": 2, 00:06:38.470 "base_bdevs_list": [ 00:06:38.470 { 00:06:38.470 "name": "BaseBdev1", 00:06:38.470 "uuid": "1e024461-fc32-11ee-80f8-ef3e42bb1492", 00:06:38.470 "is_configured": true, 00:06:38.470 "data_offset": 2048, 00:06:38.470 "data_size": 63488 00:06:38.470 }, 00:06:38.470 { 00:06:38.470 "name": "BaseBdev2", 00:06:38.470 "uuid": "1ebc6d7d-fc32-11ee-80f8-ef3e42bb1492", 00:06:38.470 "is_configured": true, 00:06:38.470 "data_offset": 2048, 00:06:38.470 "data_size": 63488 00:06:38.470 } 00:06:38.470 ] 00:06:38.470 }' 00:06:38.470 20:44:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:38.470 20:44:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.730 20:44:29 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:38.989 [2024-04-16 20:44:30.017381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:38.989 [2024-04-16 20:44:30.017400] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.989 [2024-04-16 20:44:30.017408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@264 
-- # has_redundancy raid0 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:38.989 20:44:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.248 20:44:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:39.248 "name": "Existed_Raid", 00:06:39.248 "uuid": "1e57b7db-fc32-11ee-80f8-ef3e42bb1492", 00:06:39.248 "strip_size_kb": 64, 00:06:39.248 "state": "offline", 00:06:39.248 "raid_level": "raid0", 00:06:39.248 "superblock": true, 00:06:39.248 "num_base_bdevs": 2, 00:06:39.248 "num_base_bdevs_discovered": 1, 00:06:39.248 "num_base_bdevs_operational": 1, 00:06:39.248 "base_bdevs_list": [ 00:06:39.248 { 00:06:39.248 "name": null, 00:06:39.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.248 "is_configured": false, 00:06:39.248 "data_offset": 2048, 00:06:39.248 "data_size": 63488 00:06:39.248 }, 00:06:39.248 { 00:06:39.248 "name": "BaseBdev2", 00:06:39.248 "uuid": "1ebc6d7d-fc32-11ee-80f8-ef3e42bb1492", 00:06:39.248 "is_configured": true, 00:06:39.248 "data_offset": 2048, 00:06:39.248 "data_size": 63488 00:06:39.248 } 00:06:39.248 ] 00:06:39.248 }' 00:06:39.248 20:44:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:39.248 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.507 20:44:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:39.507 20:44:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:39.508 20:44:30 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:39.508 20:44:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:39.767 [2024-04-16 20:44:30.850042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:39.767 [2024-04-16 20:44:30.850060] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aac7a00 name Existed_Raid, state offline 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:39.767 20:44:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:40.027 20:44:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:40.027 20:44:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:40.027 20:44:31 -- bdev/bdev_raid.sh@287 -- # killprocess 47799 00:06:40.027 20:44:31 -- common/autotest_common.sh@926 -- # '[' -z 47799 ']' 00:06:40.027 20:44:31 -- common/autotest_common.sh@930 -- # kill -0 47799 00:06:40.027 20:44:31 -- common/autotest_common.sh@931 -- # uname 00:06:40.027 20:44:31 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:40.027 20:44:31 -- common/autotest_common.sh@934 -- # ps -c -o command 47799 00:06:40.027 20:44:31 -- common/autotest_common.sh@934 -- # tail -1 00:06:40.027 20:44:31 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:40.027 20:44:31 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:40.027 killing process with pid 47799 00:06:40.027 20:44:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47799' 00:06:40.027 20:44:31 -- common/autotest_common.sh@945 -- # kill 47799 00:06:40.027 [2024-04-16 20:44:31.087464] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.027 20:44:31 -- common/autotest_common.sh@950 -- # wait 47799 00:06:40.027 [2024-04-16 20:44:31.087497] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.287 ************************************ 00:06:40.287 END TEST raid_state_function_test_sb 00:06:40.287 ************************************ 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:40.287 00:06:40.287 real 0m7.046s 00:06:40.287 user 0m12.107s 00:06:40.287 sys 0m1.285s 00:06:40.287 20:44:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.287 20:44:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:40.287 20:44:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:40.287 20:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.287 20:44:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 ************************************ 00:06:40.287 START TEST raid_superblock_test 00:06:40.287 ************************************ 00:06:40.287 20:44:31 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@350 -- # 
strip_size=64 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=47998 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 47998 /var/tmp/spdk-raid.sock 00:06:40.287 20:44:31 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:40.287 20:44:31 -- common/autotest_common.sh@819 -- # '[' -z 47998 ']' 00:06:40.287 20:44:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:40.287 20:44:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:40.287 20:44:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:40.287 20:44:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.287 20:44:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 [2024-04-16 20:44:31.283564] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:40.287 [2024-04-16 20:44:31.283822] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:40.856 EAL: TSC is not safe to use in SMP mode 00:06:40.856 EAL: TSC is not invariant 00:06:40.856 [2024-04-16 20:44:31.711014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.856 [2024-04-16 20:44:31.800765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.856 [2024-04-16 20:44:31.801173] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.856 [2024-04-16 20:44:31.801182] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.115 20:44:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.115 20:44:32 -- common/autotest_common.sh@852 -- # return 0 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:41.115 20:44:32 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:41.375 malloc1 00:06:41.375 20:44:32 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:41.635 [2024-04-16 20:44:32.516195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:41.635 [2024-04-16 20:44:32.516244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.635 [2024-04-16 20:44:32.516757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf2e780 00:06:41.635 [2024-04-16 20:44:32.516780] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.635 [2024-04-16 20:44:32.517453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.635 [2024-04-16 20:44:32.517483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:41.635 pt1 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:41.635 malloc2 00:06:41.635 20:44:32 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:41.895 [2024-04-16 20:44:32.884199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:41.895 [2024-04-16 20:44:32.884238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.895 [2024-04-16 20:44:32.884258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf2ec80 00:06:41.895 [2024-04-16 20:44:32.884264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.895 [2024-04-16 20:44:32.884687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.895 [2024-04-16 20:44:32.884723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:41.895 pt2 00:06:41.895 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:06:41.895 20:44:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:41.895 20:44:32 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:06:42.154 [2024-04-16 20:44:33.084208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:42.154 [2024-04-16 20:44:33.084594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:42.154 [2024-04-16 20:44:33.084649] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bf2ef00 00:06:42.154 [2024-04-16 20:44:33.084654] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:42.154 [2024-04-16 20:44:33.084681] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bf91e20 00:06:42.154 [2024-04-16 20:44:33.084732] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bf2ef00 00:06:42.154 [2024-04-16 20:44:33.084737] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bf2ef00 00:06:42.154 [2024-04-16 20:44:33.084755] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:42.154 20:44:33 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.154 20:44:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:42.413 20:44:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:42.413 "name": "raid_bdev1", 00:06:42.413 "uuid": "213667b5-fc32-11ee-80f8-ef3e42bb1492", 00:06:42.413 "strip_size_kb": 64, 00:06:42.413 "state": "online", 00:06:42.413 "raid_level": "raid0", 00:06:42.413 "superblock": true, 00:06:42.413 "num_base_bdevs": 2, 00:06:42.413 "num_base_bdevs_discovered": 2, 00:06:42.413 "num_base_bdevs_operational": 2, 00:06:42.413 "base_bdevs_list": [ 00:06:42.413 { 00:06:42.413 "name": "pt1", 00:06:42.413 "uuid": "f39c9723-fe04-cf5e-bbae-fb9097413c90", 00:06:42.413 "is_configured": true, 00:06:42.413 "data_offset": 2048, 00:06:42.413 "data_size": 63488 00:06:42.413 }, 00:06:42.413 { 00:06:42.413 "name": "pt2", 00:06:42.413 "uuid": "7a5a6782-a49e-785b-a581-7fbff7d40cd9", 00:06:42.413 "is_configured": true, 00:06:42.413 "data_offset": 2048, 00:06:42.413 "data_size": 63488 00:06:42.413 } 00:06:42.413 ] 00:06:42.413 }' 00:06:42.413 20:44:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:42.413 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.672 20:44:33 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:42.672 20:44:33 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:06:42.672 [2024-04-16 20:44:33.748223] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.672 20:44:33 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=213667b5-fc32-11ee-80f8-ef3e42bb1492 00:06:42.672 20:44:33 -- bdev/bdev_raid.sh@380 -- # '[' -z 213667b5-fc32-11ee-80f8-ef3e42bb1492 ']' 00:06:42.672 20:44:33 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:42.931 [2024-04-16 20:44:33.964200] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:42.931 [2024-04-16 20:44:33.964214] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:42.931 [2024-04-16 20:44:33.964226] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.931 [2024-04-16 20:44:33.964234] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.931 [2024-04-16 20:44:33.964237] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf2ef00 name raid_bdev1, state offline 00:06:42.931 20:44:33 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.931 20:44:33 -- 
bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:06:43.191 20:44:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:06:43.191 20:44:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:06:43.191 20:44:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:43.191 20:44:34 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:43.450 20:44:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:06:43.450 20:44:34 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:43.450 20:44:34 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:43.450 20:44:34 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:43.709 20:44:34 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:06:43.709 20:44:34 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:43.709 20:44:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.709 20:44:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:43.709 20:44:34 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.709 20:44:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.709 20:44:34 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.709 20:44:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.709 20:44:34 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.709 20:44:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.709 20:44:34 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.709 20:44:34 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:43.709 20:44:34 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:43.967 [2024-04-16 20:44:34.936224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:43.967 [2024-04-16 20:44:34.936651] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:43.967 [2024-04-16 20:44:34.936670] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:06:43.967 [2024-04-16 20:44:34.936699] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:06:43.967 [2024-04-16 20:44:34.936710] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:43.967 [2024-04-16 20:44:34.936714] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf2ec80 name raid_bdev1, state configuring 00:06:43.968 request: 00:06:43.968 { 00:06:43.968 "name": "raid_bdev1", 00:06:43.968 "raid_level": "raid0", 00:06:43.968 "base_bdevs": [ 00:06:43.968 "malloc1", 00:06:43.968 "malloc2" 00:06:43.968 ], 00:06:43.968 "superblock": 
false, 00:06:43.968 "strip_size_kb": 64, 00:06:43.968 "method": "bdev_raid_create", 00:06:43.968 "req_id": 1 00:06:43.968 } 00:06:43.968 Got JSON-RPC error response 00:06:43.968 response: 00:06:43.968 { 00:06:43.968 "code": -17, 00:06:43.968 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:43.968 } 00:06:43.968 20:44:34 -- common/autotest_common.sh@643 -- # es=1 00:06:43.968 20:44:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.968 20:44:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.968 20:44:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.968 20:44:34 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:43.968 20:44:34 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:44.226 [2024-04-16 20:44:35.308226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:44.226 [2024-04-16 20:44:35.308259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.226 [2024-04-16 20:44:35.308280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf2e780 00:06:44.226 [2024-04-16 20:44:35.308286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.226 [2024-04-16 20:44:35.308727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.226 [2024-04-16 20:44:35.308755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:44.226 [2024-04-16 20:44:35.308771] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:06:44.226 [2024-04-16 20:44:35.308779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:44.226 pt1 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:44.226 20:44:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.485 20:44:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:44.485 "name": "raid_bdev1", 00:06:44.485 "uuid": "213667b5-fc32-11ee-80f8-ef3e42bb1492", 00:06:44.485 "strip_size_kb": 64, 00:06:44.485 "state": "configuring", 00:06:44.485 "raid_level": "raid0", 00:06:44.485 "superblock": true, 00:06:44.485 "num_base_bdevs": 2, 00:06:44.485 
"num_base_bdevs_discovered": 1, 00:06:44.485 "num_base_bdevs_operational": 2, 00:06:44.485 "base_bdevs_list": [ 00:06:44.485 { 00:06:44.485 "name": "pt1", 00:06:44.485 "uuid": "f39c9723-fe04-cf5e-bbae-fb9097413c90", 00:06:44.485 "is_configured": true, 00:06:44.485 "data_offset": 2048, 00:06:44.485 "data_size": 63488 00:06:44.485 }, 00:06:44.485 { 00:06:44.485 "name": null, 00:06:44.485 "uuid": "7a5a6782-a49e-785b-a581-7fbff7d40cd9", 00:06:44.485 "is_configured": false, 00:06:44.485 "data_offset": 2048, 00:06:44.485 "data_size": 63488 00:06:44.485 } 00:06:44.485 ] 00:06:44.485 }' 00:06:44.485 20:44:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:44.485 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:06:44.744 20:44:35 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:06:44.744 20:44:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:06:44.744 20:44:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:44.744 20:44:35 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:45.002 [2024-04-16 20:44:35.972235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:45.002 [2024-04-16 20:44:35.972275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.002 [2024-04-16 20:44:35.972298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf2ef00 00:06:45.002 [2024-04-16 20:44:35.972304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.002 [2024-04-16 20:44:35.972371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.002 [2024-04-16 20:44:35.972378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:45.002 [2024-04-16 20:44:35.972391] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:06:45.002 [2024-04-16 20:44:35.972396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:45.002 [2024-04-16 20:44:35.972411] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bf2f180 00:06:45.002 [2024-04-16 20:44:35.972414] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:45.002 [2024-04-16 20:44:35.972428] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bf91e20 00:06:45.002 [2024-04-16 20:44:35.972461] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bf2f180 00:06:45.002 [2024-04-16 20:44:35.972463] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bf2f180 00:06:45.002 [2024-04-16 20:44:35.972482] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.002 pt2 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:45.002 20:44:35 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:45.002 20:44:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.260 20:44:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:45.260 "name": "raid_bdev1", 00:06:45.260 "uuid": "213667b5-fc32-11ee-80f8-ef3e42bb1492", 00:06:45.260 "strip_size_kb": 64, 00:06:45.260 "state": "online", 00:06:45.260 "raid_level": "raid0", 00:06:45.260 "superblock": true, 00:06:45.260 "num_base_bdevs": 2, 00:06:45.260 "num_base_bdevs_discovered": 2, 00:06:45.260 "num_base_bdevs_operational": 2, 00:06:45.260 "base_bdevs_list": [ 00:06:45.260 { 00:06:45.260 "name": "pt1", 00:06:45.260 "uuid": "f39c9723-fe04-cf5e-bbae-fb9097413c90", 00:06:45.260 "is_configured": true, 00:06:45.260 "data_offset": 2048, 00:06:45.260 "data_size": 63488 00:06:45.260 }, 00:06:45.260 { 00:06:45.260 "name": "pt2", 00:06:45.260 "uuid": "7a5a6782-a49e-785b-a581-7fbff7d40cd9", 00:06:45.260 "is_configured": true, 00:06:45.260 "data_offset": 2048, 00:06:45.260 "data_size": 63488 00:06:45.260 } 00:06:45.260 ] 00:06:45.260 }' 00:06:45.260 20:44:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:45.260 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.551 20:44:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:06:45.551 20:44:36 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:45.551 [2024-04-16 20:44:36.644259] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@430 -- # '[' 213667b5-fc32-11ee-80f8-ef3e42bb1492 '!=' 213667b5-fc32-11ee-80f8-ef3e42bb1492 ']' 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@511 -- # killprocess 47998 00:06:45.811 20:44:36 -- common/autotest_common.sh@926 -- # '[' -z 47998 ']' 00:06:45.811 20:44:36 -- common/autotest_common.sh@930 -- # kill -0 47998 00:06:45.811 20:44:36 -- common/autotest_common.sh@931 -- # uname 00:06:45.811 20:44:36 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:45.811 20:44:36 -- common/autotest_common.sh@934 -- # ps -c -o command 47998 00:06:45.811 20:44:36 -- common/autotest_common.sh@934 -- # tail -1 00:06:45.811 20:44:36 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:45.811 20:44:36 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:45.811 killing process with pid 47998 00:06:45.811 20:44:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47998' 00:06:45.811 20:44:36 -- common/autotest_common.sh@945 -- # kill 47998 00:06:45.811 [2024-04-16 20:44:36.675634] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.811 [2024-04-16 20:44:36.675649] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.811 [2024-04-16 20:44:36.675668] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
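The verify_raid_bdev_state helper traced above reduces to one RPC query plus jq filtering. A rough equivalent, assuming the same bdev_svc socket and bdev name shown in the trace (field names follow the JSON dumped above; this is an illustrative sketch, not the helper itself):

    sock=/var/tmp/spdk-raid.sock
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Dump every raid bdev, then keep only the one under test.
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Individual fields the helper compares against its arguments:
    echo "$info" | jq -r .state           # expect "online"
    echo "$info" | jq -r .raid_level      # expect "raid0"
    echo "$info" | jq -r .strip_size_kb   # expect 64
    # Count of base bdevs that are actually configured.
    echo "$info" | jq -r '.base_bdevs_list | map(select(.is_configured)) | length'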
00:06:45.811 [2024-04-16 20:44:36.675672] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf2f180 name raid_bdev1, state offline 00:06:45.811 20:44:36 -- common/autotest_common.sh@950 -- # wait 47998 00:06:45.811 [2024-04-16 20:44:36.685142] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.811 20:44:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:06:45.811 00:06:45.811 real 0m5.552s 00:06:45.811 user 0m9.276s 00:06:45.811 sys 0m1.143s 00:06:45.811 20:44:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.811 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.812 ************************************ 00:06:45.812 END TEST raid_superblock_test 00:06:45.812 ************************************ 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:45.812 20:44:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:45.812 20:44:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.812 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.812 ************************************ 00:06:45.812 START TEST raid_state_function_test 00:06:45.812 ************************************ 00:06:45.812 20:44:36 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=48143 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48143' 00:06:45.812 Process raid pid: 48143 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:45.812 20:44:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48143 
/var/tmp/spdk-raid.sock 00:06:45.812 20:44:36 -- common/autotest_common.sh@819 -- # '[' -z 48143 ']' 00:06:45.812 20:44:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:45.812 20:44:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:45.812 20:44:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:45.812 20:44:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.812 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.812 [2024-04-16 20:44:36.886084] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:45.812 [2024-04-16 20:44:36.886307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:46.380 EAL: TSC is not safe to use in SMP mode 00:06:46.380 EAL: TSC is not invariant 00:06:46.380 [2024-04-16 20:44:37.323156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.380 [2024-04-16 20:44:37.412752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.380 [2024-04-16 20:44:37.413147] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.380 [2024-04-16 20:44:37.413155] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.947 20:44:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.947 20:44:37 -- common/autotest_common.sh@852 -- # return 0 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:46.947 [2024-04-16 20:44:37.960136] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:46.947 [2024-04-16 20:44:37.960181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:46.947 [2024-04-16 20:44:37.960184] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.947 [2024-04-16 20:44:37.960190] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.947 20:44:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.207 20:44:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:47.207 "name": "Existed_Raid", 00:06:47.207 
"uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.207 "strip_size_kb": 64, 00:06:47.207 "state": "configuring", 00:06:47.207 "raid_level": "concat", 00:06:47.207 "superblock": false, 00:06:47.207 "num_base_bdevs": 2, 00:06:47.207 "num_base_bdevs_discovered": 0, 00:06:47.207 "num_base_bdevs_operational": 2, 00:06:47.207 "base_bdevs_list": [ 00:06:47.207 { 00:06:47.207 "name": "BaseBdev1", 00:06:47.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.207 "is_configured": false, 00:06:47.207 "data_offset": 0, 00:06:47.207 "data_size": 0 00:06:47.207 }, 00:06:47.207 { 00:06:47.207 "name": "BaseBdev2", 00:06:47.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.207 "is_configured": false, 00:06:47.207 "data_offset": 0, 00:06:47.207 "data_size": 0 00:06:47.207 } 00:06:47.207 ] 00:06:47.207 }' 00:06:47.207 20:44:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:47.207 20:44:38 -- common/autotest_common.sh@10 -- # set +x 00:06:47.467 20:44:38 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:47.726 [2024-04-16 20:44:38.608122] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:47.726 [2024-04-16 20:44:38.608141] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b387500 name Existed_Raid, state configuring 00:06:47.726 20:44:38 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:47.726 [2024-04-16 20:44:38.796133] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:47.726 [2024-04-16 20:44:38.796167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:47.726 [2024-04-16 20:44:38.796186] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:47.726 [2024-04-16 20:44:38.796191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:47.726 20:44:38 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:47.987 [2024-04-16 20:44:38.980896] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:47.987 BaseBdev1 00:06:47.987 20:44:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:47.987 20:44:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:47.987 20:44:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:47.987 20:44:38 -- common/autotest_common.sh@889 -- # local i 00:06:47.987 20:44:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:47.987 20:44:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:47.987 20:44:38 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:48.247 20:44:39 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:48.506 [ 00:06:48.506 { 00:06:48.506 "name": "BaseBdev1", 00:06:48.506 "aliases": [ 00:06:48.506 "24ba0dd2-fc32-11ee-80f8-ef3e42bb1492" 00:06:48.506 ], 00:06:48.506 "product_name": "Malloc disk", 00:06:48.506 "block_size": 512, 00:06:48.506 "num_blocks": 65536, 00:06:48.506 "uuid": "24ba0dd2-fc32-11ee-80f8-ef3e42bb1492", 00:06:48.507 
"assigned_rate_limits": { 00:06:48.507 "rw_ios_per_sec": 0, 00:06:48.507 "rw_mbytes_per_sec": 0, 00:06:48.507 "r_mbytes_per_sec": 0, 00:06:48.507 "w_mbytes_per_sec": 0 00:06:48.507 }, 00:06:48.507 "claimed": true, 00:06:48.507 "claim_type": "exclusive_write", 00:06:48.507 "zoned": false, 00:06:48.507 "supported_io_types": { 00:06:48.507 "read": true, 00:06:48.507 "write": true, 00:06:48.507 "unmap": true, 00:06:48.507 "write_zeroes": true, 00:06:48.507 "flush": true, 00:06:48.507 "reset": true, 00:06:48.507 "compare": false, 00:06:48.507 "compare_and_write": false, 00:06:48.507 "abort": true, 00:06:48.507 "nvme_admin": false, 00:06:48.507 "nvme_io": false 00:06:48.507 }, 00:06:48.507 "memory_domains": [ 00:06:48.507 { 00:06:48.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.507 "dma_device_type": 2 00:06:48.507 } 00:06:48.507 ], 00:06:48.507 "driver_specific": {} 00:06:48.507 } 00:06:48.507 ] 00:06:48.507 20:44:39 -- common/autotest_common.sh@895 -- # return 0 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:48.507 "name": "Existed_Raid", 00:06:48.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.507 "strip_size_kb": 64, 00:06:48.507 "state": "configuring", 00:06:48.507 "raid_level": "concat", 00:06:48.507 "superblock": false, 00:06:48.507 "num_base_bdevs": 2, 00:06:48.507 "num_base_bdevs_discovered": 1, 00:06:48.507 "num_base_bdevs_operational": 2, 00:06:48.507 "base_bdevs_list": [ 00:06:48.507 { 00:06:48.507 "name": "BaseBdev1", 00:06:48.507 "uuid": "24ba0dd2-fc32-11ee-80f8-ef3e42bb1492", 00:06:48.507 "is_configured": true, 00:06:48.507 "data_offset": 0, 00:06:48.507 "data_size": 65536 00:06:48.507 }, 00:06:48.507 { 00:06:48.507 "name": "BaseBdev2", 00:06:48.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.507 "is_configured": false, 00:06:48.507 "data_offset": 0, 00:06:48.507 "data_size": 0 00:06:48.507 } 00:06:48.507 ] 00:06:48.507 }' 00:06:48.507 20:44:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:48.507 20:44:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.766 20:44:39 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:49.026 [2024-04-16 20:44:40.020146] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:49.026 [2024-04-16 20:44:40.020166] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b387500 name Existed_Raid, state configuring 
00:06:49.026 20:44:40 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:06:49.026 20:44:40 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:49.286 [2024-04-16 20:44:40.216173] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:49.286 [2024-04-16 20:44:40.216788] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.286 [2024-04-16 20:44:40.216829] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.286 20:44:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:49.548 20:44:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:49.548 "name": "Existed_Raid", 00:06:49.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.548 "strip_size_kb": 64, 00:06:49.548 "state": "configuring", 00:06:49.548 "raid_level": "concat", 00:06:49.548 "superblock": false, 00:06:49.548 "num_base_bdevs": 2, 00:06:49.548 "num_base_bdevs_discovered": 1, 00:06:49.548 "num_base_bdevs_operational": 2, 00:06:49.548 "base_bdevs_list": [ 00:06:49.548 { 00:06:49.548 "name": "BaseBdev1", 00:06:49.548 "uuid": "24ba0dd2-fc32-11ee-80f8-ef3e42bb1492", 00:06:49.548 "is_configured": true, 00:06:49.548 "data_offset": 0, 00:06:49.548 "data_size": 65536 00:06:49.548 }, 00:06:49.548 { 00:06:49.548 "name": "BaseBdev2", 00:06:49.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.548 "is_configured": false, 00:06:49.548 "data_offset": 0, 00:06:49.548 "data_size": 0 00:06:49.548 } 00:06:49.548 ] 00:06:49.548 }' 00:06:49.548 20:44:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:49.548 20:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:49.807 20:44:40 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:49.807 [2024-04-16 20:44:40.856267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:49.807 [2024-04-16 20:44:40.856283] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b387a00 00:06:49.807 [2024-04-16 20:44:40.856286] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.807 [2024-04-16 20:44:40.856300] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b3eaec0 00:06:49.807 [2024-04-16 20:44:40.856384] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b387a00 00:06:49.807 [2024-04-16 20:44:40.856387] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b387a00 00:06:49.807 [2024-04-16 20:44:40.856410] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.807 BaseBdev2 00:06:49.807 20:44:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:49.808 20:44:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:49.808 20:44:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:49.808 20:44:40 -- common/autotest_common.sh@889 -- # local i 00:06:49.808 20:44:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:49.808 20:44:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:49.808 20:44:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:50.067 20:44:41 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:50.326 [ 00:06:50.326 { 00:06:50.326 "name": "BaseBdev2", 00:06:50.326 "aliases": [ 00:06:50.326 "25d8508e-fc32-11ee-80f8-ef3e42bb1492" 00:06:50.326 ], 00:06:50.326 "product_name": "Malloc disk", 00:06:50.326 "block_size": 512, 00:06:50.326 "num_blocks": 65536, 00:06:50.326 "uuid": "25d8508e-fc32-11ee-80f8-ef3e42bb1492", 00:06:50.326 "assigned_rate_limits": { 00:06:50.326 "rw_ios_per_sec": 0, 00:06:50.326 "rw_mbytes_per_sec": 0, 00:06:50.326 "r_mbytes_per_sec": 0, 00:06:50.326 "w_mbytes_per_sec": 0 00:06:50.326 }, 00:06:50.326 "claimed": true, 00:06:50.326 "claim_type": "exclusive_write", 00:06:50.326 "zoned": false, 00:06:50.326 "supported_io_types": { 00:06:50.326 "read": true, 00:06:50.326 "write": true, 00:06:50.326 "unmap": true, 00:06:50.326 "write_zeroes": true, 00:06:50.326 "flush": true, 00:06:50.326 "reset": true, 00:06:50.326 "compare": false, 00:06:50.326 "compare_and_write": false, 00:06:50.326 "abort": true, 00:06:50.326 "nvme_admin": false, 00:06:50.326 "nvme_io": false 00:06:50.326 }, 00:06:50.326 "memory_domains": [ 00:06:50.326 { 00:06:50.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.326 "dma_device_type": 2 00:06:50.326 } 00:06:50.326 ], 00:06:50.326 "driver_specific": {} 00:06:50.326 } 00:06:50.326 ] 00:06:50.326 20:44:41 -- common/autotest_common.sh@895 -- # return 0 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:50.326 
20:44:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.326 20:44:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.586 20:44:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:50.586 "name": "Existed_Raid", 00:06:50.586 "uuid": "25d854d5-fc32-11ee-80f8-ef3e42bb1492", 00:06:50.586 "strip_size_kb": 64, 00:06:50.586 "state": "online", 00:06:50.586 "raid_level": "concat", 00:06:50.586 "superblock": false, 00:06:50.586 "num_base_bdevs": 2, 00:06:50.586 "num_base_bdevs_discovered": 2, 00:06:50.586 "num_base_bdevs_operational": 2, 00:06:50.586 "base_bdevs_list": [ 00:06:50.586 { 00:06:50.586 "name": "BaseBdev1", 00:06:50.586 "uuid": "24ba0dd2-fc32-11ee-80f8-ef3e42bb1492", 00:06:50.586 "is_configured": true, 00:06:50.586 "data_offset": 0, 00:06:50.586 "data_size": 65536 00:06:50.586 }, 00:06:50.586 { 00:06:50.586 "name": "BaseBdev2", 00:06:50.586 "uuid": "25d8508e-fc32-11ee-80f8-ef3e42bb1492", 00:06:50.586 "is_configured": true, 00:06:50.586 "data_offset": 0, 00:06:50.586 "data_size": 65536 00:06:50.586 } 00:06:50.586 ] 00:06:50.586 }' 00:06:50.586 20:44:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:50.586 20:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:50.846 [2024-04-16 20:44:41.876196] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:50.846 [2024-04-16 20:44:41.876215] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.846 [2024-04-16 20:44:41.876224] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.846 20:44:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.106 20:44:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:51.106 "name": "Existed_Raid", 00:06:51.106 "uuid": "25d854d5-fc32-11ee-80f8-ef3e42bb1492", 00:06:51.106 "strip_size_kb": 64, 00:06:51.106 "state": "offline", 00:06:51.106 "raid_level": "concat", 00:06:51.106 "superblock": false, 00:06:51.106 
"num_base_bdevs": 2, 00:06:51.106 "num_base_bdevs_discovered": 1, 00:06:51.106 "num_base_bdevs_operational": 1, 00:06:51.106 "base_bdevs_list": [ 00:06:51.106 { 00:06:51.106 "name": null, 00:06:51.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.106 "is_configured": false, 00:06:51.106 "data_offset": 0, 00:06:51.106 "data_size": 65536 00:06:51.106 }, 00:06:51.106 { 00:06:51.106 "name": "BaseBdev2", 00:06:51.106 "uuid": "25d8508e-fc32-11ee-80f8-ef3e42bb1492", 00:06:51.106 "is_configured": true, 00:06:51.106 "data_offset": 0, 00:06:51.106 "data_size": 65536 00:06:51.106 } 00:06:51.106 ] 00:06:51.106 }' 00:06:51.106 20:44:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:51.106 20:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 20:44:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:51.366 20:44:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:51.366 20:44:42 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.366 20:44:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:51.624 20:44:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:51.624 20:44:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:51.624 20:44:42 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:51.884 [2024-04-16 20:44:42.740853] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:51.884 [2024-04-16 20:44:42.740873] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b387a00 name Existed_Raid, state offline 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:51.884 20:44:42 -- bdev/bdev_raid.sh@287 -- # killprocess 48143 00:06:51.884 20:44:42 -- common/autotest_common.sh@926 -- # '[' -z 48143 ']' 00:06:51.884 20:44:42 -- common/autotest_common.sh@930 -- # kill -0 48143 00:06:51.884 20:44:42 -- common/autotest_common.sh@931 -- # uname 00:06:51.884 20:44:42 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:51.884 20:44:42 -- common/autotest_common.sh@934 -- # ps -c -o command 48143 00:06:51.884 20:44:42 -- common/autotest_common.sh@934 -- # tail -1 00:06:51.884 20:44:42 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:51.884 20:44:42 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:51.884 killing process with pid 48143 00:06:51.884 20:44:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48143' 00:06:51.884 20:44:42 -- common/autotest_common.sh@945 -- # kill 48143 00:06:51.884 [2024-04-16 20:44:42.967371] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.884 [2024-04-16 20:44:42.967405] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.884 20:44:42 -- common/autotest_common.sh@950 -- # wait 48143 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:52.144 00:06:52.144 real 0m6.235s 00:06:52.144 user 0m10.539s 00:06:52.144 sys 0m1.274s 00:06:52.144 
20:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.144 20:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 ************************************ 00:06:52.144 END TEST raid_state_function_test 00:06:52.144 ************************************ 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:52.144 20:44:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:52.144 20:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.144 20:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 ************************************ 00:06:52.144 START TEST raid_state_function_test_sb 00:06:52.144 ************************************ 00:06:52.144 20:44:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=48339 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48339' 00:06:52.144 Process raid pid: 48339 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:52.144 20:44:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48339 /var/tmp/spdk-raid.sock 00:06:52.144 20:44:43 -- common/autotest_common.sh@819 -- # '[' -z 48339 ']' 00:06:52.144 20:44:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:52.144 20:44:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:52.144 20:44:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
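waitforlisten simply blocks until the freshly started bdev_svc answers on its UNIX-domain RPC socket. A minimal stand-in for the launch-and-wait step — probing readiness via rpc_get_methods is an assumption here; the real helper in autotest_common.sh is more elaborate:

    sock=/var/tmp/spdk-raid.sock
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $sock -i 0 -L bdev_raid &
    svc_pid=$!
    # Poll until the socket exists and the RPC server responds.
    until [ -S "$sock" ] && \
          /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done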
00:06:52.144 20:44:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.144 20:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 [2024-04-16 20:44:43.172715] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:52.144 [2024-04-16 20:44:43.173006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:52.713 EAL: TSC is not safe to use in SMP mode 00:06:52.713 EAL: TSC is not invariant 00:06:52.713 [2024-04-16 20:44:43.606454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.713 [2024-04-16 20:44:43.699074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.713 [2024-04-16 20:44:43.699485] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.713 [2024-04-16 20:44:43.699495] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.971 20:44:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.971 20:44:44 -- common/autotest_common.sh@852 -- # return 0 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:53.230 [2024-04-16 20:44:44.246554] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:53.230 [2024-04-16 20:44:44.246600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:53.230 [2024-04-16 20:44:44.246605] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:53.230 [2024-04-16 20:44:44.246611] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:53.230 20:44:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.494 20:44:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:53.494 "name": "Existed_Raid", 00:06:53.494 "uuid": "27dda4d5-fc32-11ee-80f8-ef3e42bb1492", 00:06:53.494 "strip_size_kb": 64, 00:06:53.494 "state": "configuring", 00:06:53.494 "raid_level": "concat", 00:06:53.494 "superblock": true, 00:06:53.494 "num_base_bdevs": 2, 00:06:53.494 "num_base_bdevs_discovered": 0, 00:06:53.494 "num_base_bdevs_operational": 2, 00:06:53.494 "base_bdevs_list": [ 00:06:53.494 { 00:06:53.494 "name": "BaseBdev1", 00:06:53.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.494 "is_configured": false, 00:06:53.494 "data_offset": 0, 00:06:53.494 
"data_size": 0 00:06:53.494 }, 00:06:53.494 { 00:06:53.494 "name": "BaseBdev2", 00:06:53.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.494 "is_configured": false, 00:06:53.494 "data_offset": 0, 00:06:53.494 "data_size": 0 00:06:53.494 } 00:06:53.494 ] 00:06:53.494 }' 00:06:53.494 20:44:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:53.494 20:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.763 20:44:44 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:54.022 [2024-04-16 20:44:44.926533] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:54.022 [2024-04-16 20:44:44.926557] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b59e500 name Existed_Raid, state configuring 00:06:54.022 20:44:44 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:54.022 [2024-04-16 20:44:45.110553] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.022 [2024-04-16 20:44:45.110599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.022 [2024-04-16 20:44:45.110603] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.022 [2024-04-16 20:44:45.110610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.022 20:44:45 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:54.280 [2024-04-16 20:44:45.299335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:54.280 BaseBdev1 00:06:54.280 20:44:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:06:54.280 20:44:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:54.280 20:44:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:54.280 20:44:45 -- common/autotest_common.sh@889 -- # local i 00:06:54.280 20:44:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:54.280 20:44:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:54.280 20:44:45 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:54.538 20:44:45 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:54.798 [ 00:06:54.798 { 00:06:54.798 "name": "BaseBdev1", 00:06:54.798 "aliases": [ 00:06:54.798 "287e2af6-fc32-11ee-80f8-ef3e42bb1492" 00:06:54.798 ], 00:06:54.798 "product_name": "Malloc disk", 00:06:54.798 "block_size": 512, 00:06:54.798 "num_blocks": 65536, 00:06:54.798 "uuid": "287e2af6-fc32-11ee-80f8-ef3e42bb1492", 00:06:54.798 "assigned_rate_limits": { 00:06:54.798 "rw_ios_per_sec": 0, 00:06:54.798 "rw_mbytes_per_sec": 0, 00:06:54.798 "r_mbytes_per_sec": 0, 00:06:54.798 "w_mbytes_per_sec": 0 00:06:54.798 }, 00:06:54.798 "claimed": true, 00:06:54.798 "claim_type": "exclusive_write", 00:06:54.798 "zoned": false, 00:06:54.798 "supported_io_types": { 00:06:54.798 "read": true, 00:06:54.798 "write": true, 00:06:54.798 "unmap": true, 00:06:54.798 "write_zeroes": true, 00:06:54.798 "flush": true, 00:06:54.798 "reset": true, 00:06:54.798 "compare": false, 
00:06:54.798 "compare_and_write": false, 00:06:54.798 "abort": true, 00:06:54.798 "nvme_admin": false, 00:06:54.798 "nvme_io": false 00:06:54.798 }, 00:06:54.798 "memory_domains": [ 00:06:54.798 { 00:06:54.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.798 "dma_device_type": 2 00:06:54.798 } 00:06:54.798 ], 00:06:54.798 "driver_specific": {} 00:06:54.798 } 00:06:54.798 ] 00:06:54.798 20:44:45 -- common/autotest_common.sh@895 -- # return 0 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:54.798 "name": "Existed_Raid", 00:06:54.798 "uuid": "28617ad0-fc32-11ee-80f8-ef3e42bb1492", 00:06:54.798 "strip_size_kb": 64, 00:06:54.798 "state": "configuring", 00:06:54.798 "raid_level": "concat", 00:06:54.798 "superblock": true, 00:06:54.798 "num_base_bdevs": 2, 00:06:54.798 "num_base_bdevs_discovered": 1, 00:06:54.798 "num_base_bdevs_operational": 2, 00:06:54.798 "base_bdevs_list": [ 00:06:54.798 { 00:06:54.798 "name": "BaseBdev1", 00:06:54.798 "uuid": "287e2af6-fc32-11ee-80f8-ef3e42bb1492", 00:06:54.798 "is_configured": true, 00:06:54.798 "data_offset": 2048, 00:06:54.798 "data_size": 63488 00:06:54.798 }, 00:06:54.798 { 00:06:54.798 "name": "BaseBdev2", 00:06:54.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.798 "is_configured": false, 00:06:54.798 "data_offset": 0, 00:06:54.798 "data_size": 0 00:06:54.798 } 00:06:54.798 ] 00:06:54.798 }' 00:06:54.798 20:44:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:54.798 20:44:45 -- common/autotest_common.sh@10 -- # set +x 00:06:55.057 20:44:46 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:55.315 [2024-04-16 20:44:46.298557] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.315 [2024-04-16 20:44:46.298585] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b59e500 name Existed_Raid, state configuring 00:06:55.315 20:44:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:06:55.315 20:44:46 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:55.573 20:44:46 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:55.831 BaseBdev1 00:06:55.831 20:44:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:06:55.831 20:44:46 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:06:55.831 20:44:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:55.831 20:44:46 -- common/autotest_common.sh@889 -- # local i 00:06:55.831 20:44:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:55.831 20:44:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:55.831 20:44:46 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:55.831 20:44:46 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.090 [ 00:06:56.090 { 00:06:56.090 "name": "BaseBdev1", 00:06:56.090 "aliases": [ 00:06:56.090 "29503714-fc32-11ee-80f8-ef3e42bb1492" 00:06:56.090 ], 00:06:56.090 "product_name": "Malloc disk", 00:06:56.090 "block_size": 512, 00:06:56.090 "num_blocks": 65536, 00:06:56.090 "uuid": "29503714-fc32-11ee-80f8-ef3e42bb1492", 00:06:56.090 "assigned_rate_limits": { 00:06:56.090 "rw_ios_per_sec": 0, 00:06:56.090 "rw_mbytes_per_sec": 0, 00:06:56.090 "r_mbytes_per_sec": 0, 00:06:56.090 "w_mbytes_per_sec": 0 00:06:56.090 }, 00:06:56.090 "claimed": false, 00:06:56.090 "zoned": false, 00:06:56.090 "supported_io_types": { 00:06:56.090 "read": true, 00:06:56.090 "write": true, 00:06:56.090 "unmap": true, 00:06:56.090 "write_zeroes": true, 00:06:56.090 "flush": true, 00:06:56.090 "reset": true, 00:06:56.090 "compare": false, 00:06:56.090 "compare_and_write": false, 00:06:56.090 "abort": true, 00:06:56.090 "nvme_admin": false, 00:06:56.090 "nvme_io": false 00:06:56.090 }, 00:06:56.090 "memory_domains": [ 00:06:56.090 { 00:06:56.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.090 "dma_device_type": 2 00:06:56.090 } 00:06:56.090 ], 00:06:56.090 "driver_specific": {} 00:06:56.090 } 00:06:56.090 ] 00:06:56.090 20:44:47 -- common/autotest_common.sh@895 -- # return 0 00:06:56.090 20:44:47 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:56.090 [2024-04-16 20:44:47.191145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.090 [2024-04-16 20:44:47.191586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.090 [2024-04-16 20:44:47.191625] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.348 20:44:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:06:56.348 20:44:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:56.348 20:44:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.348 20:44:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:56.348 20:44:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:56.349 "name": "Existed_Raid", 00:06:56.349 "uuid": "299ef3b1-fc32-11ee-80f8-ef3e42bb1492", 00:06:56.349 "strip_size_kb": 64, 00:06:56.349 "state": "configuring", 00:06:56.349 "raid_level": "concat", 00:06:56.349 "superblock": true, 00:06:56.349 "num_base_bdevs": 2, 00:06:56.349 "num_base_bdevs_discovered": 1, 00:06:56.349 "num_base_bdevs_operational": 2, 00:06:56.349 "base_bdevs_list": [ 00:06:56.349 { 00:06:56.349 "name": "BaseBdev1", 00:06:56.349 "uuid": "29503714-fc32-11ee-80f8-ef3e42bb1492", 00:06:56.349 "is_configured": true, 00:06:56.349 "data_offset": 2048, 00:06:56.349 "data_size": 63488 00:06:56.349 }, 00:06:56.349 { 00:06:56.349 "name": "BaseBdev2", 00:06:56.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.349 "is_configured": false, 00:06:56.349 "data_offset": 0, 00:06:56.349 "data_size": 0 00:06:56.349 } 00:06:56.349 ] 00:06:56.349 }' 00:06:56.349 20:44:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:56.349 20:44:47 -- common/autotest_common.sh@10 -- # set +x 00:06:56.607 20:44:47 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:56.866 [2024-04-16 20:44:47.851250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:56.866 [2024-04-16 20:44:47.851307] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b59ea00 00:06:56.866 [2024-04-16 20:44:47.851311] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:56.866 [2024-04-16 20:44:47.851327] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b601ec0 00:06:56.866 [2024-04-16 20:44:47.851357] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b59ea00 00:06:56.866 [2024-04-16 20:44:47.851360] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b59ea00 00:06:56.866 [2024-04-16 20:44:47.851374] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.866 BaseBdev2 00:06:56.866 20:44:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:06:56.866 20:44:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:06:56.866 20:44:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:56.866 20:44:47 -- common/autotest_common.sh@889 -- # local i 00:06:56.866 20:44:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:56.866 20:44:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:56.866 20:44:47 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:57.124 20:44:48 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:57.124 [ 00:06:57.124 { 00:06:57.124 "name": "BaseBdev2", 00:06:57.124 "aliases": [ 00:06:57.124 "2a03a985-fc32-11ee-80f8-ef3e42bb1492" 00:06:57.124 ], 00:06:57.124 "product_name": "Malloc disk", 00:06:57.124 "block_size": 512, 00:06:57.124 "num_blocks": 65536, 00:06:57.124 "uuid": "2a03a985-fc32-11ee-80f8-ef3e42bb1492", 00:06:57.124 "assigned_rate_limits": { 00:06:57.124 "rw_ios_per_sec": 0, 
00:06:57.124 "rw_mbytes_per_sec": 0, 00:06:57.124 "r_mbytes_per_sec": 0, 00:06:57.124 "w_mbytes_per_sec": 0 00:06:57.124 }, 00:06:57.124 "claimed": true, 00:06:57.124 "claim_type": "exclusive_write", 00:06:57.124 "zoned": false, 00:06:57.124 "supported_io_types": { 00:06:57.124 "read": true, 00:06:57.124 "write": true, 00:06:57.124 "unmap": true, 00:06:57.124 "write_zeroes": true, 00:06:57.124 "flush": true, 00:06:57.124 "reset": true, 00:06:57.124 "compare": false, 00:06:57.124 "compare_and_write": false, 00:06:57.124 "abort": true, 00:06:57.124 "nvme_admin": false, 00:06:57.124 "nvme_io": false 00:06:57.124 }, 00:06:57.124 "memory_domains": [ 00:06:57.124 { 00:06:57.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.124 "dma_device_type": 2 00:06:57.124 } 00:06:57.124 ], 00:06:57.124 "driver_specific": {} 00:06:57.124 } 00:06:57.124 ] 00:06:57.124 20:44:48 -- common/autotest_common.sh@895 -- # return 0 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.124 20:44:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.382 20:44:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:57.382 "name": "Existed_Raid", 00:06:57.382 "uuid": "299ef3b1-fc32-11ee-80f8-ef3e42bb1492", 00:06:57.382 "strip_size_kb": 64, 00:06:57.382 "state": "online", 00:06:57.382 "raid_level": "concat", 00:06:57.382 "superblock": true, 00:06:57.382 "num_base_bdevs": 2, 00:06:57.382 "num_base_bdevs_discovered": 2, 00:06:57.382 "num_base_bdevs_operational": 2, 00:06:57.382 "base_bdevs_list": [ 00:06:57.382 { 00:06:57.382 "name": "BaseBdev1", 00:06:57.382 "uuid": "29503714-fc32-11ee-80f8-ef3e42bb1492", 00:06:57.382 "is_configured": true, 00:06:57.382 "data_offset": 2048, 00:06:57.382 "data_size": 63488 00:06:57.382 }, 00:06:57.382 { 00:06:57.382 "name": "BaseBdev2", 00:06:57.382 "uuid": "2a03a985-fc32-11ee-80f8-ef3e42bb1492", 00:06:57.383 "is_configured": true, 00:06:57.383 "data_offset": 2048, 00:06:57.383 "data_size": 63488 00:06:57.383 } 00:06:57.383 ] 00:06:57.383 }' 00:06:57.383 20:44:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:57.383 20:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:57.640 20:44:48 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:57.900 [2024-04-16 20:44:48.827171] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:57.900 [2024-04-16 20:44:48.827197] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:06:57.900 [2024-04-16 20:44:48.827208] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.900 20:44:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.159 20:44:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:06:58.159 "name": "Existed_Raid", 00:06:58.159 "uuid": "299ef3b1-fc32-11ee-80f8-ef3e42bb1492", 00:06:58.159 "strip_size_kb": 64, 00:06:58.159 "state": "offline", 00:06:58.159 "raid_level": "concat", 00:06:58.159 "superblock": true, 00:06:58.159 "num_base_bdevs": 2, 00:06:58.159 "num_base_bdevs_discovered": 1, 00:06:58.159 "num_base_bdevs_operational": 1, 00:06:58.159 "base_bdevs_list": [ 00:06:58.159 { 00:06:58.159 "name": null, 00:06:58.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.159 "is_configured": false, 00:06:58.159 "data_offset": 2048, 00:06:58.159 "data_size": 63488 00:06:58.159 }, 00:06:58.159 { 00:06:58.159 "name": "BaseBdev2", 00:06:58.159 "uuid": "2a03a985-fc32-11ee-80f8-ef3e42bb1492", 00:06:58.159 "is_configured": true, 00:06:58.159 "data_offset": 2048, 00:06:58.159 "data_size": 63488 00:06:58.159 } 00:06:58.159 ] 00:06:58.159 }' 00:06:58.159 20:44:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:06:58.159 20:44:49 -- common/autotest_common.sh@10 -- # set +x 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:58.418 20:44:49 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:58.677 [2024-04-16 20:44:49.643849] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:58.677 [2024-04-16 20:44:49.643882] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b59ea00 name 
Existed_Raid, state offline 00:06:58.677 20:44:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:06:58.677 20:44:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:06:58.677 20:44:49 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.677 20:44:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:06:58.936 20:44:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:06:58.936 20:44:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:06:58.936 20:44:49 -- bdev/bdev_raid.sh@287 -- # killprocess 48339 00:06:58.936 20:44:49 -- common/autotest_common.sh@926 -- # '[' -z 48339 ']' 00:06:58.936 20:44:49 -- common/autotest_common.sh@930 -- # kill -0 48339 00:06:58.936 20:44:49 -- common/autotest_common.sh@931 -- # uname 00:06:58.936 20:44:49 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:58.936 20:44:49 -- common/autotest_common.sh@934 -- # ps -c -o command 48339 00:06:58.936 20:44:49 -- common/autotest_common.sh@934 -- # tail -1 00:06:58.936 20:44:49 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:06:58.937 20:44:49 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:06:58.937 killing process with pid 48339 00:06:58.937 20:44:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48339' 00:06:58.937 20:44:49 -- common/autotest_common.sh@945 -- # kill 48339 00:06:58.937 [2024-04-16 20:44:49.862086] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.937 20:44:49 -- common/autotest_common.sh@950 -- # wait 48339 00:06:58.937 [2024-04-16 20:44:49.862128] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.937 20:44:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:06:58.937 00:06:58.937 real 0m6.845s 00:06:58.937 user 0m11.730s 00:06:58.937 sys 0m1.265s 00:06:58.937 20:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.937 20:44:49 -- common/autotest_common.sh@10 -- # set +x 00:06:58.937 ************************************ 00:06:58.937 END TEST raid_state_function_test_sb 00:06:58.937 ************************************ 00:06:58.937 20:44:50 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:58.937 20:44:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:58.937 20:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.937 20:44:50 -- common/autotest_common.sh@10 -- # set +x 00:06:59.196 ************************************ 00:06:59.196 START TEST raid_superblock_test 00:06:59.196 ************************************ 00:06:59.196 20:44:50 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:06:59.196 20:44:50 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 
00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@357 -- # raid_pid=48538 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48538 /var/tmp/spdk-raid.sock 00:06:59.197 20:44:50 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:59.197 20:44:50 -- common/autotest_common.sh@819 -- # '[' -z 48538 ']' 00:06:59.197 20:44:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:59.197 20:44:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:59.197 20:44:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:59.197 20:44:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.197 20:44:50 -- common/autotest_common.sh@10 -- # set +x 00:06:59.197 [2024-04-16 20:44:50.064510] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:06:59.197 [2024-04-16 20:44:50.064875] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:59.455 EAL: TSC is not safe to use in SMP mode 00:06:59.455 EAL: TSC is not invariant 00:06:59.455 [2024-04-16 20:44:50.495542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.713 [2024-04-16 20:44:50.586089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.713 [2024-04-16 20:44:50.586487] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.713 [2024-04-16 20:44:50.586497] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.971 20:44:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.971 20:44:50 -- common/autotest_common.sh@852 -- # return 0 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:59.971 20:44:50 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:00.230 malloc1 00:07:00.230 20:44:51 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:00.230 [2024-04-16 20:44:51.325649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:00.230 [2024-04-16 20:44:51.325698] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.230 [2024-04-16 20:44:51.326235] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82af03780 00:07:00.230 [2024-04-16 20:44:51.326261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.230 [2024-04-16 20:44:51.326996] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.230 [2024-04-16 20:44:51.327027] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:00.230 pt1 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:00.489 malloc2 00:07:00.489 20:44:51 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:00.748 [2024-04-16 20:44:51.705657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:00.748 [2024-04-16 20:44:51.705704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.748 [2024-04-16 20:44:51.705730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82af03c80 00:07:00.748 [2024-04-16 20:44:51.705736] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.748 [2024-04-16 20:44:51.706239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.748 [2024-04-16 20:44:51.706264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:00.748 pt2 00:07:00.748 20:44:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:00.748 20:44:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:00.748 20:44:51 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:01.007 [2024-04-16 20:44:51.893674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:01.007 [2024-04-16 20:44:51.894101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:01.007 [2024-04-16 20:44:51.894162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82af03f00 00:07:01.007 [2024-04-16 20:44:51.894167] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:01.007 [2024-04-16 20:44:51.894194] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82af66e20 00:07:01.007 [2024-04-16 20:44:51.894250] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82af03f00 00:07:01.007 [2024-04-16 20:44:51.894253] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x82af03f00 00:07:01.007 [2024-04-16 20:44:51.894274] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.007 20:44:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:01.007 20:44:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:01.007 "name": "raid_bdev1", 00:07:01.007 "uuid": "2c6c8050-fc32-11ee-80f8-ef3e42bb1492", 00:07:01.007 "strip_size_kb": 64, 00:07:01.007 "state": "online", 00:07:01.007 "raid_level": "concat", 00:07:01.007 "superblock": true, 00:07:01.007 "num_base_bdevs": 2, 00:07:01.007 "num_base_bdevs_discovered": 2, 00:07:01.007 "num_base_bdevs_operational": 2, 00:07:01.007 "base_bdevs_list": [ 00:07:01.007 { 00:07:01.007 "name": "pt1", 00:07:01.007 "uuid": "6060cbf7-ec6f-ed5e-93c7-3f865d4280e8", 00:07:01.007 "is_configured": true, 00:07:01.007 "data_offset": 2048, 00:07:01.007 "data_size": 63488 00:07:01.007 }, 00:07:01.007 { 00:07:01.007 "name": "pt2", 00:07:01.007 "uuid": "028094fe-0d67-7652-be56-fdf04938290c", 00:07:01.007 "is_configured": true, 00:07:01.007 "data_offset": 2048, 00:07:01.007 "data_size": 63488 00:07:01.007 } 00:07:01.007 ] 00:07:01.007 }' 00:07:01.007 20:44:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:01.007 20:44:52 -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 20:44:52 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:01.266 20:44:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:01.525 [2024-04-16 20:44:52.533705] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.525 20:44:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2c6c8050-fc32-11ee-80f8-ef3e42bb1492 00:07:01.525 20:44:52 -- bdev/bdev_raid.sh@380 -- # '[' -z 2c6c8050-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:01.525 20:44:52 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:01.783 [2024-04-16 20:44:52.725678] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:01.783 [2024-04-16 20:44:52.725701] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.783 [2024-04-16 20:44:52.725737] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.783 [2024-04-16 20:44:52.725747] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.783 [2024-04-16 20:44:52.725750] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82af03f00 name raid_bdev1, state offline 00:07:01.783 20:44:52 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.783 20:44:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:02.042 20:44:52 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:02.042 20:44:52 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:02.042 20:44:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:02.042 20:44:52 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:02.042 20:44:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:02.042 20:44:53 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:02.299 20:44:53 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:02.299 20:44:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:02.556 20:44:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:02.556 20:44:53 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:02.556 20:44:53 -- common/autotest_common.sh@640 -- # local es=0 00:07:02.557 20:44:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:02.557 20:44:53 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.557 20:44:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:02.557 20:44:53 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.557 20:44:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:02.557 20:44:53 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.557 20:44:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:02.557 20:44:53 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.557 20:44:53 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:02.557 20:44:53 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:02.814 [2024-04-16 20:44:53.673706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:02.814 [2024-04-16 20:44:53.674149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:02.814 [2024-04-16 20:44:53.674170] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:02.814 [2024-04-16 20:44:53.674202] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:02.814 [2024-04-16 20:44:53.674210] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:02.814 [2024-04-16 20:44:53.674213] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af03c80 name raid_bdev1, state 
configuring 00:07:02.814 request: 00:07:02.814 { 00:07:02.814 "name": "raid_bdev1", 00:07:02.814 "raid_level": "concat", 00:07:02.814 "base_bdevs": [ 00:07:02.814 "malloc1", 00:07:02.814 "malloc2" 00:07:02.814 ], 00:07:02.814 "superblock": false, 00:07:02.814 "strip_size_kb": 64, 00:07:02.814 "method": "bdev_raid_create", 00:07:02.814 "req_id": 1 00:07:02.814 } 00:07:02.814 Got JSON-RPC error response 00:07:02.814 response: 00:07:02.814 { 00:07:02.814 "code": -17, 00:07:02.814 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:02.814 } 00:07:02.814 20:44:53 -- common/autotest_common.sh@643 -- # es=1 00:07:02.814 20:44:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:02.814 20:44:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:02.814 20:44:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:02.814 20:44:53 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:02.814 20:44:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:02.814 20:44:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:02.814 20:44:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:02.814 20:44:53 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:03.072 [2024-04-16 20:44:54.057722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:03.072 [2024-04-16 20:44:54.057779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.072 [2024-04-16 20:44:54.057805] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82af03780 00:07:03.072 [2024-04-16 20:44:54.057812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.072 [2024-04-16 20:44:54.058325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.072 [2024-04-16 20:44:54.058353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:03.072 [2024-04-16 20:44:54.058375] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:03.072 [2024-04-16 20:44:54.058385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:03.072 pt1 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:03.072 20:44:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.347 20:44:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:03.347 "name": "raid_bdev1", 
00:07:03.347 "uuid": "2c6c8050-fc32-11ee-80f8-ef3e42bb1492", 00:07:03.347 "strip_size_kb": 64, 00:07:03.347 "state": "configuring", 00:07:03.347 "raid_level": "concat", 00:07:03.347 "superblock": true, 00:07:03.347 "num_base_bdevs": 2, 00:07:03.347 "num_base_bdevs_discovered": 1, 00:07:03.348 "num_base_bdevs_operational": 2, 00:07:03.348 "base_bdevs_list": [ 00:07:03.348 { 00:07:03.348 "name": "pt1", 00:07:03.348 "uuid": "6060cbf7-ec6f-ed5e-93c7-3f865d4280e8", 00:07:03.348 "is_configured": true, 00:07:03.348 "data_offset": 2048, 00:07:03.348 "data_size": 63488 00:07:03.348 }, 00:07:03.348 { 00:07:03.348 "name": null, 00:07:03.348 "uuid": "028094fe-0d67-7652-be56-fdf04938290c", 00:07:03.348 "is_configured": false, 00:07:03.348 "data_offset": 2048, 00:07:03.348 "data_size": 63488 00:07:03.348 } 00:07:03.348 ] 00:07:03.348 }' 00:07:03.348 20:44:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:03.348 20:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:03.608 [2024-04-16 20:44:54.673741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:03.608 [2024-04-16 20:44:54.673787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.608 [2024-04-16 20:44:54.673811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82af03f00 00:07:03.608 [2024-04-16 20:44:54.673818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.608 [2024-04-16 20:44:54.673907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.608 [2024-04-16 20:44:54.673914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:03.608 [2024-04-16 20:44:54.673931] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:03.608 [2024-04-16 20:44:54.673937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:03.608 [2024-04-16 20:44:54.673956] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82af04180 00:07:03.608 [2024-04-16 20:44:54.673959] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.608 [2024-04-16 20:44:54.673974] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82af66e20 00:07:03.608 [2024-04-16 20:44:54.674007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82af04180 00:07:03.608 [2024-04-16 20:44:54.674010] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82af04180 00:07:03.608 [2024-04-16 20:44:54.674028] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.608 pt2 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:03.608 20:44:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.866 20:44:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:03.866 "name": "raid_bdev1", 00:07:03.866 "uuid": "2c6c8050-fc32-11ee-80f8-ef3e42bb1492", 00:07:03.866 "strip_size_kb": 64, 00:07:03.866 "state": "online", 00:07:03.866 "raid_level": "concat", 00:07:03.866 "superblock": true, 00:07:03.866 "num_base_bdevs": 2, 00:07:03.866 "num_base_bdevs_discovered": 2, 00:07:03.866 "num_base_bdevs_operational": 2, 00:07:03.866 "base_bdevs_list": [ 00:07:03.866 { 00:07:03.866 "name": "pt1", 00:07:03.866 "uuid": "6060cbf7-ec6f-ed5e-93c7-3f865d4280e8", 00:07:03.866 "is_configured": true, 00:07:03.866 "data_offset": 2048, 00:07:03.866 "data_size": 63488 00:07:03.866 }, 00:07:03.866 { 00:07:03.866 "name": "pt2", 00:07:03.866 "uuid": "028094fe-0d67-7652-be56-fdf04938290c", 00:07:03.866 "is_configured": true, 00:07:03.866 "data_offset": 2048, 00:07:03.866 "data_size": 63488 00:07:03.866 } 00:07:03.866 ] 00:07:03.866 }' 00:07:03.866 20:44:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:03.866 20:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:04.124 20:44:55 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:04.124 20:44:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:04.382 [2024-04-16 20:44:55.309789] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@430 -- # '[' 2c6c8050-fc32-11ee-80f8-ef3e42bb1492 '!=' 2c6c8050-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@511 -- # killprocess 48538 00:07:04.383 20:44:55 -- common/autotest_common.sh@926 -- # '[' -z 48538 ']' 00:07:04.383 20:44:55 -- common/autotest_common.sh@930 -- # kill -0 48538 00:07:04.383 20:44:55 -- common/autotest_common.sh@931 -- # uname 00:07:04.383 20:44:55 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:04.383 20:44:55 -- common/autotest_common.sh@934 -- # ps -c -o command 48538 00:07:04.383 20:44:55 -- common/autotest_common.sh@934 -- # tail -1 00:07:04.383 20:44:55 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:04.383 20:44:55 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:04.383 killing process with pid 48538 00:07:04.383 20:44:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48538' 00:07:04.383 20:44:55 -- common/autotest_common.sh@945 -- # kill 48538 00:07:04.383 [2024-04-16 20:44:55.343019] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.383 
[2024-04-16 20:44:55.343035] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.383 [2024-04-16 20:44:55.343054] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.383 [2024-04-16 20:44:55.343058] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af04180 name raid_bdev1, state offline 00:07:04.383 20:44:55 -- common/autotest_common.sh@950 -- # wait 48538 00:07:04.383 [2024-04-16 20:44:55.352211] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.383 20:44:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:04.383 00:07:04.383 real 0m5.440s 00:07:04.383 user 0m9.154s 00:07:04.383 sys 0m1.046s 00:07:04.642 20:44:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.642 20:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.642 ************************************ 00:07:04.642 END TEST raid_superblock_test 00:07:04.642 ************************************ 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:04.642 20:44:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:04.642 20:44:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.642 20:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.642 ************************************ 00:07:04.642 START TEST raid_state_function_test 00:07:04.642 ************************************ 00:07:04.642 20:44:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:04.642 20:44:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=48683 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48683' 00:07:04.643 Process raid pid: 48683 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@225 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:04.643 20:44:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48683 /var/tmp/spdk-raid.sock 00:07:04.643 20:44:55 -- common/autotest_common.sh@819 -- # '[' -z 48683 ']' 00:07:04.643 20:44:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:04.643 20:44:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:04.643 20:44:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:04.643 20:44:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.643 20:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.643 [2024-04-16 20:44:55.560320] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:04.643 [2024-04-16 20:44:55.560674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:04.925 EAL: TSC is not safe to use in SMP mode 00:07:04.925 EAL: TSC is not invariant 00:07:04.925 [2024-04-16 20:44:55.990362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.199 [2024-04-16 20:44:56.081225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.199 [2024-04-16 20:44:56.081646] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.199 [2024-04-16 20:44:56.081654] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.458 20:44:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.458 20:44:56 -- common/autotest_common.sh@852 -- # return 0 00:07:05.458 20:44:56 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:05.717 [2024-04-16 20:44:56.608723] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.717 [2024-04-16 20:44:56.608773] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.717 [2024-04-16 20:44:56.608776] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.717 [2024-04-16 20:44:56.608782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:05.717 "name": "Existed_Raid", 00:07:05.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.717 "strip_size_kb": 0, 00:07:05.717 "state": "configuring", 00:07:05.717 "raid_level": "raid1", 00:07:05.717 "superblock": false, 00:07:05.717 "num_base_bdevs": 2, 00:07:05.717 "num_base_bdevs_discovered": 0, 00:07:05.717 "num_base_bdevs_operational": 2, 00:07:05.717 "base_bdevs_list": [ 00:07:05.717 { 00:07:05.717 "name": "BaseBdev1", 00:07:05.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.717 "is_configured": false, 00:07:05.717 "data_offset": 0, 00:07:05.717 "data_size": 0 00:07:05.717 }, 00:07:05.717 { 00:07:05.717 "name": "BaseBdev2", 00:07:05.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.717 "is_configured": false, 00:07:05.717 "data_offset": 0, 00:07:05.717 "data_size": 0 00:07:05.717 } 00:07:05.717 ] 00:07:05.717 }' 00:07:05.717 20:44:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:05.717 20:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:06.284 20:44:57 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:06.284 [2024-04-16 20:44:57.260734] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.285 [2024-04-16 20:44:57.260760] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24a500 name Existed_Raid, state configuring 00:07:06.285 20:44:57 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:06.543 [2024-04-16 20:44:57.420741] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.543 [2024-04-16 20:44:57.420794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.543 [2024-04-16 20:44:57.420797] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.543 [2024-04-16 20:44:57.420803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.543 20:44:57 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.543 [2024-04-16 20:44:57.577563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.543 BaseBdev1 00:07:06.543 20:44:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:06.543 20:44:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:06.543 20:44:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:06.543 20:44:57 -- common/autotest_common.sh@889 -- # local i 00:07:06.543 20:44:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:06.543 20:44:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:06.543 20:44:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:06.800 20:44:57 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:07.058 [ 00:07:07.058 { 00:07:07.058 "name": "BaseBdev1", 00:07:07.058 "aliases": [ 00:07:07.058 "2fcfac0f-fc32-11ee-80f8-ef3e42bb1492" 00:07:07.058 ], 00:07:07.058 
"product_name": "Malloc disk", 00:07:07.058 "block_size": 512, 00:07:07.058 "num_blocks": 65536, 00:07:07.058 "uuid": "2fcfac0f-fc32-11ee-80f8-ef3e42bb1492", 00:07:07.058 "assigned_rate_limits": { 00:07:07.058 "rw_ios_per_sec": 0, 00:07:07.058 "rw_mbytes_per_sec": 0, 00:07:07.058 "r_mbytes_per_sec": 0, 00:07:07.058 "w_mbytes_per_sec": 0 00:07:07.058 }, 00:07:07.058 "claimed": true, 00:07:07.058 "claim_type": "exclusive_write", 00:07:07.058 "zoned": false, 00:07:07.058 "supported_io_types": { 00:07:07.058 "read": true, 00:07:07.058 "write": true, 00:07:07.058 "unmap": true, 00:07:07.058 "write_zeroes": true, 00:07:07.058 "flush": true, 00:07:07.058 "reset": true, 00:07:07.058 "compare": false, 00:07:07.058 "compare_and_write": false, 00:07:07.058 "abort": true, 00:07:07.058 "nvme_admin": false, 00:07:07.058 "nvme_io": false 00:07:07.058 }, 00:07:07.058 "memory_domains": [ 00:07:07.058 { 00:07:07.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.058 "dma_device_type": 2 00:07:07.058 } 00:07:07.058 ], 00:07:07.058 "driver_specific": {} 00:07:07.058 } 00:07:07.058 ] 00:07:07.058 20:44:57 -- common/autotest_common.sh@895 -- # return 0 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.058 20:44:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.058 20:44:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:07.058 "name": "Existed_Raid", 00:07:07.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.058 "strip_size_kb": 0, 00:07:07.058 "state": "configuring", 00:07:07.058 "raid_level": "raid1", 00:07:07.058 "superblock": false, 00:07:07.058 "num_base_bdevs": 2, 00:07:07.058 "num_base_bdevs_discovered": 1, 00:07:07.058 "num_base_bdevs_operational": 2, 00:07:07.058 "base_bdevs_list": [ 00:07:07.058 { 00:07:07.058 "name": "BaseBdev1", 00:07:07.058 "uuid": "2fcfac0f-fc32-11ee-80f8-ef3e42bb1492", 00:07:07.058 "is_configured": true, 00:07:07.058 "data_offset": 0, 00:07:07.058 "data_size": 65536 00:07:07.058 }, 00:07:07.058 { 00:07:07.058 "name": "BaseBdev2", 00:07:07.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.058 "is_configured": false, 00:07:07.058 "data_offset": 0, 00:07:07.058 "data_size": 0 00:07:07.058 } 00:07:07.058 ] 00:07:07.058 }' 00:07:07.058 20:44:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:07.058 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:07.625 20:44:58 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:07.625 [2024-04-16 20:44:58.596769] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:07:07.625 [2024-04-16 20:44:58.596819] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24a500 name Existed_Raid, state configuring 00:07:07.625 20:44:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:07.625 20:44:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:07.882 [2024-04-16 20:44:58.760785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.882 [2024-04-16 20:44:58.761388] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.882 [2024-04-16 20:44:58.761424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.882 20:44:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:07.882 "name": "Existed_Raid", 00:07:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.882 "strip_size_kb": 0, 00:07:07.882 "state": "configuring", 00:07:07.882 "raid_level": "raid1", 00:07:07.882 "superblock": false, 00:07:07.882 "num_base_bdevs": 2, 00:07:07.882 "num_base_bdevs_discovered": 1, 00:07:07.882 "num_base_bdevs_operational": 2, 00:07:07.882 "base_bdevs_list": [ 00:07:07.882 { 00:07:07.882 "name": "BaseBdev1", 00:07:07.882 "uuid": "2fcfac0f-fc32-11ee-80f8-ef3e42bb1492", 00:07:07.882 "is_configured": true, 00:07:07.883 "data_offset": 0, 00:07:07.883 "data_size": 65536 00:07:07.883 }, 00:07:07.883 { 00:07:07.883 "name": "BaseBdev2", 00:07:07.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.883 "is_configured": false, 00:07:07.883 "data_offset": 0, 00:07:07.883 "data_size": 0 00:07:07.883 } 00:07:07.883 ] 00:07:07.883 }' 00:07:07.883 20:44:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:07.883 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:08.141 20:44:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:08.399 [2024-04-16 20:44:59.392894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:08.399 [2024-04-16 20:44:59.392917] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c24aa00 00:07:08.399 [2024-04-16 20:44:59.392920] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:08.399 [2024-04-16 20:44:59.392936] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c2adec0 00:07:08.399 [2024-04-16 20:44:59.393004] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c24aa00 00:07:08.399 [2024-04-16 20:44:59.393007] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c24aa00 00:07:08.399 [2024-04-16 20:44:59.393031] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.399 BaseBdev2 00:07:08.399 20:44:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:08.399 20:44:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:08.399 20:44:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:08.399 20:44:59 -- common/autotest_common.sh@889 -- # local i 00:07:08.399 20:44:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:08.399 20:44:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:08.399 20:44:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:08.658 20:44:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:08.658 [ 00:07:08.658 { 00:07:08.658 "name": "BaseBdev2", 00:07:08.658 "aliases": [ 00:07:08.658 "30e4c6cf-fc32-11ee-80f8-ef3e42bb1492" 00:07:08.658 ], 00:07:08.658 "product_name": "Malloc disk", 00:07:08.658 "block_size": 512, 00:07:08.658 "num_blocks": 65536, 00:07:08.658 "uuid": "30e4c6cf-fc32-11ee-80f8-ef3e42bb1492", 00:07:08.658 "assigned_rate_limits": { 00:07:08.658 "rw_ios_per_sec": 0, 00:07:08.658 "rw_mbytes_per_sec": 0, 00:07:08.658 "r_mbytes_per_sec": 0, 00:07:08.658 "w_mbytes_per_sec": 0 00:07:08.658 }, 00:07:08.658 "claimed": true, 00:07:08.658 "claim_type": "exclusive_write", 00:07:08.658 "zoned": false, 00:07:08.658 "supported_io_types": { 00:07:08.658 "read": true, 00:07:08.658 "write": true, 00:07:08.658 "unmap": true, 00:07:08.658 "write_zeroes": true, 00:07:08.658 "flush": true, 00:07:08.658 "reset": true, 00:07:08.658 "compare": false, 00:07:08.658 "compare_and_write": false, 00:07:08.658 "abort": true, 00:07:08.658 "nvme_admin": false, 00:07:08.658 "nvme_io": false 00:07:08.658 }, 00:07:08.658 "memory_domains": [ 00:07:08.658 { 00:07:08.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.658 "dma_device_type": 2 00:07:08.658 } 00:07:08.658 ], 00:07:08.658 "driver_specific": {} 00:07:08.658 } 00:07:08.658 ] 00:07:08.658 20:44:59 -- common/autotest_common.sh@895 -- # return 0 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:08.658 20:44:59 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.658 20:44:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.916 20:44:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:08.917 "name": "Existed_Raid", 00:07:08.917 "uuid": "30e4cc1f-fc32-11ee-80f8-ef3e42bb1492", 00:07:08.917 "strip_size_kb": 0, 00:07:08.917 "state": "online", 00:07:08.917 "raid_level": "raid1", 00:07:08.917 "superblock": false, 00:07:08.917 "num_base_bdevs": 2, 00:07:08.917 "num_base_bdevs_discovered": 2, 00:07:08.917 "num_base_bdevs_operational": 2, 00:07:08.917 "base_bdevs_list": [ 00:07:08.917 { 00:07:08.917 "name": "BaseBdev1", 00:07:08.917 "uuid": "2fcfac0f-fc32-11ee-80f8-ef3e42bb1492", 00:07:08.917 "is_configured": true, 00:07:08.917 "data_offset": 0, 00:07:08.917 "data_size": 65536 00:07:08.917 }, 00:07:08.917 { 00:07:08.917 "name": "BaseBdev2", 00:07:08.917 "uuid": "30e4c6cf-fc32-11ee-80f8-ef3e42bb1492", 00:07:08.917 "is_configured": true, 00:07:08.917 "data_offset": 0, 00:07:08.917 "data_size": 65536 00:07:08.917 } 00:07:08.917 ] 00:07:08.917 }' 00:07:08.917 20:44:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:08.917 20:44:59 -- common/autotest_common.sh@10 -- # set +x 00:07:09.175 20:45:00 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:09.432 [2024-04-16 20:45:00.364827] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.433 20:45:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.691 20:45:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:09.691 "name": "Existed_Raid", 00:07:09.691 "uuid": "30e4cc1f-fc32-11ee-80f8-ef3e42bb1492", 00:07:09.691 "strip_size_kb": 0, 00:07:09.691 "state": "online", 00:07:09.691 "raid_level": "raid1", 00:07:09.691 "superblock": false, 00:07:09.691 "num_base_bdevs": 2, 00:07:09.691 "num_base_bdevs_discovered": 1, 00:07:09.691 "num_base_bdevs_operational": 1, 00:07:09.691 
"base_bdevs_list": [ 00:07:09.691 { 00:07:09.691 "name": null, 00:07:09.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.691 "is_configured": false, 00:07:09.691 "data_offset": 0, 00:07:09.691 "data_size": 65536 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "name": "BaseBdev2", 00:07:09.691 "uuid": "30e4c6cf-fc32-11ee-80f8-ef3e42bb1492", 00:07:09.691 "is_configured": true, 00:07:09.691 "data_offset": 0, 00:07:09.691 "data_size": 65536 00:07:09.691 } 00:07:09.691 ] 00:07:09.691 }' 00:07:09.691 20:45:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:09.691 20:45:00 -- common/autotest_common.sh@10 -- # set +x 00:07:09.949 20:45:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:09.949 20:45:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:09.949 20:45:00 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.949 20:45:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:09.949 20:45:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:09.949 20:45:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:09.949 20:45:01 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:10.208 [2024-04-16 20:45:01.205615] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.208 [2024-04-16 20:45:01.205638] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.208 [2024-04-16 20:45:01.205651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.208 [2024-04-16 20:45:01.210256] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.208 [2024-04-16 20:45:01.210265] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24aa00 name Existed_Raid, state offline 00:07:10.208 20:45:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:10.208 20:45:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:10.208 20:45:01 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.208 20:45:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.467 20:45:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:10.467 20:45:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:10.467 20:45:01 -- bdev/bdev_raid.sh@287 -- # killprocess 48683 00:07:10.467 20:45:01 -- common/autotest_common.sh@926 -- # '[' -z 48683 ']' 00:07:10.467 20:45:01 -- common/autotest_common.sh@930 -- # kill -0 48683 00:07:10.467 20:45:01 -- common/autotest_common.sh@931 -- # uname 00:07:10.467 20:45:01 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:10.467 20:45:01 -- common/autotest_common.sh@934 -- # tail -1 00:07:10.467 20:45:01 -- common/autotest_common.sh@934 -- # ps -c -o command 48683 00:07:10.467 20:45:01 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:10.467 20:45:01 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:10.467 killing process with pid 48683 00:07:10.467 20:45:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48683' 00:07:10.467 20:45:01 -- common/autotest_common.sh@945 -- # kill 48683 00:07:10.467 [2024-04-16 20:45:01.429193] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.467 [2024-04-16 20:45:01.429232] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.467 20:45:01 -- common/autotest_common.sh@950 -- # wait 48683 00:07:10.467 20:45:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:10.467 00:07:10.467 real 0m6.027s 00:07:10.467 user 0m10.179s 00:07:10.467 sys 0m1.196s 00:07:10.467 20:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.467 20:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.467 ************************************ 00:07:10.467 END TEST raid_state_function_test 00:07:10.467 ************************************ 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:10.727 20:45:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:10.727 20:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.727 20:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.727 ************************************ 00:07:10.727 START TEST raid_state_function_test_sb 00:07:10.727 ************************************ 00:07:10.727 20:45:01 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=48879 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48879' 00:07:10.727 Process raid pid: 48879 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:10.727 20:45:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48879 /var/tmp/spdk-raid.sock 00:07:10.727 20:45:01 -- common/autotest_common.sh@819 -- # '[' -z 48879 ']' 00:07:10.727 20:45:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:10.727 20:45:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.727 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock... 00:07:10.727 20:45:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:10.727 20:45:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.727 20:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.727 [2024-04-16 20:45:01.637267] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:10.727 [2024-04-16 20:45:01.637612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:10.994 EAL: TSC is not safe to use in SMP mode 00:07:10.995 EAL: TSC is not invariant 00:07:10.995 [2024-04-16 20:45:02.066973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.265 [2024-04-16 20:45:02.157001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.265 [2024-04-16 20:45:02.157438] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.265 [2024-04-16 20:45:02.157448] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.523 20:45:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.523 20:45:02 -- common/autotest_common.sh@852 -- # return 0 00:07:11.523 20:45:02 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:11.782 [2024-04-16 20:45:02.716497] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.782 [2024-04-16 20:45:02.716544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.782 [2024-04-16 20:45:02.716548] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.782 [2024-04-16 20:45:02.716555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.782 20:45:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.040 20:45:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:12.040 "name": "Existed_Raid", 00:07:12.040 "uuid": "32dfef00-fc32-11ee-80f8-ef3e42bb1492", 00:07:12.040 "strip_size_kb": 0, 00:07:12.040 "state": "configuring", 00:07:12.040 "raid_level": "raid1", 00:07:12.040 "superblock": true, 00:07:12.040 "num_base_bdevs": 2, 00:07:12.040 "num_base_bdevs_discovered": 0, 00:07:12.040 "num_base_bdevs_operational": 2, 00:07:12.040 "base_bdevs_list": [ 00:07:12.040 { 
00:07:12.040 "name": "BaseBdev1", 00:07:12.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.040 "is_configured": false, 00:07:12.040 "data_offset": 0, 00:07:12.040 "data_size": 0 00:07:12.040 }, 00:07:12.040 { 00:07:12.040 "name": "BaseBdev2", 00:07:12.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.040 "is_configured": false, 00:07:12.040 "data_offset": 0, 00:07:12.040 "data_size": 0 00:07:12.040 } 00:07:12.040 ] 00:07:12.040 }' 00:07:12.040 20:45:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:12.040 20:45:02 -- common/autotest_common.sh@10 -- # set +x 00:07:12.298 20:45:03 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:12.298 [2024-04-16 20:45:03.360483] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.298 [2024-04-16 20:45:03.360509] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b489500 name Existed_Raid, state configuring 00:07:12.298 20:45:03 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:12.555 [2024-04-16 20:45:03.544503] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.555 [2024-04-16 20:45:03.544551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.555 [2024-04-16 20:45:03.544555] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.555 [2024-04-16 20:45:03.544562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.555 20:45:03 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.813 [2024-04-16 20:45:03.733283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.813 BaseBdev1 00:07:12.813 20:45:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:12.813 20:45:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:12.813 20:45:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:12.813 20:45:03 -- common/autotest_common.sh@889 -- # local i 00:07:12.813 20:45:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:12.813 20:45:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:12.813 20:45:03 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:12.813 20:45:03 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.073 [ 00:07:13.073 { 00:07:13.073 "name": "BaseBdev1", 00:07:13.073 "aliases": [ 00:07:13.073 "337af68d-fc32-11ee-80f8-ef3e42bb1492" 00:07:13.073 ], 00:07:13.073 "product_name": "Malloc disk", 00:07:13.073 "block_size": 512, 00:07:13.073 "num_blocks": 65536, 00:07:13.073 "uuid": "337af68d-fc32-11ee-80f8-ef3e42bb1492", 00:07:13.073 "assigned_rate_limits": { 00:07:13.073 "rw_ios_per_sec": 0, 00:07:13.073 "rw_mbytes_per_sec": 0, 00:07:13.073 "r_mbytes_per_sec": 0, 00:07:13.073 "w_mbytes_per_sec": 0 00:07:13.073 }, 00:07:13.073 "claimed": true, 00:07:13.073 "claim_type": "exclusive_write", 00:07:13.073 "zoned": false, 00:07:13.073 "supported_io_types": { 00:07:13.073 "read": true, 00:07:13.073 
"write": true, 00:07:13.073 "unmap": true, 00:07:13.073 "write_zeroes": true, 00:07:13.073 "flush": true, 00:07:13.073 "reset": true, 00:07:13.073 "compare": false, 00:07:13.073 "compare_and_write": false, 00:07:13.073 "abort": true, 00:07:13.073 "nvme_admin": false, 00:07:13.073 "nvme_io": false 00:07:13.073 }, 00:07:13.073 "memory_domains": [ 00:07:13.073 { 00:07:13.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.073 "dma_device_type": 2 00:07:13.073 } 00:07:13.073 ], 00:07:13.073 "driver_specific": {} 00:07:13.073 } 00:07:13.073 ] 00:07:13.073 20:45:04 -- common/autotest_common.sh@895 -- # return 0 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:13.073 20:45:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.334 20:45:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:13.334 "name": "Existed_Raid", 00:07:13.334 "uuid": "335e46f2-fc32-11ee-80f8-ef3e42bb1492", 00:07:13.334 "strip_size_kb": 0, 00:07:13.334 "state": "configuring", 00:07:13.334 "raid_level": "raid1", 00:07:13.334 "superblock": true, 00:07:13.334 "num_base_bdevs": 2, 00:07:13.334 "num_base_bdevs_discovered": 1, 00:07:13.334 "num_base_bdevs_operational": 2, 00:07:13.334 "base_bdevs_list": [ 00:07:13.334 { 00:07:13.334 "name": "BaseBdev1", 00:07:13.334 "uuid": "337af68d-fc32-11ee-80f8-ef3e42bb1492", 00:07:13.334 "is_configured": true, 00:07:13.334 "data_offset": 2048, 00:07:13.334 "data_size": 63488 00:07:13.334 }, 00:07:13.334 { 00:07:13.334 "name": "BaseBdev2", 00:07:13.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.334 "is_configured": false, 00:07:13.334 "data_offset": 0, 00:07:13.334 "data_size": 0 00:07:13.334 } 00:07:13.334 ] 00:07:13.334 }' 00:07:13.334 20:45:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:13.334 20:45:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.595 20:45:04 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:13.595 [2024-04-16 20:45:04.688499] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.595 [2024-04-16 20:45:04.688528] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b489500 name Existed_Raid, state configuring 00:07:13.853 20:45:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:13.853 20:45:04 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:13.853 20:45:04 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev1 00:07:14.111 BaseBdev1 00:07:14.111 20:45:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:14.111 20:45:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:14.111 20:45:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:14.111 20:45:05 -- common/autotest_common.sh@889 -- # local i 00:07:14.111 20:45:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:14.111 20:45:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:14.111 20:45:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:14.369 20:45:05 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.369 [ 00:07:14.369 { 00:07:14.369 "name": "BaseBdev1", 00:07:14.369 "aliases": [ 00:07:14.369 "3445b144-fc32-11ee-80f8-ef3e42bb1492" 00:07:14.369 ], 00:07:14.369 "product_name": "Malloc disk", 00:07:14.369 "block_size": 512, 00:07:14.369 "num_blocks": 65536, 00:07:14.369 "uuid": "3445b144-fc32-11ee-80f8-ef3e42bb1492", 00:07:14.369 "assigned_rate_limits": { 00:07:14.369 "rw_ios_per_sec": 0, 00:07:14.369 "rw_mbytes_per_sec": 0, 00:07:14.369 "r_mbytes_per_sec": 0, 00:07:14.369 "w_mbytes_per_sec": 0 00:07:14.369 }, 00:07:14.369 "claimed": false, 00:07:14.369 "zoned": false, 00:07:14.369 "supported_io_types": { 00:07:14.369 "read": true, 00:07:14.369 "write": true, 00:07:14.369 "unmap": true, 00:07:14.369 "write_zeroes": true, 00:07:14.369 "flush": true, 00:07:14.369 "reset": true, 00:07:14.369 "compare": false, 00:07:14.369 "compare_and_write": false, 00:07:14.369 "abort": true, 00:07:14.369 "nvme_admin": false, 00:07:14.369 "nvme_io": false 00:07:14.369 }, 00:07:14.369 "memory_domains": [ 00:07:14.369 { 00:07:14.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.369 "dma_device_type": 2 00:07:14.369 } 00:07:14.369 ], 00:07:14.369 "driver_specific": {} 00:07:14.369 } 00:07:14.369 ] 00:07:14.369 20:45:05 -- common/autotest_common.sh@895 -- # return 0 00:07:14.369 20:45:05 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:14.627 [2024-04-16 20:45:05.617115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.627 [2024-04-16 20:45:05.617562] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.627 [2024-04-16 20:45:05.617601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.627 20:45:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.886 20:45:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:14.886 "name": "Existed_Raid", 00:07:14.886 "uuid": "349a8847-fc32-11ee-80f8-ef3e42bb1492", 00:07:14.886 "strip_size_kb": 0, 00:07:14.886 "state": "configuring", 00:07:14.886 "raid_level": "raid1", 00:07:14.886 "superblock": true, 00:07:14.886 "num_base_bdevs": 2, 00:07:14.886 "num_base_bdevs_discovered": 1, 00:07:14.886 "num_base_bdevs_operational": 2, 00:07:14.886 "base_bdevs_list": [ 00:07:14.886 { 00:07:14.886 "name": "BaseBdev1", 00:07:14.886 "uuid": "3445b144-fc32-11ee-80f8-ef3e42bb1492", 00:07:14.886 "is_configured": true, 00:07:14.886 "data_offset": 2048, 00:07:14.886 "data_size": 63488 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "name": "BaseBdev2", 00:07:14.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.886 "is_configured": false, 00:07:14.886 "data_offset": 0, 00:07:14.886 "data_size": 0 00:07:14.886 } 00:07:14.886 ] 00:07:14.886 }' 00:07:14.886 20:45:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:14.886 20:45:05 -- common/autotest_common.sh@10 -- # set +x 00:07:15.144 20:45:06 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.403 [2024-04-16 20:45:06.257217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.403 [2024-04-16 20:45:06.257276] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b489a00 00:07:15.403 [2024-04-16 20:45:06.257281] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:15.403 [2024-04-16 20:45:06.257303] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4ecec0 00:07:15.403 [2024-04-16 20:45:06.257332] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b489a00 00:07:15.403 [2024-04-16 20:45:06.257335] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b489a00 00:07:15.403 [2024-04-16 20:45:06.257349] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.403 BaseBdev2 00:07:15.403 20:45:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:15.403 20:45:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:15.403 20:45:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:15.403 20:45:06 -- common/autotest_common.sh@889 -- # local i 00:07:15.403 20:45:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:15.403 20:45:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:15.403 20:45:06 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:15.403 20:45:06 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.661 [ 00:07:15.661 { 00:07:15.661 "name": "BaseBdev2", 00:07:15.661 "aliases": [ 00:07:15.661 "34fc30bb-fc32-11ee-80f8-ef3e42bb1492" 00:07:15.661 ], 00:07:15.661 "product_name": "Malloc disk", 00:07:15.661 "block_size": 512, 00:07:15.661 "num_blocks": 65536, 
00:07:15.661 "uuid": "34fc30bb-fc32-11ee-80f8-ef3e42bb1492", 00:07:15.661 "assigned_rate_limits": { 00:07:15.661 "rw_ios_per_sec": 0, 00:07:15.661 "rw_mbytes_per_sec": 0, 00:07:15.661 "r_mbytes_per_sec": 0, 00:07:15.661 "w_mbytes_per_sec": 0 00:07:15.661 }, 00:07:15.661 "claimed": true, 00:07:15.661 "claim_type": "exclusive_write", 00:07:15.661 "zoned": false, 00:07:15.661 "supported_io_types": { 00:07:15.661 "read": true, 00:07:15.661 "write": true, 00:07:15.661 "unmap": true, 00:07:15.661 "write_zeroes": true, 00:07:15.661 "flush": true, 00:07:15.661 "reset": true, 00:07:15.661 "compare": false, 00:07:15.661 "compare_and_write": false, 00:07:15.661 "abort": true, 00:07:15.661 "nvme_admin": false, 00:07:15.661 "nvme_io": false 00:07:15.661 }, 00:07:15.661 "memory_domains": [ 00:07:15.661 { 00:07:15.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.661 "dma_device_type": 2 00:07:15.661 } 00:07:15.661 ], 00:07:15.661 "driver_specific": {} 00:07:15.661 } 00:07:15.661 ] 00:07:15.661 20:45:06 -- common/autotest_common.sh@895 -- # return 0 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.661 20:45:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.919 20:45:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:15.919 "name": "Existed_Raid", 00:07:15.920 "uuid": "349a8847-fc32-11ee-80f8-ef3e42bb1492", 00:07:15.920 "strip_size_kb": 0, 00:07:15.920 "state": "online", 00:07:15.920 "raid_level": "raid1", 00:07:15.920 "superblock": true, 00:07:15.920 "num_base_bdevs": 2, 00:07:15.920 "num_base_bdevs_discovered": 2, 00:07:15.920 "num_base_bdevs_operational": 2, 00:07:15.920 "base_bdevs_list": [ 00:07:15.920 { 00:07:15.920 "name": "BaseBdev1", 00:07:15.920 "uuid": "3445b144-fc32-11ee-80f8-ef3e42bb1492", 00:07:15.920 "is_configured": true, 00:07:15.920 "data_offset": 2048, 00:07:15.920 "data_size": 63488 00:07:15.920 }, 00:07:15.920 { 00:07:15.920 "name": "BaseBdev2", 00:07:15.920 "uuid": "34fc30bb-fc32-11ee-80f8-ef3e42bb1492", 00:07:15.920 "is_configured": true, 00:07:15.920 "data_offset": 2048, 00:07:15.920 "data_size": 63488 00:07:15.920 } 00:07:15.920 ] 00:07:15.920 }' 00:07:15.920 20:45:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:15.920 20:45:06 -- common/autotest_common.sh@10 -- # set +x 00:07:16.178 20:45:07 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:16.178 [2024-04-16 20:45:07.281137] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:16.436 "name": "Existed_Raid", 00:07:16.436 "uuid": "349a8847-fc32-11ee-80f8-ef3e42bb1492", 00:07:16.436 "strip_size_kb": 0, 00:07:16.436 "state": "online", 00:07:16.436 "raid_level": "raid1", 00:07:16.436 "superblock": true, 00:07:16.436 "num_base_bdevs": 2, 00:07:16.436 "num_base_bdevs_discovered": 1, 00:07:16.436 "num_base_bdevs_operational": 1, 00:07:16.436 "base_bdevs_list": [ 00:07:16.436 { 00:07:16.436 "name": null, 00:07:16.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.436 "is_configured": false, 00:07:16.436 "data_offset": 2048, 00:07:16.436 "data_size": 63488 00:07:16.436 }, 00:07:16.436 { 00:07:16.436 "name": "BaseBdev2", 00:07:16.436 "uuid": "34fc30bb-fc32-11ee-80f8-ef3e42bb1492", 00:07:16.436 "is_configured": true, 00:07:16.436 "data_offset": 2048, 00:07:16.436 "data_size": 63488 00:07:16.436 } 00:07:16.436 ] 00:07:16.436 }' 00:07:16.436 20:45:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:16.436 20:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 20:45:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:16.694 20:45:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:16.694 20:45:07 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.694 20:45:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:16.952 20:45:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:16.952 20:45:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.952 20:45:07 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:17.211 [2024-04-16 20:45:08.077855] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.211 [2024-04-16 20:45:08.077879] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.211 [2024-04-16 20:45:08.077891] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:07:17.211 [2024-04-16 20:45:08.082491] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.211 [2024-04-16 20:45:08.082502] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b489a00 name Existed_Raid, state offline 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:17.211 20:45:08 -- bdev/bdev_raid.sh@287 -- # killprocess 48879 00:07:17.211 20:45:08 -- common/autotest_common.sh@926 -- # '[' -z 48879 ']' 00:07:17.211 20:45:08 -- common/autotest_common.sh@930 -- # kill -0 48879 00:07:17.211 20:45:08 -- common/autotest_common.sh@931 -- # uname 00:07:17.211 20:45:08 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:17.211 20:45:08 -- common/autotest_common.sh@934 -- # ps -c -o command 48879 00:07:17.211 20:45:08 -- common/autotest_common.sh@934 -- # tail -1 00:07:17.211 20:45:08 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:17.211 20:45:08 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:17.211 killing process with pid 48879 00:07:17.211 20:45:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48879' 00:07:17.211 20:45:08 -- common/autotest_common.sh@945 -- # kill 48879 00:07:17.211 [2024-04-16 20:45:08.297632] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.211 20:45:08 -- common/autotest_common.sh@950 -- # wait 48879 00:07:17.211 [2024-04-16 20:45:08.297677] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.470 20:45:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:17.470 00:07:17.470 real 0m6.818s 00:07:17.470 user 0m11.659s 00:07:17.470 sys 0m1.281s 00:07:17.470 20:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.470 20:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 ************************************ 00:07:17.470 END TEST raid_state_function_test_sb 00:07:17.470 ************************************ 00:07:17.470 20:45:08 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:17.470 20:45:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:17.471 20:45:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.471 20:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.471 ************************************ 00:07:17.471 START TEST raid_superblock_test 00:07:17.471 ************************************ 00:07:17.471 20:45:08 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:17.471 
20:45:08 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@357 -- # raid_pid=49078 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49078 /var/tmp/spdk-raid.sock 00:07:17.471 20:45:08 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:17.471 20:45:08 -- common/autotest_common.sh@819 -- # '[' -z 49078 ']' 00:07:17.471 20:45:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:17.471 20:45:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:17.471 20:45:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:17.471 20:45:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.471 20:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.471 [2024-04-16 20:45:08.500930] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:17.471 [2024-04-16 20:45:08.501243] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:18.038 EAL: TSC is not safe to use in SMP mode 00:07:18.038 EAL: TSC is not invariant 00:07:18.038 [2024-04-16 20:45:08.924844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.038 [2024-04-16 20:45:09.006793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.038 [2024-04-16 20:45:09.007219] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.038 [2024-04-16 20:45:09.007228] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.606 20:45:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.606 20:45:09 -- common/autotest_common.sh@852 -- # return 0 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:18.606 malloc1 00:07:18.606 20:45:09 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:18.864 [2024-04-16 20:45:09.746254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.864 [2024-04-16 20:45:09.746304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.864 [2024-04-16 20:45:09.746817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c206780 00:07:18.864 [2024-04-16 20:45:09.746838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.864 [2024-04-16 20:45:09.747574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.864 [2024-04-16 20:45:09.747603] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.864 pt1 00:07:18.864 20:45:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:18.864 20:45:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:18.864 20:45:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:18.864 20:45:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:18.864 20:45:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:18.865 20:45:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.865 20:45:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.865 20:45:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.865 20:45:09 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:18.865 malloc2 00:07:18.865 20:45:09 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.152 [2024-04-16 20:45:10.122258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.152 [2024-04-16 20:45:10.122307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.152 [2024-04-16 20:45:10.122331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c206c80 00:07:19.152 [2024-04-16 20:45:10.122337] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.152 [2024-04-16 20:45:10.122887] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.152 [2024-04-16 20:45:10.122914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.152 pt2 00:07:19.152 20:45:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:19.152 20:45:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:19.152 20:45:10 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:19.416 [2024-04-16 20:45:10.290268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.416 [2024-04-16 20:45:10.290700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.416 [2024-04-16 20:45:10.290766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c206f00 00:07:19.416 [2024-04-16 20:45:10.290771] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:19.416 [2024-04-16 20:45:10.290804] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c269e20 00:07:19.416 [2024-04-16 20:45:10.290857] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c206f00 00:07:19.416 [2024-04-16 20:45:10.290860] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c206f00 00:07:19.416 [2024-04-16 20:45:10.290879] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:19.416 "name": "raid_bdev1", 00:07:19.416 "uuid": "37639943-fc32-11ee-80f8-ef3e42bb1492", 00:07:19.416 "strip_size_kb": 0, 00:07:19.416 "state": "online", 00:07:19.416 "raid_level": "raid1", 00:07:19.416 "superblock": true, 00:07:19.416 "num_base_bdevs": 2, 00:07:19.416 "num_base_bdevs_discovered": 2, 00:07:19.416 "num_base_bdevs_operational": 2, 00:07:19.416 "base_bdevs_list": [ 00:07:19.416 { 00:07:19.416 "name": "pt1", 00:07:19.416 "uuid": "f29a3ffa-8a63-cd58-8bf2-fc6d6dd12b56", 00:07:19.416 "is_configured": true, 00:07:19.416 "data_offset": 2048, 00:07:19.416 "data_size": 63488 00:07:19.416 }, 00:07:19.416 { 00:07:19.416 "name": "pt2", 00:07:19.416 "uuid": "9a916f4a-a38f-ed53-88fc-3bff7e5af00e", 00:07:19.416 "is_configured": true, 00:07:19.416 "data_offset": 2048, 00:07:19.416 "data_size": 63488 00:07:19.416 } 00:07:19.416 ] 00:07:19.416 }' 00:07:19.416 20:45:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:19.416 20:45:10 -- common/autotest_common.sh@10 -- # set +x 00:07:19.981 20:45:10 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:19.981 20:45:10 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:19.981 [2024-04-16 20:45:10.946294] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.981 20:45:10 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=37639943-fc32-11ee-80f8-ef3e42bb1492 00:07:19.981 20:45:10 -- bdev/bdev_raid.sh@380 -- # '[' -z 37639943-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:19.981 20:45:10 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:20.239 [2024-04-16 20:45:11.118270] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.239 [2024-04-16 20:45:11.118293] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.239 [2024-04-16 20:45:11.118314] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.239 [2024-04-16 
20:45:11.118327] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.239 [2024-04-16 20:45:11.118330] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c206f00 name raid_bdev1, state offline 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.239 20:45:11 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:20.497 20:45:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.497 20:45:11 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:20.755 20:45:11 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:20.755 20:45:11 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:21.013 20:45:11 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:21.013 20:45:11 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:21.013 20:45:11 -- common/autotest_common.sh@640 -- # local es=0 00:07:21.013 20:45:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:21.013 20:45:11 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.013 20:45:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:21.013 20:45:11 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.013 20:45:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:21.013 20:45:11 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.013 20:45:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:21.013 20:45:11 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.013 20:45:11 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:21.013 20:45:11 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:21.013 [2024-04-16 20:45:12.038319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:21.013 [2024-04-16 20:45:12.038775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:21.013 [2024-04-16 20:45:12.038801] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:21.013 [2024-04-16 20:45:12.038849] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:21.013 [2024-04-16 20:45:12.038857] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.013 [2024-04-16 20:45:12.038861] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c206c80 name raid_bdev1, state configuring 00:07:21.013 request: 00:07:21.013 { 00:07:21.013 "name": "raid_bdev1", 00:07:21.013 "raid_level": "raid1", 00:07:21.013 "base_bdevs": [ 00:07:21.013 "malloc1", 00:07:21.013 "malloc2" 00:07:21.013 ], 00:07:21.013 "superblock": false, 00:07:21.013 "method": "bdev_raid_create", 00:07:21.013 "req_id": 1 00:07:21.013 } 00:07:21.013 Got JSON-RPC error response 00:07:21.013 response: 00:07:21.013 { 00:07:21.013 "code": -17, 00:07:21.013 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:21.013 } 00:07:21.013 20:45:12 -- common/autotest_common.sh@643 -- # es=1 00:07:21.013 20:45:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:21.013 20:45:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:21.013 20:45:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:21.013 20:45:12 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.013 20:45:12 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:21.271 20:45:12 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:21.271 20:45:12 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:21.271 20:45:12 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.529 [2024-04-16 20:45:12.398312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.529 [2024-04-16 20:45:12.398381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.529 [2024-04-16 20:45:12.398406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c206780 00:07:21.529 [2024-04-16 20:45:12.398412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.529 [2024-04-16 20:45:12.398912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.529 [2024-04-16 20:45:12.398935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.529 [2024-04-16 20:45:12.398957] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:21.529 [2024-04-16 20:45:12.398966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.529 pt1 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.529 20:45:12 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.529 20:45:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:21.529 "name": "raid_bdev1", 00:07:21.529 "uuid": "37639943-fc32-11ee-80f8-ef3e42bb1492", 00:07:21.529 "strip_size_kb": 0, 00:07:21.530 "state": "configuring", 00:07:21.530 "raid_level": "raid1", 00:07:21.530 "superblock": true, 00:07:21.530 "num_base_bdevs": 2, 00:07:21.530 "num_base_bdevs_discovered": 1, 00:07:21.530 "num_base_bdevs_operational": 2, 00:07:21.530 "base_bdevs_list": [ 00:07:21.530 { 00:07:21.530 "name": "pt1", 00:07:21.530 "uuid": "f29a3ffa-8a63-cd58-8bf2-fc6d6dd12b56", 00:07:21.530 "is_configured": true, 00:07:21.530 "data_offset": 2048, 00:07:21.530 "data_size": 63488 00:07:21.530 }, 00:07:21.530 { 00:07:21.530 "name": null, 00:07:21.530 "uuid": "9a916f4a-a38f-ed53-88fc-3bff7e5af00e", 00:07:21.530 "is_configured": false, 00:07:21.530 "data_offset": 2048, 00:07:21.530 "data_size": 63488 00:07:21.530 } 00:07:21.530 ] 00:07:21.530 }' 00:07:21.530 20:45:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:21.530 20:45:12 -- common/autotest_common.sh@10 -- # set +x 00:07:21.788 20:45:12 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:07:21.788 20:45:12 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:21.788 20:45:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:21.788 20:45:12 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:22.047 [2024-04-16 20:45:13.030328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.047 [2024-04-16 20:45:13.030376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.047 [2024-04-16 20:45:13.030400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c206f00 00:07:22.047 [2024-04-16 20:45:13.030406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.047 [2024-04-16 20:45:13.030492] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.047 [2024-04-16 20:45:13.030498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:22.047 [2024-04-16 20:45:13.030515] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:22.047 [2024-04-16 20:45:13.030521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.047 [2024-04-16 20:45:13.030541] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c207180 00:07:22.047 [2024-04-16 20:45:13.030543] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:22.047 [2024-04-16 20:45:13.030557] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c269e20 00:07:22.047 [2024-04-16 20:45:13.030592] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c207180 00:07:22.047 [2024-04-16 20:45:13.030594] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c207180 00:07:22.047 [2024-04-16 20:45:13.030612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.047 pt2 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.047 20:45:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.305 20:45:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:22.305 "name": "raid_bdev1", 00:07:22.305 "uuid": "37639943-fc32-11ee-80f8-ef3e42bb1492", 00:07:22.305 "strip_size_kb": 0, 00:07:22.306 "state": "online", 00:07:22.306 "raid_level": "raid1", 00:07:22.306 "superblock": true, 00:07:22.306 "num_base_bdevs": 2, 00:07:22.306 "num_base_bdevs_discovered": 2, 00:07:22.306 "num_base_bdevs_operational": 2, 00:07:22.306 "base_bdevs_list": [ 00:07:22.306 { 00:07:22.306 "name": "pt1", 00:07:22.306 "uuid": "f29a3ffa-8a63-cd58-8bf2-fc6d6dd12b56", 00:07:22.306 "is_configured": true, 00:07:22.306 "data_offset": 2048, 00:07:22.306 "data_size": 63488 00:07:22.306 }, 00:07:22.306 { 00:07:22.306 "name": "pt2", 00:07:22.306 "uuid": "9a916f4a-a38f-ed53-88fc-3bff7e5af00e", 00:07:22.306 "is_configured": true, 00:07:22.306 "data_offset": 2048, 00:07:22.306 "data_size": 63488 00:07:22.306 } 00:07:22.306 ] 00:07:22.306 }' 00:07:22.306 20:45:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:22.306 20:45:13 -- common/autotest_common.sh@10 -- # set +x 00:07:22.564 20:45:13 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:22.565 20:45:13 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:22.565 [2024-04-16 20:45:13.650367] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.565 20:45:13 -- bdev/bdev_raid.sh@430 -- # '[' 37639943-fc32-11ee-80f8-ef3e42bb1492 '!=' 37639943-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:22.565 20:45:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:22.822 [2024-04-16 20:45:13.842353] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:22.822 20:45:13 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.822 20:45:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.081 20:45:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:23.081 "name": "raid_bdev1", 00:07:23.081 "uuid": "37639943-fc32-11ee-80f8-ef3e42bb1492", 00:07:23.081 "strip_size_kb": 0, 00:07:23.081 "state": "online", 00:07:23.081 "raid_level": "raid1", 00:07:23.081 "superblock": true, 00:07:23.081 "num_base_bdevs": 2, 00:07:23.081 "num_base_bdevs_discovered": 1, 00:07:23.081 "num_base_bdevs_operational": 1, 00:07:23.081 "base_bdevs_list": [ 00:07:23.081 { 00:07:23.081 "name": null, 00:07:23.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.081 "is_configured": false, 00:07:23.081 "data_offset": 2048, 00:07:23.081 "data_size": 63488 00:07:23.081 }, 00:07:23.081 { 00:07:23.081 "name": "pt2", 00:07:23.081 "uuid": "9a916f4a-a38f-ed53-88fc-3bff7e5af00e", 00:07:23.081 "is_configured": true, 00:07:23.081 "data_offset": 2048, 00:07:23.081 "data_size": 63488 00:07:23.081 } 00:07:23.081 ] 00:07:23.081 }' 00:07:23.081 20:45:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:23.081 20:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.337 20:45:14 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:23.593 [2024-04-16 20:45:14.474346] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.593 [2024-04-16 20:45:14.474368] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.593 [2024-04-16 20:45:14.474403] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.593 [2024-04-16 20:45:14.474413] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.593 [2024-04-16 20:45:14.474416] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c207180 name raid_bdev1, state offline 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:07:23.593 20:45:14 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@462 -- # i=1 00:07:23.851 20:45:14 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.109 [2024-04-16 
20:45:15.030370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.109 [2024-04-16 20:45:15.030423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.109 [2024-04-16 20:45:15.030448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c206f00 00:07:24.109 [2024-04-16 20:45:15.030454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.109 [2024-04-16 20:45:15.030974] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.109 [2024-04-16 20:45:15.031001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.109 [2024-04-16 20:45:15.031023] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:24.109 [2024-04-16 20:45:15.031031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.109 [2024-04-16 20:45:15.031049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c207180 00:07:24.109 [2024-04-16 20:45:15.031064] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.109 [2024-04-16 20:45:15.031081] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c269e20 00:07:24.109 [2024-04-16 20:45:15.031113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c207180 00:07:24.109 [2024-04-16 20:45:15.031116] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c207180 00:07:24.109 [2024-04-16 20:45:15.031133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.109 pt2 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.109 20:45:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.366 20:45:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:24.366 "name": "raid_bdev1", 00:07:24.366 "uuid": "37639943-fc32-11ee-80f8-ef3e42bb1492", 00:07:24.367 "strip_size_kb": 0, 00:07:24.367 "state": "online", 00:07:24.367 "raid_level": "raid1", 00:07:24.367 "superblock": true, 00:07:24.367 "num_base_bdevs": 2, 00:07:24.367 "num_base_bdevs_discovered": 1, 00:07:24.367 "num_base_bdevs_operational": 1, 00:07:24.367 "base_bdevs_list": [ 00:07:24.367 { 00:07:24.367 "name": null, 00:07:24.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.367 "is_configured": false, 00:07:24.367 "data_offset": 2048, 00:07:24.367 "data_size": 63488 00:07:24.367 }, 00:07:24.367 { 00:07:24.367 "name": "pt2", 00:07:24.367 "uuid": "9a916f4a-a38f-ed53-88fc-3bff7e5af00e", 
00:07:24.367 "is_configured": true, 00:07:24.367 "data_offset": 2048, 00:07:24.367 "data_size": 63488 00:07:24.367 } 00:07:24.367 ] 00:07:24.367 }' 00:07:24.367 20:45:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:24.367 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.624 20:45:15 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:07:24.624 20:45:15 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:24.624 20:45:15 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:07:24.624 [2024-04-16 20:45:15.666455] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.624 20:45:15 -- bdev/bdev_raid.sh@506 -- # '[' 37639943-fc32-11ee-80f8-ef3e42bb1492 '!=' 37639943-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:24.624 20:45:15 -- bdev/bdev_raid.sh@511 -- # killprocess 49078 00:07:24.624 20:45:15 -- common/autotest_common.sh@926 -- # '[' -z 49078 ']' 00:07:24.624 20:45:15 -- common/autotest_common.sh@930 -- # kill -0 49078 00:07:24.624 20:45:15 -- common/autotest_common.sh@931 -- # uname 00:07:24.624 20:45:15 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:24.624 20:45:15 -- common/autotest_common.sh@934 -- # ps -c -o command 49078 00:07:24.624 20:45:15 -- common/autotest_common.sh@934 -- # tail -1 00:07:24.624 20:45:15 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:24.624 20:45:15 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:24.624 killing process with pid 49078 00:07:24.624 20:45:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49078' 00:07:24.624 20:45:15 -- common/autotest_common.sh@945 -- # kill 49078 00:07:24.624 [2024-04-16 20:45:15.698926] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.624 [2024-04-16 20:45:15.698962] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.624 [2024-04-16 20:45:15.698973] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.624 [2024-04-16 20:45:15.698977] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c207180 name raid_bdev1, state offline 00:07:24.624 20:45:15 -- common/autotest_common.sh@950 -- # wait 49078 00:07:24.624 [2024-04-16 20:45:15.708299] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:24.883 00:07:24.883 real 0m7.358s 00:07:24.883 user 0m12.568s 00:07:24.883 sys 0m1.488s 00:07:24.883 20:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.883 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.883 ************************************ 00:07:24.883 END TEST raid_superblock_test 00:07:24.883 ************************************ 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:24.883 20:45:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:24.883 20:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.883 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.883 ************************************ 00:07:24.883 START TEST raid_state_function_test 00:07:24.883 ************************************ 00:07:24.883 20:45:15 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=49293 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49293' 00:07:24.883 Process raid pid: 49293 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:24.883 20:45:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49293 /var/tmp/spdk-raid.sock 00:07:24.883 20:45:15 -- common/autotest_common.sh@819 -- # '[' -z 49293 ']' 00:07:24.883 20:45:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:24.883 20:45:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:24.883 20:45:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:24.883 20:45:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:24.883 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.883 [2024-04-16 20:45:15.920253] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
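Everything below is driven over the UNIX-domain RPC socket that bdev_svc was just started on. A minimal sketch of the wait-then-call pattern the harness relies on, assuming only the stock scripts/rpc.py client and the socket path shown in the trace (the real waitforlisten helper in autotest_common.sh is more elaborate):

#!/usr/bin/env bash
RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# Poll until the freshly spawned bdev_svc answers on the socket.
until $RPC rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# From here on every test step is a plain RPC call, e.g. the create used in this test:
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid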
00:07:24.883 [2024-04-16 20:45:15.920593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:25.447 EAL: TSC is not safe to use in SMP mode 00:07:25.447 EAL: TSC is not invariant 00:07:25.447 [2024-04-16 20:45:16.343092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.447 [2024-04-16 20:45:16.424729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.447 [2024-04-16 20:45:16.425150] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.447 [2024-04-16 20:45:16.425159] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.711 20:45:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.711 20:45:16 -- common/autotest_common.sh@852 -- # return 0 00:07:25.711 20:45:16 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:25.977 [2024-04-16 20:45:16.976521] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.977 [2024-04-16 20:45:16.976582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.977 [2024-04-16 20:45:16.976586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.977 [2024-04-16 20:45:16.976592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.977 [2024-04-16 20:45:16.976595] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:25.977 [2024-04-16 20:45:16.976600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.977 20:45:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.236 20:45:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:26.236 "name": "Existed_Raid", 00:07:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.236 "strip_size_kb": 64, 00:07:26.236 "state": "configuring", 00:07:26.236 "raid_level": "raid0", 00:07:26.236 "superblock": false, 00:07:26.236 "num_base_bdevs": 3, 00:07:26.236 "num_base_bdevs_discovered": 0, 00:07:26.236 "num_base_bdevs_operational": 3, 00:07:26.236 "base_bdevs_list": [ 00:07:26.236 { 00:07:26.236 "name": "BaseBdev1", 00:07:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.236 "is_configured": false, 00:07:26.236 "data_offset": 0, 00:07:26.236 
"data_size": 0 00:07:26.236 }, 00:07:26.236 { 00:07:26.236 "name": "BaseBdev2", 00:07:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.236 "is_configured": false, 00:07:26.236 "data_offset": 0, 00:07:26.236 "data_size": 0 00:07:26.236 }, 00:07:26.236 { 00:07:26.236 "name": "BaseBdev3", 00:07:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.236 "is_configured": false, 00:07:26.236 "data_offset": 0, 00:07:26.236 "data_size": 0 00:07:26.236 } 00:07:26.236 ] 00:07:26.236 }' 00:07:26.236 20:45:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:26.236 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:26.495 20:45:17 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:26.753 [2024-04-16 20:45:17.604876] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.753 [2024-04-16 20:45:17.604899] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a9500 name Existed_Raid, state configuring 00:07:26.753 20:45:17 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:26.753 [2024-04-16 20:45:17.788989] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.753 [2024-04-16 20:45:17.789048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.753 [2024-04-16 20:45:17.789052] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.753 [2024-04-16 20:45:17.789058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.753 [2024-04-16 20:45:17.789061] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:26.753 [2024-04-16 20:45:17.789067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:26.753 20:45:17 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.011 [2024-04-16 20:45:17.977881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.011 BaseBdev1 00:07:27.011 20:45:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:27.011 20:45:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:27.011 20:45:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:27.011 20:45:17 -- common/autotest_common.sh@889 -- # local i 00:07:27.011 20:45:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:27.011 20:45:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:27.011 20:45:17 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:27.269 20:45:18 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.269 [ 00:07:27.269 { 00:07:27.269 "name": "BaseBdev1", 00:07:27.269 "aliases": [ 00:07:27.269 "3bf8853a-fc32-11ee-80f8-ef3e42bb1492" 00:07:27.269 ], 00:07:27.269 "product_name": "Malloc disk", 00:07:27.269 "block_size": 512, 00:07:27.269 "num_blocks": 65536, 00:07:27.269 "uuid": "3bf8853a-fc32-11ee-80f8-ef3e42bb1492", 00:07:27.269 "assigned_rate_limits": { 00:07:27.269 
"rw_ios_per_sec": 0, 00:07:27.269 "rw_mbytes_per_sec": 0, 00:07:27.269 "r_mbytes_per_sec": 0, 00:07:27.269 "w_mbytes_per_sec": 0 00:07:27.269 }, 00:07:27.269 "claimed": true, 00:07:27.269 "claim_type": "exclusive_write", 00:07:27.269 "zoned": false, 00:07:27.269 "supported_io_types": { 00:07:27.269 "read": true, 00:07:27.269 "write": true, 00:07:27.269 "unmap": true, 00:07:27.269 "write_zeroes": true, 00:07:27.269 "flush": true, 00:07:27.269 "reset": true, 00:07:27.269 "compare": false, 00:07:27.269 "compare_and_write": false, 00:07:27.269 "abort": true, 00:07:27.269 "nvme_admin": false, 00:07:27.269 "nvme_io": false 00:07:27.269 }, 00:07:27.269 "memory_domains": [ 00:07:27.269 { 00:07:27.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.269 "dma_device_type": 2 00:07:27.269 } 00:07:27.269 ], 00:07:27.269 "driver_specific": {} 00:07:27.269 } 00:07:27.269 ] 00:07:27.269 20:45:18 -- common/autotest_common.sh@895 -- # return 0 00:07:27.269 20:45:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.269 20:45:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:27.269 20:45:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:27.269 20:45:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.270 20:45:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.528 20:45:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:27.528 "name": "Existed_Raid", 00:07:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.528 "strip_size_kb": 64, 00:07:27.528 "state": "configuring", 00:07:27.528 "raid_level": "raid0", 00:07:27.528 "superblock": false, 00:07:27.528 "num_base_bdevs": 3, 00:07:27.528 "num_base_bdevs_discovered": 1, 00:07:27.528 "num_base_bdevs_operational": 3, 00:07:27.528 "base_bdevs_list": [ 00:07:27.528 { 00:07:27.528 "name": "BaseBdev1", 00:07:27.528 "uuid": "3bf8853a-fc32-11ee-80f8-ef3e42bb1492", 00:07:27.528 "is_configured": true, 00:07:27.528 "data_offset": 0, 00:07:27.528 "data_size": 65536 00:07:27.528 }, 00:07:27.528 { 00:07:27.528 "name": "BaseBdev2", 00:07:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.528 "is_configured": false, 00:07:27.528 "data_offset": 0, 00:07:27.528 "data_size": 0 00:07:27.528 }, 00:07:27.528 { 00:07:27.528 "name": "BaseBdev3", 00:07:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.528 "is_configured": false, 00:07:27.528 "data_offset": 0, 00:07:27.528 "data_size": 0 00:07:27.528 } 00:07:27.528 ] 00:07:27.528 }' 00:07:27.528 20:45:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:27.528 20:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 20:45:18 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:28.045 [2024-04-16 20:45:18.969703] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:28.045 [2024-04-16 20:45:18.969730] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a9500 name Existed_Raid, state configuring 00:07:28.045 20:45:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:28.045 20:45:18 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:28.045 [2024-04-16 20:45:19.125818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.045 [2024-04-16 20:45:19.126418] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.045 [2024-04-16 20:45:19.126454] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.045 [2024-04-16 20:45:19.126457] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.045 [2024-04-16 20:45:19.126464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.045 20:45:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.304 20:45:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:28.304 "name": "Existed_Raid", 00:07:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.304 "strip_size_kb": 64, 00:07:28.304 "state": "configuring", 00:07:28.304 "raid_level": "raid0", 00:07:28.304 "superblock": false, 00:07:28.304 "num_base_bdevs": 3, 00:07:28.304 "num_base_bdevs_discovered": 1, 00:07:28.304 "num_base_bdevs_operational": 3, 00:07:28.304 "base_bdevs_list": [ 00:07:28.304 { 00:07:28.304 "name": "BaseBdev1", 00:07:28.304 "uuid": "3bf8853a-fc32-11ee-80f8-ef3e42bb1492", 00:07:28.304 "is_configured": true, 00:07:28.304 "data_offset": 0, 00:07:28.304 "data_size": 65536 00:07:28.304 }, 00:07:28.304 { 00:07:28.304 "name": "BaseBdev2", 00:07:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.304 "is_configured": false, 00:07:28.304 "data_offset": 0, 00:07:28.304 "data_size": 0 00:07:28.304 }, 00:07:28.304 { 00:07:28.304 "name": "BaseBdev3", 00:07:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.304 "is_configured": false, 00:07:28.304 "data_offset": 0, 00:07:28.304 "data_size": 0 00:07:28.304 } 00:07:28.304 ] 00:07:28.304 }' 00:07:28.304 20:45:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:28.304 
20:45:19 -- common/autotest_common.sh@10 -- # set +x 00:07:28.563 20:45:19 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.821 [2024-04-16 20:45:19.742281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.821 BaseBdev2 00:07:28.821 20:45:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:28.821 20:45:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:28.822 20:45:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:28.822 20:45:19 -- common/autotest_common.sh@889 -- # local i 00:07:28.822 20:45:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:28.822 20:45:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:28.822 20:45:19 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:29.080 20:45:19 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.080 [ 00:07:29.080 { 00:07:29.080 "name": "BaseBdev2", 00:07:29.080 "aliases": [ 00:07:29.080 "3d05d8ed-fc32-11ee-80f8-ef3e42bb1492" 00:07:29.080 ], 00:07:29.080 "product_name": "Malloc disk", 00:07:29.080 "block_size": 512, 00:07:29.080 "num_blocks": 65536, 00:07:29.080 "uuid": "3d05d8ed-fc32-11ee-80f8-ef3e42bb1492", 00:07:29.080 "assigned_rate_limits": { 00:07:29.080 "rw_ios_per_sec": 0, 00:07:29.080 "rw_mbytes_per_sec": 0, 00:07:29.080 "r_mbytes_per_sec": 0, 00:07:29.080 "w_mbytes_per_sec": 0 00:07:29.080 }, 00:07:29.080 "claimed": true, 00:07:29.080 "claim_type": "exclusive_write", 00:07:29.080 "zoned": false, 00:07:29.080 "supported_io_types": { 00:07:29.080 "read": true, 00:07:29.080 "write": true, 00:07:29.080 "unmap": true, 00:07:29.080 "write_zeroes": true, 00:07:29.080 "flush": true, 00:07:29.080 "reset": true, 00:07:29.080 "compare": false, 00:07:29.080 "compare_and_write": false, 00:07:29.080 "abort": true, 00:07:29.080 "nvme_admin": false, 00:07:29.080 "nvme_io": false 00:07:29.080 }, 00:07:29.080 "memory_domains": [ 00:07:29.080 { 00:07:29.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.080 "dma_device_type": 2 00:07:29.080 } 00:07:29.080 ], 00:07:29.080 "driver_specific": {} 00:07:29.080 } 00:07:29.080 ] 00:07:29.080 20:45:20 -- common/autotest_common.sh@895 -- # return 0 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:29.080 20:45:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.339 20:45:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:29.339 "name": "Existed_Raid", 00:07:29.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.339 "strip_size_kb": 64, 00:07:29.339 "state": "configuring", 00:07:29.339 "raid_level": "raid0", 00:07:29.339 "superblock": false, 00:07:29.339 "num_base_bdevs": 3, 00:07:29.339 "num_base_bdevs_discovered": 2, 00:07:29.339 "num_base_bdevs_operational": 3, 00:07:29.339 "base_bdevs_list": [ 00:07:29.339 { 00:07:29.339 "name": "BaseBdev1", 00:07:29.339 "uuid": "3bf8853a-fc32-11ee-80f8-ef3e42bb1492", 00:07:29.339 "is_configured": true, 00:07:29.339 "data_offset": 0, 00:07:29.339 "data_size": 65536 00:07:29.339 }, 00:07:29.339 { 00:07:29.339 "name": "BaseBdev2", 00:07:29.339 "uuid": "3d05d8ed-fc32-11ee-80f8-ef3e42bb1492", 00:07:29.339 "is_configured": true, 00:07:29.339 "data_offset": 0, 00:07:29.339 "data_size": 65536 00:07:29.339 }, 00:07:29.339 { 00:07:29.339 "name": "BaseBdev3", 00:07:29.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.339 "is_configured": false, 00:07:29.339 "data_offset": 0, 00:07:29.339 "data_size": 0 00:07:29.339 } 00:07:29.339 ] 00:07:29.339 }' 00:07:29.339 20:45:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:29.339 20:45:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.597 20:45:20 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:29.855 [2024-04-16 20:45:20.714857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:29.855 [2024-04-16 20:45:20.714877] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d5a9a00 00:07:29.855 [2024-04-16 20:45:20.714881] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:29.855 [2024-04-16 20:45:20.714897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d60cec0 00:07:29.855 [2024-04-16 20:45:20.714969] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d5a9a00 00:07:29.855 [2024-04-16 20:45:20.714977] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d5a9a00 00:07:29.855 [2024-04-16 20:45:20.714999] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.855 BaseBdev3 00:07:29.855 20:45:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:29.855 20:45:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:29.855 20:45:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:29.855 20:45:20 -- common/autotest_common.sh@889 -- # local i 00:07:29.855 20:45:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:29.855 20:45:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:29.855 20:45:20 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:29.855 20:45:20 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:30.114 [ 00:07:30.114 { 00:07:30.114 "name": "BaseBdev3", 00:07:30.114 "aliases": [ 00:07:30.114 "3d9a407b-fc32-11ee-80f8-ef3e42bb1492" 00:07:30.114 ], 00:07:30.114 "product_name": "Malloc disk", 00:07:30.114 "block_size": 512, 00:07:30.114 "num_blocks": 
65536, 00:07:30.114 "uuid": "3d9a407b-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.114 "assigned_rate_limits": { 00:07:30.114 "rw_ios_per_sec": 0, 00:07:30.114 "rw_mbytes_per_sec": 0, 00:07:30.114 "r_mbytes_per_sec": 0, 00:07:30.114 "w_mbytes_per_sec": 0 00:07:30.114 }, 00:07:30.114 "claimed": true, 00:07:30.114 "claim_type": "exclusive_write", 00:07:30.114 "zoned": false, 00:07:30.114 "supported_io_types": { 00:07:30.114 "read": true, 00:07:30.114 "write": true, 00:07:30.114 "unmap": true, 00:07:30.114 "write_zeroes": true, 00:07:30.114 "flush": true, 00:07:30.114 "reset": true, 00:07:30.114 "compare": false, 00:07:30.114 "compare_and_write": false, 00:07:30.114 "abort": true, 00:07:30.114 "nvme_admin": false, 00:07:30.114 "nvme_io": false 00:07:30.114 }, 00:07:30.114 "memory_domains": [ 00:07:30.114 { 00:07:30.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.114 "dma_device_type": 2 00:07:30.114 } 00:07:30.114 ], 00:07:30.114 "driver_specific": {} 00:07:30.114 } 00:07:30.114 ] 00:07:30.114 20:45:21 -- common/autotest_common.sh@895 -- # return 0 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.114 20:45:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.372 20:45:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:30.372 "name": "Existed_Raid", 00:07:30.372 "uuid": "3d9a4531-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.372 "strip_size_kb": 64, 00:07:30.372 "state": "online", 00:07:30.372 "raid_level": "raid0", 00:07:30.372 "superblock": false, 00:07:30.372 "num_base_bdevs": 3, 00:07:30.372 "num_base_bdevs_discovered": 3, 00:07:30.372 "num_base_bdevs_operational": 3, 00:07:30.372 "base_bdevs_list": [ 00:07:30.372 { 00:07:30.372 "name": "BaseBdev1", 00:07:30.372 "uuid": "3bf8853a-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.372 "is_configured": true, 00:07:30.372 "data_offset": 0, 00:07:30.372 "data_size": 65536 00:07:30.372 }, 00:07:30.372 { 00:07:30.372 "name": "BaseBdev2", 00:07:30.372 "uuid": "3d05d8ed-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.372 "is_configured": true, 00:07:30.372 "data_offset": 0, 00:07:30.372 "data_size": 65536 00:07:30.372 }, 00:07:30.372 { 00:07:30.372 "name": "BaseBdev3", 00:07:30.372 "uuid": "3d9a407b-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.372 "is_configured": true, 00:07:30.372 "data_offset": 0, 00:07:30.372 "data_size": 65536 00:07:30.372 } 00:07:30.372 ] 00:07:30.372 }' 00:07:30.372 20:45:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:30.372 20:45:21 -- common/autotest_common.sh@10 -- # set +x 
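The verify_raid_bdev_state helper that keeps reappearing above amounts to one bdev_raid_get_bdevs call plus jq assertions on the dumped JSON. A condensed sketch, using the field names visible in this log ($RPC as in the earlier sketch); the real helper also tracks num_base_bdevs_operational and is more defensive about transitional states:

info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Assert the top-level fields match the expected state, level, and strip size.
[ "$(jq -r .state <<<"$info")" = online ]
[ "$(jq -r .raid_level <<<"$info")" = raid0 ]
[ "$(jq -r .strip_size_kb <<<"$info")" = 64 ]
# Count only base bdevs that are actually configured.
[ "$(jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' <<<"$info")" -eq 3 ]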
00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:30.631 [2024-04-16 20:45:21.715353] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.631 [2024-04-16 20:45:21.715374] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.631 [2024-04-16 20:45:21.715387] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:30.631 20:45:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:30.889 20:45:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.889 20:45:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.889 20:45:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:30.889 "name": "Existed_Raid", 00:07:30.889 "uuid": "3d9a4531-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.889 "strip_size_kb": 64, 00:07:30.889 "state": "offline", 00:07:30.889 "raid_level": "raid0", 00:07:30.889 "superblock": false, 00:07:30.889 "num_base_bdevs": 3, 00:07:30.889 "num_base_bdevs_discovered": 2, 00:07:30.889 "num_base_bdevs_operational": 2, 00:07:30.889 "base_bdevs_list": [ 00:07:30.889 { 00:07:30.889 "name": null, 00:07:30.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.889 "is_configured": false, 00:07:30.889 "data_offset": 0, 00:07:30.889 "data_size": 65536 00:07:30.889 }, 00:07:30.889 { 00:07:30.889 "name": "BaseBdev2", 00:07:30.889 "uuid": "3d05d8ed-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.889 "is_configured": true, 00:07:30.889 "data_offset": 0, 00:07:30.889 "data_size": 65536 00:07:30.889 }, 00:07:30.889 { 00:07:30.889 "name": "BaseBdev3", 00:07:30.889 "uuid": "3d9a407b-fc32-11ee-80f8-ef3e42bb1492", 00:07:30.889 "is_configured": true, 00:07:30.889 "data_offset": 0, 00:07:30.889 "data_size": 65536 00:07:30.889 } 00:07:30.889 ] 00:07:30.889 }' 00:07:30.889 20:45:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:30.889 20:45:21 -- common/autotest_common.sh@10 -- # set +x 00:07:31.147 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:31.147 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:31.147 20:45:22 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.147 20:45:22 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:07:31.405 20:45:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:31.405 20:45:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.405 20:45:22 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:31.664 [2024-04-16 20:45:22.544433] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.664 20:45:22 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:31.922 [2024-04-16 20:45:22.885270] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:31.922 [2024-04-16 20:45:22.885297] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a9a00 name Existed_Raid, state offline 00:07:31.922 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:31.922 20:45:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:31.922 20:45:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.922 20:45:22 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@287 -- # killprocess 49293 00:07:32.182 20:45:23 -- common/autotest_common.sh@926 -- # '[' -z 49293 ']' 00:07:32.182 20:45:23 -- common/autotest_common.sh@930 -- # kill -0 49293 00:07:32.182 20:45:23 -- common/autotest_common.sh@931 -- # uname 00:07:32.182 20:45:23 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:32.182 20:45:23 -- common/autotest_common.sh@934 -- # ps -c -o command 49293 00:07:32.182 20:45:23 -- common/autotest_common.sh@934 -- # tail -1 00:07:32.182 20:45:23 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:32.182 20:45:23 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:32.182 killing process with pid 49293 00:07:32.182 20:45:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49293' 00:07:32.182 20:45:23 -- common/autotest_common.sh@945 -- # kill 49293 00:07:32.182 [2024-04-16 20:45:23.097405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.182 20:45:23 -- common/autotest_common.sh@950 -- # wait 49293 00:07:32.182 [2024-04-16 20:45:23.097446] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:32.182 00:07:32.182 real 0m7.335s 00:07:32.182 user 0m12.637s 00:07:32.182 sys 0m1.356s 00:07:32.182 20:45:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.182 20:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.182 ************************************ 00:07:32.182 END TEST raid_state_function_test 00:07:32.182 ************************************ 00:07:32.182 20:45:23 -- 
bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:32.182 20:45:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:32.182 20:45:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.182 20:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.182 ************************************ 00:07:32.182 START TEST raid_state_function_test_sb 00:07:32.182 ************************************ 00:07:32.182 20:45:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:32.182 20:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=49526 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49526' 00:07:32.442 Process raid pid: 49526 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:32.442 20:45:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49526 /var/tmp/spdk-raid.sock 00:07:32.442 20:45:23 -- common/autotest_common.sh@819 -- # '[' -z 49526 ']' 00:07:32.442 20:45:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:32.442 20:45:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:32.442 20:45:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
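The _sb variant starting here replays the same sequence with superblock=true, so the only change on the wire is the -s flag passed to bdev_raid_create. The effect is visible in the dumps that follow: each base bdev reserves 2048 blocks for the on-disk superblock, so data_offset moves from 0 to 2048 and data_size shrinks from 65536 to 63488. A minimal side-by-side, with names and sizes as in this log:

# Without superblock (previous test): data_offset 0, data_size 65536
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# With superblock (this test): data_offset 2048, data_size 63488
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid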
00:07:32.442 20:45:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.442 20:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.442 [2024-04-16 20:45:23.304493] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:32.442 [2024-04-16 20:45:23.304853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:32.703 EAL: TSC is not safe to use in SMP mode 00:07:32.703 EAL: TSC is not invariant 00:07:32.703 [2024-04-16 20:45:23.731861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.973 [2024-04-16 20:45:23.819793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.973 [2024-04-16 20:45:23.820201] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.973 [2024-04-16 20:45:23.820210] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.231 20:45:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.231 20:45:24 -- common/autotest_common.sh@852 -- # return 0 00:07:33.231 20:45:24 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:33.489 [2024-04-16 20:45:24.351515] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.489 [2024-04-16 20:45:24.351555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.489 [2024-04-16 20:45:24.351559] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.489 [2024-04-16 20:45:24.351566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.489 [2024-04-16 20:45:24.351568] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:33.489 [2024-04-16 20:45:24.351574] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:33.489 "name": "Existed_Raid", 00:07:33.489 "uuid": "3fc52ce3-fc32-11ee-80f8-ef3e42bb1492", 00:07:33.489 "strip_size_kb": 64, 00:07:33.489 "state": "configuring", 00:07:33.489 "raid_level": "raid0", 00:07:33.489 "superblock": true, 00:07:33.489 "num_base_bdevs": 3, 00:07:33.489 "num_base_bdevs_discovered": 0, 
00:07:33.489 "num_base_bdevs_operational": 3, 00:07:33.489 "base_bdevs_list": [ 00:07:33.489 { 00:07:33.489 "name": "BaseBdev1", 00:07:33.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.489 "is_configured": false, 00:07:33.489 "data_offset": 0, 00:07:33.489 "data_size": 0 00:07:33.489 }, 00:07:33.489 { 00:07:33.489 "name": "BaseBdev2", 00:07:33.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.489 "is_configured": false, 00:07:33.489 "data_offset": 0, 00:07:33.489 "data_size": 0 00:07:33.489 }, 00:07:33.489 { 00:07:33.489 "name": "BaseBdev3", 00:07:33.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.489 "is_configured": false, 00:07:33.489 "data_offset": 0, 00:07:33.489 "data_size": 0 00:07:33.489 } 00:07:33.489 ] 00:07:33.489 }' 00:07:33.489 20:45:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:33.489 20:45:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.747 20:45:24 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:34.006 [2024-04-16 20:45:24.979861] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.006 [2024-04-16 20:45:24.979886] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b90d500 name Existed_Raid, state configuring 00:07:34.006 20:45:24 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:34.264 [2024-04-16 20:45:25.163982] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.264 [2024-04-16 20:45:25.164027] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.264 [2024-04-16 20:45:25.164031] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.264 [2024-04-16 20:45:25.164037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.264 [2024-04-16 20:45:25.164040] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:34.264 [2024-04-16 20:45:25.164045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:34.264 20:45:25 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:34.264 [2024-04-16 20:45:25.344827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.264 BaseBdev1 00:07:34.264 20:45:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:34.264 20:45:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:34.264 20:45:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:34.264 20:45:25 -- common/autotest_common.sh@889 -- # local i 00:07:34.264 20:45:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:34.264 20:45:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:34.264 20:45:25 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:34.523 20:45:25 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:34.781 [ 00:07:34.781 { 00:07:34.781 "name": "BaseBdev1", 00:07:34.781 "aliases": [ 00:07:34.781 
"405ca123-fc32-11ee-80f8-ef3e42bb1492" 00:07:34.781 ], 00:07:34.781 "product_name": "Malloc disk", 00:07:34.781 "block_size": 512, 00:07:34.781 "num_blocks": 65536, 00:07:34.781 "uuid": "405ca123-fc32-11ee-80f8-ef3e42bb1492", 00:07:34.781 "assigned_rate_limits": { 00:07:34.781 "rw_ios_per_sec": 0, 00:07:34.781 "rw_mbytes_per_sec": 0, 00:07:34.781 "r_mbytes_per_sec": 0, 00:07:34.781 "w_mbytes_per_sec": 0 00:07:34.781 }, 00:07:34.781 "claimed": true, 00:07:34.781 "claim_type": "exclusive_write", 00:07:34.781 "zoned": false, 00:07:34.781 "supported_io_types": { 00:07:34.781 "read": true, 00:07:34.781 "write": true, 00:07:34.781 "unmap": true, 00:07:34.781 "write_zeroes": true, 00:07:34.781 "flush": true, 00:07:34.781 "reset": true, 00:07:34.781 "compare": false, 00:07:34.781 "compare_and_write": false, 00:07:34.781 "abort": true, 00:07:34.781 "nvme_admin": false, 00:07:34.781 "nvme_io": false 00:07:34.781 }, 00:07:34.781 "memory_domains": [ 00:07:34.781 { 00:07:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.781 "dma_device_type": 2 00:07:34.781 } 00:07:34.781 ], 00:07:34.781 "driver_specific": {} 00:07:34.781 } 00:07:34.781 ] 00:07:34.781 20:45:25 -- common/autotest_common.sh@895 -- # return 0 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:34.781 "name": "Existed_Raid", 00:07:34.781 "uuid": "404125e7-fc32-11ee-80f8-ef3e42bb1492", 00:07:34.781 "strip_size_kb": 64, 00:07:34.781 "state": "configuring", 00:07:34.781 "raid_level": "raid0", 00:07:34.781 "superblock": true, 00:07:34.781 "num_base_bdevs": 3, 00:07:34.781 "num_base_bdevs_discovered": 1, 00:07:34.781 "num_base_bdevs_operational": 3, 00:07:34.781 "base_bdevs_list": [ 00:07:34.781 { 00:07:34.781 "name": "BaseBdev1", 00:07:34.781 "uuid": "405ca123-fc32-11ee-80f8-ef3e42bb1492", 00:07:34.781 "is_configured": true, 00:07:34.781 "data_offset": 2048, 00:07:34.781 "data_size": 63488 00:07:34.781 }, 00:07:34.781 { 00:07:34.781 "name": "BaseBdev2", 00:07:34.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.781 "is_configured": false, 00:07:34.781 "data_offset": 0, 00:07:34.781 "data_size": 0 00:07:34.781 }, 00:07:34.781 { 00:07:34.781 "name": "BaseBdev3", 00:07:34.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.781 "is_configured": false, 00:07:34.781 "data_offset": 0, 00:07:34.781 "data_size": 0 00:07:34.781 } 00:07:34.781 ] 00:07:34.781 }' 00:07:34.781 20:45:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:34.781 20:45:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.040 20:45:26 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:35.298 [2024-04-16 20:45:26.288629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.298 [2024-04-16 20:45:26.288660] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b90d500 name Existed_Raid, state configuring 00:07:35.298 20:45:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:35.298 20:45:26 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:35.556 20:45:26 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:35.556 BaseBdev1 00:07:35.556 20:45:26 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:35.556 20:45:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:35.556 20:45:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:35.556 20:45:26 -- common/autotest_common.sh@889 -- # local i 00:07:35.556 20:45:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:35.556 20:45:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:35.556 20:45:26 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:35.815 20:45:26 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.073 [ 00:07:36.073 { 00:07:36.073 "name": "BaseBdev1", 00:07:36.073 "aliases": [ 00:07:36.073 "4123cfa0-fc32-11ee-80f8-ef3e42bb1492" 00:07:36.073 ], 00:07:36.073 "product_name": "Malloc disk", 00:07:36.073 "block_size": 512, 00:07:36.073 "num_blocks": 65536, 00:07:36.073 "uuid": "4123cfa0-fc32-11ee-80f8-ef3e42bb1492", 00:07:36.073 "assigned_rate_limits": { 00:07:36.073 "rw_ios_per_sec": 0, 00:07:36.073 "rw_mbytes_per_sec": 0, 00:07:36.073 "r_mbytes_per_sec": 0, 00:07:36.073 "w_mbytes_per_sec": 0 00:07:36.073 }, 00:07:36.073 "claimed": false, 00:07:36.073 "zoned": false, 00:07:36.073 "supported_io_types": { 00:07:36.073 "read": true, 00:07:36.073 "write": true, 00:07:36.073 "unmap": true, 00:07:36.073 "write_zeroes": true, 00:07:36.073 "flush": true, 00:07:36.073 "reset": true, 00:07:36.073 "compare": false, 00:07:36.073 "compare_and_write": false, 00:07:36.073 "abort": true, 00:07:36.073 "nvme_admin": false, 00:07:36.073 "nvme_io": false 00:07:36.073 }, 00:07:36.073 "memory_domains": [ 00:07:36.073 { 00:07:36.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.073 "dma_device_type": 2 00:07:36.073 } 00:07:36.073 ], 00:07:36.073 "driver_specific": {} 00:07:36.073 } 00:07:36.073 ] 00:07:36.073 20:45:27 -- common/autotest_common.sh@895 -- # return 0 00:07:36.073 20:45:27 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:36.332 [2024-04-16 20:45:27.209784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.332 [2024-04-16 20:45:27.210229] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.332 [2024-04-16 20:45:27.210265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
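Note the pattern here: bdev_raid_create succeeds even though BaseBdev2 and BaseBdev3 do not exist yet, and the array simply parks in the "configuring" state with a single base bdev claimed. It only goes online once the missing bdevs are created and claimed, as the "is claimed" entries further down show. A sketch of that incremental bring-up, assuming the same malloc-backed base bdevs the test uses:

# Existed_Raid already exists and is waiting in "configuring".
for b in BaseBdev2 BaseBdev3; do
    $RPC bdev_malloc_create 32 512 -b "$b"   # the waiting raid claims each bdev as it appears
done
# -> "online" once all three base bdevs are claimed
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'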
00:07:36.332 [2024-04-16 20:45:27.210268] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.332 [2024-04-16 20:45:27.210275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:36.332 "name": "Existed_Raid", 00:07:36.332 "uuid": "41794fe2-fc32-11ee-80f8-ef3e42bb1492", 00:07:36.332 "strip_size_kb": 64, 00:07:36.332 "state": "configuring", 00:07:36.332 "raid_level": "raid0", 00:07:36.332 "superblock": true, 00:07:36.332 "num_base_bdevs": 3, 00:07:36.332 "num_base_bdevs_discovered": 1, 00:07:36.332 "num_base_bdevs_operational": 3, 00:07:36.332 "base_bdevs_list": [ 00:07:36.332 { 00:07:36.332 "name": "BaseBdev1", 00:07:36.332 "uuid": "4123cfa0-fc32-11ee-80f8-ef3e42bb1492", 00:07:36.332 "is_configured": true, 00:07:36.332 "data_offset": 2048, 00:07:36.332 "data_size": 63488 00:07:36.332 }, 00:07:36.332 { 00:07:36.332 "name": "BaseBdev2", 00:07:36.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.332 "is_configured": false, 00:07:36.332 "data_offset": 0, 00:07:36.332 "data_size": 0 00:07:36.332 }, 00:07:36.332 { 00:07:36.332 "name": "BaseBdev3", 00:07:36.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.332 "is_configured": false, 00:07:36.332 "data_offset": 0, 00:07:36.332 "data_size": 0 00:07:36.332 } 00:07:36.332 ] 00:07:36.332 }' 00:07:36.332 20:45:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:36.332 20:45:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.591 20:45:27 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:36.849 [2024-04-16 20:45:27.846264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.850 BaseBdev2 00:07:36.850 20:45:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:36.850 20:45:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:36.850 20:45:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:36.850 20:45:27 -- common/autotest_common.sh@889 -- # local i 00:07:36.850 20:45:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:36.850 20:45:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:36.850 
20:45:27 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:37.108 20:45:28 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.377 [ 00:07:37.377 { 00:07:37.377 "name": "BaseBdev2", 00:07:37.377 "aliases": [ 00:07:37.377 "41da6af6-fc32-11ee-80f8-ef3e42bb1492" 00:07:37.377 ], 00:07:37.377 "product_name": "Malloc disk", 00:07:37.377 "block_size": 512, 00:07:37.377 "num_blocks": 65536, 00:07:37.377 "uuid": "41da6af6-fc32-11ee-80f8-ef3e42bb1492", 00:07:37.377 "assigned_rate_limits": { 00:07:37.377 "rw_ios_per_sec": 0, 00:07:37.377 "rw_mbytes_per_sec": 0, 00:07:37.377 "r_mbytes_per_sec": 0, 00:07:37.377 "w_mbytes_per_sec": 0 00:07:37.377 }, 00:07:37.377 "claimed": true, 00:07:37.377 "claim_type": "exclusive_write", 00:07:37.377 "zoned": false, 00:07:37.377 "supported_io_types": { 00:07:37.377 "read": true, 00:07:37.377 "write": true, 00:07:37.377 "unmap": true, 00:07:37.377 "write_zeroes": true, 00:07:37.377 "flush": true, 00:07:37.377 "reset": true, 00:07:37.377 "compare": false, 00:07:37.377 "compare_and_write": false, 00:07:37.377 "abort": true, 00:07:37.377 "nvme_admin": false, 00:07:37.377 "nvme_io": false 00:07:37.377 }, 00:07:37.377 "memory_domains": [ 00:07:37.377 { 00:07:37.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.377 "dma_device_type": 2 00:07:37.377 } 00:07:37.377 ], 00:07:37.377 "driver_specific": {} 00:07:37.377 } 00:07:37.377 ] 00:07:37.377 20:45:28 -- common/autotest_common.sh@895 -- # return 0 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:37.377 "name": "Existed_Raid", 00:07:37.377 "uuid": "41794fe2-fc32-11ee-80f8-ef3e42bb1492", 00:07:37.377 "strip_size_kb": 64, 00:07:37.377 "state": "configuring", 00:07:37.377 "raid_level": "raid0", 00:07:37.377 "superblock": true, 00:07:37.377 "num_base_bdevs": 3, 00:07:37.377 "num_base_bdevs_discovered": 2, 00:07:37.377 "num_base_bdevs_operational": 3, 00:07:37.377 "base_bdevs_list": [ 00:07:37.377 { 00:07:37.377 "name": "BaseBdev1", 00:07:37.377 "uuid": "4123cfa0-fc32-11ee-80f8-ef3e42bb1492", 00:07:37.377 "is_configured": true, 00:07:37.377 "data_offset": 2048, 00:07:37.377 "data_size": 63488 00:07:37.377 }, 00:07:37.377 { 
00:07:37.377 "name": "BaseBdev2", 00:07:37.377 "uuid": "41da6af6-fc32-11ee-80f8-ef3e42bb1492", 00:07:37.377 "is_configured": true, 00:07:37.377 "data_offset": 2048, 00:07:37.377 "data_size": 63488 00:07:37.377 }, 00:07:37.377 { 00:07:37.377 "name": "BaseBdev3", 00:07:37.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.377 "is_configured": false, 00:07:37.377 "data_offset": 0, 00:07:37.377 "data_size": 0 00:07:37.377 } 00:07:37.377 ] 00:07:37.377 }' 00:07:37.377 20:45:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:37.377 20:45:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 20:45:28 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:37.894 [2024-04-16 20:45:28.858881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:37.894 [2024-04-16 20:45:28.858953] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b90da00 00:07:37.894 [2024-04-16 20:45:28.858958] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:37.894 [2024-04-16 20:45:28.858973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b970ec0 00:07:37.894 [2024-04-16 20:45:28.859007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b90da00 00:07:37.894 [2024-04-16 20:45:28.859010] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b90da00 00:07:37.894 [2024-04-16 20:45:28.859024] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.894 BaseBdev3 00:07:37.894 20:45:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:37.894 20:45:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:37.894 20:45:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:37.894 20:45:28 -- common/autotest_common.sh@889 -- # local i 00:07:37.894 20:45:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:37.894 20:45:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:37.894 20:45:28 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:38.153 20:45:29 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:38.153 [ 00:07:38.153 { 00:07:38.153 "name": "BaseBdev3", 00:07:38.153 "aliases": [ 00:07:38.153 "4274ee7e-fc32-11ee-80f8-ef3e42bb1492" 00:07:38.153 ], 00:07:38.153 "product_name": "Malloc disk", 00:07:38.153 "block_size": 512, 00:07:38.153 "num_blocks": 65536, 00:07:38.153 "uuid": "4274ee7e-fc32-11ee-80f8-ef3e42bb1492", 00:07:38.153 "assigned_rate_limits": { 00:07:38.153 "rw_ios_per_sec": 0, 00:07:38.153 "rw_mbytes_per_sec": 0, 00:07:38.153 "r_mbytes_per_sec": 0, 00:07:38.153 "w_mbytes_per_sec": 0 00:07:38.153 }, 00:07:38.153 "claimed": true, 00:07:38.153 "claim_type": "exclusive_write", 00:07:38.153 "zoned": false, 00:07:38.153 "supported_io_types": { 00:07:38.153 "read": true, 00:07:38.153 "write": true, 00:07:38.153 "unmap": true, 00:07:38.153 "write_zeroes": true, 00:07:38.153 "flush": true, 00:07:38.153 "reset": true, 00:07:38.153 "compare": false, 00:07:38.153 "compare_and_write": false, 00:07:38.153 "abort": true, 00:07:38.153 "nvme_admin": false, 00:07:38.153 "nvme_io": false 00:07:38.153 }, 00:07:38.153 "memory_domains": [ 00:07:38.153 { 00:07:38.153 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.153 "dma_device_type": 2 00:07:38.153 } 00:07:38.153 ], 00:07:38.153 "driver_specific": {} 00:07:38.153 } 00:07:38.153 ] 00:07:38.153 20:45:29 -- common/autotest_common.sh@895 -- # return 0 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.153 20:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.411 20:45:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:38.411 "name": "Existed_Raid", 00:07:38.411 "uuid": "41794fe2-fc32-11ee-80f8-ef3e42bb1492", 00:07:38.411 "strip_size_kb": 64, 00:07:38.411 "state": "online", 00:07:38.411 "raid_level": "raid0", 00:07:38.411 "superblock": true, 00:07:38.411 "num_base_bdevs": 3, 00:07:38.411 "num_base_bdevs_discovered": 3, 00:07:38.411 "num_base_bdevs_operational": 3, 00:07:38.411 "base_bdevs_list": [ 00:07:38.411 { 00:07:38.411 "name": "BaseBdev1", 00:07:38.411 "uuid": "4123cfa0-fc32-11ee-80f8-ef3e42bb1492", 00:07:38.411 "is_configured": true, 00:07:38.411 "data_offset": 2048, 00:07:38.411 "data_size": 63488 00:07:38.411 }, 00:07:38.411 { 00:07:38.411 "name": "BaseBdev2", 00:07:38.411 "uuid": "41da6af6-fc32-11ee-80f8-ef3e42bb1492", 00:07:38.411 "is_configured": true, 00:07:38.411 "data_offset": 2048, 00:07:38.411 "data_size": 63488 00:07:38.411 }, 00:07:38.411 { 00:07:38.411 "name": "BaseBdev3", 00:07:38.411 "uuid": "4274ee7e-fc32-11ee-80f8-ef3e42bb1492", 00:07:38.411 "is_configured": true, 00:07:38.411 "data_offset": 2048, 00:07:38.411 "data_size": 63488 00:07:38.411 } 00:07:38.411 ] 00:07:38.411 }' 00:07:38.411 20:45:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:38.411 20:45:29 -- common/autotest_common.sh@10 -- # set +x 00:07:38.669 20:45:29 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:38.927 [2024-04-16 20:45:29.827350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.927 [2024-04-16 20:45:29.827372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.927 [2024-04-16 20:45:29.827382] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:38.927 20:45:29 -- 
bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.927 20:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.185 20:45:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:39.185 "name": "Existed_Raid", 00:07:39.185 "uuid": "41794fe2-fc32-11ee-80f8-ef3e42bb1492", 00:07:39.185 "strip_size_kb": 64, 00:07:39.185 "state": "offline", 00:07:39.185 "raid_level": "raid0", 00:07:39.185 "superblock": true, 00:07:39.185 "num_base_bdevs": 3, 00:07:39.185 "num_base_bdevs_discovered": 2, 00:07:39.185 "num_base_bdevs_operational": 2, 00:07:39.185 "base_bdevs_list": [ 00:07:39.185 { 00:07:39.185 "name": null, 00:07:39.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.185 "is_configured": false, 00:07:39.185 "data_offset": 2048, 00:07:39.185 "data_size": 63488 00:07:39.185 }, 00:07:39.185 { 00:07:39.185 "name": "BaseBdev2", 00:07:39.185 "uuid": "41da6af6-fc32-11ee-80f8-ef3e42bb1492", 00:07:39.185 "is_configured": true, 00:07:39.185 "data_offset": 2048, 00:07:39.185 "data_size": 63488 00:07:39.185 }, 00:07:39.185 { 00:07:39.185 "name": "BaseBdev3", 00:07:39.185 "uuid": "4274ee7e-fc32-11ee-80f8-ef3e42bb1492", 00:07:39.185 "is_configured": true, 00:07:39.185 "data_offset": 2048, 00:07:39.185 "data_size": 63488 00:07:39.185 } 00:07:39.185 ] 00:07:39.185 }' 00:07:39.185 20:45:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:39.185 20:45:30 -- common/autotest_common.sh@10 -- # set +x 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.444 20:45:30 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:39.703 [2024-04-16 20:45:30.668480] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.703 20:45:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:39.703 20:45:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:39.703 20:45:30 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.703 20:45:30 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:07:39.993 20:45:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:39.993 20:45:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.993 20:45:30 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:39.993 [2024-04-16 20:45:30.993237] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:39.993 [2024-04-16 20:45:30.993277] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b90da00 name Existed_Raid, state offline 00:07:39.993 20:45:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:39.993 20:45:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:39.993 20:45:31 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.993 20:45:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.252 20:45:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:40.252 20:45:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:40.252 20:45:31 -- bdev/bdev_raid.sh@287 -- # killprocess 49526 00:07:40.252 20:45:31 -- common/autotest_common.sh@926 -- # '[' -z 49526 ']' 00:07:40.252 20:45:31 -- common/autotest_common.sh@930 -- # kill -0 49526 00:07:40.252 20:45:31 -- common/autotest_common.sh@931 -- # uname 00:07:40.252 20:45:31 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:40.252 20:45:31 -- common/autotest_common.sh@934 -- # tail -1 00:07:40.252 20:45:31 -- common/autotest_common.sh@934 -- # ps -c -o command 49526 00:07:40.252 20:45:31 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:40.252 20:45:31 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:40.252 killing process with pid 49526 00:07:40.252 20:45:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49526' 00:07:40.252 20:45:31 -- common/autotest_common.sh@945 -- # kill 49526 00:07:40.252 [2024-04-16 20:45:31.217210] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.252 [2024-04-16 20:45:31.217250] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.252 20:45:31 -- common/autotest_common.sh@950 -- # wait 49526 00:07:40.252 20:45:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:40.252 00:07:40.252 real 0m8.073s 00:07:40.252 user 0m13.951s 00:07:40.252 sys 0m1.476s 00:07:40.252 20:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.252 20:45:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.252 ************************************ 00:07:40.252 END TEST raid_state_function_test_sb 00:07:40.252 ************************************ 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:40.512 20:45:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:40.512 20:45:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.512 20:45:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.512 ************************************ 00:07:40.512 START TEST raid_superblock_test 00:07:40.512 ************************************ 00:07:40.512 20:45:31 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:40.512 20:45:31 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=49762 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49762 /var/tmp/spdk-raid.sock 00:07:40.512 20:45:31 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:40.512 20:45:31 -- common/autotest_common.sh@819 -- # '[' -z 49762 ']' 00:07:40.512 20:45:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:40.512 20:45:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:40.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:40.512 20:45:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:40.512 20:45:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:40.512 20:45:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.512 [2024-04-16 20:45:31.423518] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
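What follows is the bdev_svc app coming up under -L bdev_raid debug logging, with waitforlisten blocking until the RPC socket answers. A rough equivalent of that launch-and-wait step, using the paths shown in the trace (the readiness probe below is an assumption about waitforlisten's internals; rpc_get_methods is used here only as a cheap liveness check):

    # Sketch only -- start the RPC-only app and wait for its UNIX socket.
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Probe the socket until the target responds (the helper may probe differently).
    until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
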
00:07:40.512 [2024-04-16 20:45:31.423868] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:40.771 EAL: TSC is not safe to use in SMP mode 00:07:40.771 EAL: TSC is not invariant 00:07:40.771 [2024-04-16 20:45:31.854344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.029 [2024-04-16 20:45:31.943123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.029 [2024-04-16 20:45:31.943543] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.029 [2024-04-16 20:45:31.943553] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.288 20:45:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:41.288 20:45:32 -- common/autotest_common.sh@852 -- # return 0 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:41.288 20:45:32 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:41.546 malloc1 00:07:41.547 20:45:32 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.805 [2024-04-16 20:45:32.695051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.805 [2024-04-16 20:45:32.695109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.805 [2024-04-16 20:45:32.695626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8ff780 00:07:41.805 [2024-04-16 20:45:32.695644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.805 [2024-04-16 20:45:32.696351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.805 [2024-04-16 20:45:32.696378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.805 pt1 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:41.805 malloc2 00:07:41.805 20:45:32 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.063 [2024-04-16 20:45:33.071266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.063 [2024-04-16 20:45:33.071318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.063 [2024-04-16 20:45:33.071343] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8ffc80 00:07:42.063 [2024-04-16 20:45:33.071349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.063 [2024-04-16 20:45:33.071846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.063 [2024-04-16 20:45:33.071870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.063 pt2 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.063 20:45:33 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:07:42.321 malloc3 00:07:42.321 20:45:33 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:42.580 [2024-04-16 20:45:33.447483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:42.580 [2024-04-16 20:45:33.447528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.580 [2024-04-16 20:45:33.447552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c900180 00:07:42.580 [2024-04-16 20:45:33.447558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.580 [2024-04-16 20:45:33.448017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.580 [2024-04-16 20:45:33.448043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:42.580 pt3 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:07:42.580 [2024-04-16 20:45:33.631608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:42.580 [2024-04-16 20:45:33.632030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.580 [2024-04-16 20:45:33.632049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:42.580 [2024-04-16 20:45:33.632099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c900400 00:07:42.580 [2024-04-16 20:45:33.632104] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:42.580 [2024-04-16 20:45:33.632130] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c962e20 00:07:42.580 [2024-04-16 20:45:33.632183] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c900400 00:07:42.580 [2024-04-16 20:45:33.632186] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c900400 00:07:42.580 [2024-04-16 20:45:33.632206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.580 20:45:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.838 20:45:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:42.838 "name": "raid_bdev1", 00:07:42.838 "uuid": "454d346e-fc32-11ee-80f8-ef3e42bb1492", 00:07:42.838 "strip_size_kb": 64, 00:07:42.838 "state": "online", 00:07:42.839 "raid_level": "raid0", 00:07:42.839 "superblock": true, 00:07:42.839 "num_base_bdevs": 3, 00:07:42.839 "num_base_bdevs_discovered": 3, 00:07:42.839 "num_base_bdevs_operational": 3, 00:07:42.839 "base_bdevs_list": [ 00:07:42.839 { 00:07:42.839 "name": "pt1", 00:07:42.839 "uuid": "487fa639-b0b3-cf5b-91fd-57f00fc14cf0", 00:07:42.839 "is_configured": true, 00:07:42.839 "data_offset": 2048, 00:07:42.839 "data_size": 63488 00:07:42.839 }, 00:07:42.839 { 00:07:42.839 "name": "pt2", 00:07:42.839 "uuid": "52436266-de5c-015e-9310-419eb173f87a", 00:07:42.839 "is_configured": true, 00:07:42.839 "data_offset": 2048, 00:07:42.839 "data_size": 63488 00:07:42.839 }, 00:07:42.839 { 00:07:42.839 "name": "pt3", 00:07:42.839 "uuid": "015c54cb-df92-e055-af38-62426289f3e1", 00:07:42.839 "is_configured": true, 00:07:42.839 "data_offset": 2048, 00:07:42.839 "data_size": 63488 00:07:42.839 } 00:07:42.839 ] 00:07:42.839 }' 00:07:42.839 20:45:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:42.839 20:45:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 20:45:34 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:43.096 20:45:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:43.354 [2024-04-16 20:45:34.284018] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.354 20:45:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=454d346e-fc32-11ee-80f8-ef3e42bb1492 00:07:43.354 20:45:34 -- bdev/bdev_raid.sh@380 -- # '[' -z 454d346e-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:43.354 20:45:34 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:43.612 [2024-04-16 20:45:34.472087] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.612 [2024-04-16 20:45:34.472109] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.612 [2024-04-16 20:45:34.472129] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.612 [2024-04-16 20:45:34.472140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.612 [2024-04-16 20:45:34.472144] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c900400 name raid_bdev1, state offline 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.612 20:45:34 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:43.870 20:45:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.870 20:45:34 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:44.128 20:45:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.128 20:45:35 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:07:44.128 20:45:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:44.128 20:45:35 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:44.386 20:45:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:44.386 20:45:35 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:44.386 20:45:35 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.386 20:45:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:44.386 20:45:35 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.386 20:45:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.386 20:45:35 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.386 20:45:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.386 20:45:35 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.386 20:45:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.386 20:45:35 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.386 20:45:35 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:44.386 20:45:35 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:07:44.644 [2024-04-16 20:45:35.548709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.644 [2024-04-16 20:45:35.549147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.644 [2024-04-16 20:45:35.549164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:44.644 [2024-04-16 20:45:35.549173] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:44.644 [2024-04-16 20:45:35.549202] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:44.644 [2024-04-16 20:45:35.549210] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:07:44.644 [2024-04-16 20:45:35.549216] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.644 [2024-04-16 20:45:35.549220] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c900180 name raid_bdev1, state configuring 00:07:44.644 request: 00:07:44.644 { 00:07:44.644 "name": "raid_bdev1", 00:07:44.644 "raid_level": "raid0", 00:07:44.644 "base_bdevs": [ 00:07:44.644 "malloc1", 00:07:44.644 "malloc2", 00:07:44.644 "malloc3" 00:07:44.644 ], 00:07:44.644 "superblock": false, 00:07:44.644 "strip_size_kb": 64, 00:07:44.644 "method": "bdev_raid_create", 00:07:44.644 "req_id": 1 00:07:44.644 } 00:07:44.644 Got JSON-RPC error response 00:07:44.644 response: 00:07:44.644 { 00:07:44.644 "code": -17, 00:07:44.644 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:44.644 } 00:07:44.644 20:45:35 -- common/autotest_common.sh@643 -- # es=1 00:07:44.644 20:45:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.644 20:45:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.644 20:45:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.644 20:45:35 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.644 20:45:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:44.644 20:45:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:44.644 20:45:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:44.644 20:45:35 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.902 [2024-04-16 20:45:35.912919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.902 [2024-04-16 20:45:35.912982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.902 [2024-04-16 20:45:35.913008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8ffc80 00:07:44.902 [2024-04-16 20:45:35.913014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.902 [2024-04-16 20:45:35.913506] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.902 [2024-04-16 20:45:35.913530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.902 [2024-04-16 20:45:35.913551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:44.902 [2024-04-16 
20:45:35.913560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.902 pt1 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.902 20:45:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.159 20:45:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:45.159 "name": "raid_bdev1", 00:07:45.159 "uuid": "454d346e-fc32-11ee-80f8-ef3e42bb1492", 00:07:45.159 "strip_size_kb": 64, 00:07:45.159 "state": "configuring", 00:07:45.159 "raid_level": "raid0", 00:07:45.159 "superblock": true, 00:07:45.159 "num_base_bdevs": 3, 00:07:45.159 "num_base_bdevs_discovered": 1, 00:07:45.159 "num_base_bdevs_operational": 3, 00:07:45.159 "base_bdevs_list": [ 00:07:45.159 { 00:07:45.159 "name": "pt1", 00:07:45.159 "uuid": "487fa639-b0b3-cf5b-91fd-57f00fc14cf0", 00:07:45.159 "is_configured": true, 00:07:45.159 "data_offset": 2048, 00:07:45.159 "data_size": 63488 00:07:45.159 }, 00:07:45.159 { 00:07:45.159 "name": null, 00:07:45.160 "uuid": "52436266-de5c-015e-9310-419eb173f87a", 00:07:45.160 "is_configured": false, 00:07:45.160 "data_offset": 2048, 00:07:45.160 "data_size": 63488 00:07:45.160 }, 00:07:45.160 { 00:07:45.160 "name": null, 00:07:45.160 "uuid": "015c54cb-df92-e055-af38-62426289f3e1", 00:07:45.160 "is_configured": false, 00:07:45.160 "data_offset": 2048, 00:07:45.160 "data_size": 63488 00:07:45.160 } 00:07:45.160 ] 00:07:45.160 }' 00:07:45.160 20:45:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:45.160 20:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:45.418 20:45:36 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:07:45.418 20:45:36 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.676 [2024-04-16 20:45:36.541281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.676 [2024-04-16 20:45:36.541343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.676 [2024-04-16 20:45:36.541369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c900680 00:07:45.676 [2024-04-16 20:45:36.541376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.676 [2024-04-16 20:45:36.541467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.676 [2024-04-16 20:45:36.541474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.676 [2024-04-16 20:45:36.541492] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev pt2 00:07:45.676 [2024-04-16 20:45:36.541498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.676 pt2 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:45.676 [2024-04-16 20:45:36.725382] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.676 20:45:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.933 20:45:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:45.933 "name": "raid_bdev1", 00:07:45.933 "uuid": "454d346e-fc32-11ee-80f8-ef3e42bb1492", 00:07:45.933 "strip_size_kb": 64, 00:07:45.933 "state": "configuring", 00:07:45.933 "raid_level": "raid0", 00:07:45.933 "superblock": true, 00:07:45.933 "num_base_bdevs": 3, 00:07:45.933 "num_base_bdevs_discovered": 1, 00:07:45.933 "num_base_bdevs_operational": 3, 00:07:45.933 "base_bdevs_list": [ 00:07:45.933 { 00:07:45.933 "name": "pt1", 00:07:45.933 "uuid": "487fa639-b0b3-cf5b-91fd-57f00fc14cf0", 00:07:45.933 "is_configured": true, 00:07:45.933 "data_offset": 2048, 00:07:45.933 "data_size": 63488 00:07:45.933 }, 00:07:45.933 { 00:07:45.933 "name": null, 00:07:45.933 "uuid": "52436266-de5c-015e-9310-419eb173f87a", 00:07:45.933 "is_configured": false, 00:07:45.933 "data_offset": 2048, 00:07:45.933 "data_size": 63488 00:07:45.933 }, 00:07:45.933 { 00:07:45.933 "name": null, 00:07:45.933 "uuid": "015c54cb-df92-e055-af38-62426289f3e1", 00:07:45.933 "is_configured": false, 00:07:45.933 "data_offset": 2048, 00:07:45.933 "data_size": 63488 00:07:45.933 } 00:07:45.933 ] 00:07:45.933 }' 00:07:45.933 20:45:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:45.933 20:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:46.191 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:46.191 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:46.191 20:45:37 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.454 [2024-04-16 20:45:37.329708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.454 [2024-04-16 20:45:37.329755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.454 [2024-04-16 20:45:37.329781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c900680 00:07:46.454 [2024-04-16 20:45:37.329787] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.454 [2024-04-16 20:45:37.329877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.454 [2024-04-16 20:45:37.329883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.454 [2024-04-16 20:45:37.329899] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:46.454 [2024-04-16 20:45:37.329905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.454 pt2 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:46.454 [2024-04-16 20:45:37.513803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:46.454 [2024-04-16 20:45:37.513846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.454 [2024-04-16 20:45:37.513869] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c900400 00:07:46.454 [2024-04-16 20:45:37.513876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.454 [2024-04-16 20:45:37.513958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.454 [2024-04-16 20:45:37.513964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:46.454 [2024-04-16 20:45:37.513981] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:07:46.454 [2024-04-16 20:45:37.513987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:46.454 [2024-04-16 20:45:37.514009] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c8ff780 00:07:46.454 [2024-04-16 20:45:37.514012] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:46.454 [2024-04-16 20:45:37.514027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c962e20 00:07:46.454 [2024-04-16 20:45:37.514062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c8ff780 00:07:46.454 [2024-04-16 20:45:37.514064] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c8ff780 00:07:46.454 [2024-04-16 20:45:37.514079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.454 pt3 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:46.454 20:45:37 
-- bdev/bdev_raid.sh@125 -- # local tmp 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.454 20:45:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.720 20:45:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:46.720 "name": "raid_bdev1", 00:07:46.720 "uuid": "454d346e-fc32-11ee-80f8-ef3e42bb1492", 00:07:46.720 "strip_size_kb": 64, 00:07:46.720 "state": "online", 00:07:46.720 "raid_level": "raid0", 00:07:46.720 "superblock": true, 00:07:46.720 "num_base_bdevs": 3, 00:07:46.720 "num_base_bdevs_discovered": 3, 00:07:46.720 "num_base_bdevs_operational": 3, 00:07:46.720 "base_bdevs_list": [ 00:07:46.720 { 00:07:46.720 "name": "pt1", 00:07:46.720 "uuid": "487fa639-b0b3-cf5b-91fd-57f00fc14cf0", 00:07:46.720 "is_configured": true, 00:07:46.720 "data_offset": 2048, 00:07:46.720 "data_size": 63488 00:07:46.720 }, 00:07:46.720 { 00:07:46.720 "name": "pt2", 00:07:46.720 "uuid": "52436266-de5c-015e-9310-419eb173f87a", 00:07:46.720 "is_configured": true, 00:07:46.720 "data_offset": 2048, 00:07:46.720 "data_size": 63488 00:07:46.720 }, 00:07:46.720 { 00:07:46.720 "name": "pt3", 00:07:46.720 "uuid": "015c54cb-df92-e055-af38-62426289f3e1", 00:07:46.720 "is_configured": true, 00:07:46.720 "data_offset": 2048, 00:07:46.720 "data_size": 63488 00:07:46.720 } 00:07:46.720 ] 00:07:46.720 }' 00:07:46.720 20:45:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:46.720 20:45:37 -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 20:45:37 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:46.978 20:45:37 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:47.237 [2024-04-16 20:45:38.154182] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@430 -- # '[' 454d346e-fc32-11ee-80f8-ef3e42bb1492 '!=' 454d346e-fc32-11ee-80f8-ef3e42bb1492 ']' 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@511 -- # killprocess 49762 00:07:47.237 20:45:38 -- common/autotest_common.sh@926 -- # '[' -z 49762 ']' 00:07:47.237 20:45:38 -- common/autotest_common.sh@930 -- # kill -0 49762 00:07:47.237 20:45:38 -- common/autotest_common.sh@931 -- # uname 00:07:47.237 20:45:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:47.237 20:45:38 -- common/autotest_common.sh@934 -- # tail -1 00:07:47.237 20:45:38 -- common/autotest_common.sh@934 -- # ps -c -o command 49762 00:07:47.237 20:45:38 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:47.237 20:45:38 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:47.237 killing process with pid 49762 00:07:47.237 20:45:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49762' 00:07:47.237 20:45:38 -- common/autotest_common.sh@945 -- # kill 49762 00:07:47.237 [2024-04-16 20:45:38.187891] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.237 [2024-04-16 20:45:38.187906] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.237 [2024-04-16 20:45:38.187928] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.237 
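Every verify_raid_bdev_state call traced in this run reduces to the same query: dump all raid bdevs, filter one out by name with jq, and compare the reported fields against expectations. A condensed sketch of that check, with the jq expression and field names taken verbatim from the trace (the exact set of fields the helper compares is an assumption):

    # Sketch only -- the state check behind verify_raid_bdev_state.
    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<<"$info")                    # configuring | online | offline
    discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
    [[ "$state" == "online" && "$discovered" == "3" ]] || echo "state mismatch: $info"
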
[2024-04-16 20:45:38.187931] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8ff780 name raid_bdev1, state offline 00:07:47.237 20:45:38 -- common/autotest_common.sh@950 -- # wait 49762 00:07:47.237 [2024-04-16 20:45:38.201700] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.237 20:45:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:47.237 00:07:47.237 real 0m6.930s 00:07:47.237 user 0m11.815s 00:07:47.237 sys 0m1.321s 00:07:47.237 20:45:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.237 20:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:47.237 ************************************ 00:07:47.237 END TEST raid_superblock_test 00:07:47.237 ************************************ 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:07:47.496 20:45:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:47.496 20:45:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.496 20:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:47.496 ************************************ 00:07:47.496 START TEST raid_state_function_test 00:07:47.496 ************************************ 00:07:47.496 20:45:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=49943 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49943' 00:07:47.496 Process raid pid: 49943 00:07:47.496 20:45:38 -- 
bdev/bdev_raid.sh@228 -- # waitforlisten 49943 /var/tmp/spdk-raid.sock 00:07:47.496 20:45:38 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:47.496 20:45:38 -- common/autotest_common.sh@819 -- # '[' -z 49943 ']' 00:07:47.496 20:45:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:47.496 20:45:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:47.496 20:45:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:47.496 20:45:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.496 20:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:47.496 [2024-04-16 20:45:38.412010] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:47.496 [2024-04-16 20:45:38.412287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:47.755 EAL: TSC is not safe to use in SMP mode 00:07:47.755 EAL: TSC is not invariant 00:07:47.755 [2024-04-16 20:45:38.841229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.013 [2024-04-16 20:45:38.932843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.013 [2024-04-16 20:45:38.933259] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.013 [2024-04-16 20:45:38.933268] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.271 20:45:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:48.271 20:45:39 -- common/autotest_common.sh@852 -- # return 0 00:07:48.271 20:45:39 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:48.530 [2024-04-16 20:45:39.540731] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.530 [2024-04-16 20:45:39.540786] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.530 [2024-04-16 20:45:39.540790] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.530 [2024-04-16 20:45:39.540796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.530 [2024-04-16 20:45:39.540799] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:48.530 [2024-04-16 20:45:39.540804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:48.530 20:45:39 -- 
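The raid_state_function_test pass that follows drives the same RPC surface with -r concat and no superblock (the -s flag is omitted this time). Assembling the three-member concat array the test works toward looks roughly like this, with every command taken from the trace (only the grouping into one script is an assumption):

    # Sketch only -- the concat array this test builds up to.
    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "$b"   # base bdevs must exist first
    done
    $RPC bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
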
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.530 20:45:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.787 20:45:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:48.787 "name": "Existed_Raid", 00:07:48.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.787 "strip_size_kb": 64, 00:07:48.787 "state": "configuring", 00:07:48.787 "raid_level": "concat", 00:07:48.787 "superblock": false, 00:07:48.787 "num_base_bdevs": 3, 00:07:48.787 "num_base_bdevs_discovered": 0, 00:07:48.787 "num_base_bdevs_operational": 3, 00:07:48.787 "base_bdevs_list": [ 00:07:48.787 { 00:07:48.787 "name": "BaseBdev1", 00:07:48.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.787 "is_configured": false, 00:07:48.787 "data_offset": 0, 00:07:48.787 "data_size": 0 00:07:48.787 }, 00:07:48.787 { 00:07:48.787 "name": "BaseBdev2", 00:07:48.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.787 "is_configured": false, 00:07:48.787 "data_offset": 0, 00:07:48.787 "data_size": 0 00:07:48.787 }, 00:07:48.787 { 00:07:48.787 "name": "BaseBdev3", 00:07:48.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.787 "is_configured": false, 00:07:48.787 "data_offset": 0, 00:07:48.787 "data_size": 0 00:07:48.787 } 00:07:48.787 ] 00:07:48.787 }' 00:07:48.788 20:45:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:48.788 20:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:49.054 20:45:40 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:49.320 [2024-04-16 20:45:40.181042] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.320 [2024-04-16 20:45:40.181062] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa8e500 name Existed_Raid, state configuring 00:07:49.320 20:45:40 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:49.320 [2024-04-16 20:45:40.365132] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.320 [2024-04-16 20:45:40.365190] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.320 [2024-04-16 20:45:40.365194] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.320 [2024-04-16 20:45:40.365200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.320 [2024-04-16 20:45:40.365203] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.320 [2024-04-16 20:45:40.365208] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.320 20:45:40 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.578 [2024-04-16 20:45:40.545977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.578 BaseBdev1 00:07:49.578 20:45:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:49.578 20:45:40 -- common/autotest_common.sh@887 -- # local 
bdev_name=BaseBdev1 00:07:49.578 20:45:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:49.578 20:45:40 -- common/autotest_common.sh@889 -- # local i 00:07:49.578 20:45:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:49.578 20:45:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:49.578 20:45:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:49.836 20:45:40 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.836 [ 00:07:49.836 { 00:07:49.836 "name": "BaseBdev1", 00:07:49.836 "aliases": [ 00:07:49.836 "496c2439-fc32-11ee-80f8-ef3e42bb1492" 00:07:49.836 ], 00:07:49.836 "product_name": "Malloc disk", 00:07:49.836 "block_size": 512, 00:07:49.836 "num_blocks": 65536, 00:07:49.836 "uuid": "496c2439-fc32-11ee-80f8-ef3e42bb1492", 00:07:49.836 "assigned_rate_limits": { 00:07:49.836 "rw_ios_per_sec": 0, 00:07:49.836 "rw_mbytes_per_sec": 0, 00:07:49.836 "r_mbytes_per_sec": 0, 00:07:49.836 "w_mbytes_per_sec": 0 00:07:49.836 }, 00:07:49.836 "claimed": true, 00:07:49.836 "claim_type": "exclusive_write", 00:07:49.836 "zoned": false, 00:07:49.836 "supported_io_types": { 00:07:49.836 "read": true, 00:07:49.836 "write": true, 00:07:49.836 "unmap": true, 00:07:49.836 "write_zeroes": true, 00:07:49.836 "flush": true, 00:07:49.836 "reset": true, 00:07:49.836 "compare": false, 00:07:49.836 "compare_and_write": false, 00:07:49.836 "abort": true, 00:07:49.836 "nvme_admin": false, 00:07:49.836 "nvme_io": false 00:07:49.836 }, 00:07:49.836 "memory_domains": [ 00:07:49.836 { 00:07:49.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.836 "dma_device_type": 2 00:07:49.836 } 00:07:49.836 ], 00:07:49.836 "driver_specific": {} 00:07:49.836 } 00:07:49.836 ] 00:07:49.836 20:45:40 -- common/autotest_common.sh@895 -- # return 0 00:07:49.836 20:45:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:49.836 20:45:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.837 20:45:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.095 20:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:50.095 "name": "Existed_Raid", 00:07:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.095 "strip_size_kb": 64, 00:07:50.095 "state": "configuring", 00:07:50.095 "raid_level": "concat", 00:07:50.095 "superblock": false, 00:07:50.095 "num_base_bdevs": 3, 00:07:50.095 "num_base_bdevs_discovered": 1, 00:07:50.095 "num_base_bdevs_operational": 3, 00:07:50.095 "base_bdevs_list": [ 00:07:50.095 { 00:07:50.095 "name": "BaseBdev1", 
00:07:50.095 "uuid": "496c2439-fc32-11ee-80f8-ef3e42bb1492", 00:07:50.095 "is_configured": true, 00:07:50.095 "data_offset": 0, 00:07:50.095 "data_size": 65536 00:07:50.095 }, 00:07:50.095 { 00:07:50.095 "name": "BaseBdev2", 00:07:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.095 "is_configured": false, 00:07:50.095 "data_offset": 0, 00:07:50.095 "data_size": 0 00:07:50.095 }, 00:07:50.095 { 00:07:50.095 "name": "BaseBdev3", 00:07:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.095 "is_configured": false, 00:07:50.095 "data_offset": 0, 00:07:50.095 "data_size": 0 00:07:50.095 } 00:07:50.095 ] 00:07:50.095 }' 00:07:50.095 20:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:50.095 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:07:50.353 20:45:41 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:50.611 [2024-04-16 20:45:41.537687] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.611 [2024-04-16 20:45:41.537713] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa8e500 name Existed_Raid, state configuring 00:07:50.611 20:45:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:50.611 20:45:41 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:50.611 [2024-04-16 20:45:41.713783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.611 [2024-04-16 20:45:41.714411] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.611 [2024-04-16 20:45:41.714446] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.611 [2024-04-16 20:45:41.714449] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:50.611 [2024-04-16 20:45:41.714455] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:50.869 "name": "Existed_Raid", 00:07:50.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.869 "strip_size_kb": 64, 
00:07:50.869 "state": "configuring", 00:07:50.869 "raid_level": "concat", 00:07:50.869 "superblock": false, 00:07:50.869 "num_base_bdevs": 3, 00:07:50.869 "num_base_bdevs_discovered": 1, 00:07:50.869 "num_base_bdevs_operational": 3, 00:07:50.869 "base_bdevs_list": [ 00:07:50.869 { 00:07:50.869 "name": "BaseBdev1", 00:07:50.869 "uuid": "496c2439-fc32-11ee-80f8-ef3e42bb1492", 00:07:50.869 "is_configured": true, 00:07:50.869 "data_offset": 0, 00:07:50.869 "data_size": 65536 00:07:50.869 }, 00:07:50.869 { 00:07:50.869 "name": "BaseBdev2", 00:07:50.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.869 "is_configured": false, 00:07:50.869 "data_offset": 0, 00:07:50.869 "data_size": 0 00:07:50.869 }, 00:07:50.869 { 00:07:50.869 "name": "BaseBdev3", 00:07:50.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.869 "is_configured": false, 00:07:50.869 "data_offset": 0, 00:07:50.869 "data_size": 0 00:07:50.869 } 00:07:50.869 ] 00:07:50.869 }' 00:07:50.869 20:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:50.869 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:07:51.127 20:45:42 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.385 [2024-04-16 20:45:42.358208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.385 BaseBdev2 00:07:51.385 20:45:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:51.385 20:45:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:51.385 20:45:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:51.385 20:45:42 -- common/autotest_common.sh@889 -- # local i 00:07:51.385 20:45:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:51.385 20:45:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:51.385 20:45:42 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:51.643 20:45:42 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.643 [ 00:07:51.643 { 00:07:51.643 "name": "BaseBdev2", 00:07:51.643 "aliases": [ 00:07:51.643 "4a80c385-fc32-11ee-80f8-ef3e42bb1492" 00:07:51.643 ], 00:07:51.643 "product_name": "Malloc disk", 00:07:51.643 "block_size": 512, 00:07:51.643 "num_blocks": 65536, 00:07:51.643 "uuid": "4a80c385-fc32-11ee-80f8-ef3e42bb1492", 00:07:51.643 "assigned_rate_limits": { 00:07:51.643 "rw_ios_per_sec": 0, 00:07:51.643 "rw_mbytes_per_sec": 0, 00:07:51.643 "r_mbytes_per_sec": 0, 00:07:51.643 "w_mbytes_per_sec": 0 00:07:51.643 }, 00:07:51.643 "claimed": true, 00:07:51.643 "claim_type": "exclusive_write", 00:07:51.643 "zoned": false, 00:07:51.643 "supported_io_types": { 00:07:51.643 "read": true, 00:07:51.643 "write": true, 00:07:51.643 "unmap": true, 00:07:51.643 "write_zeroes": true, 00:07:51.643 "flush": true, 00:07:51.643 "reset": true, 00:07:51.643 "compare": false, 00:07:51.643 "compare_and_write": false, 00:07:51.643 "abort": true, 00:07:51.643 "nvme_admin": false, 00:07:51.643 "nvme_io": false 00:07:51.643 }, 00:07:51.643 "memory_domains": [ 00:07:51.643 { 00:07:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.643 "dma_device_type": 2 00:07:51.643 } 00:07:51.643 ], 00:07:51.643 "driver_specific": {} 00:07:51.643 } 00:07:51.643 ] 00:07:51.643 20:45:42 -- common/autotest_common.sh@895 -- # return 0 00:07:51.643 20:45:42 -- 
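
Note: the test grows the array one base bdev at a time. Each bdev_malloc_create (32 MiB with 512-byte blocks, exactly as logged) is claimed by the raid module as soon as examine runs, which is why the BaseBdev2 JSON above already shows claim_type exclusive_write. The step just taken, in isolation:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Create the next missing base bdev; the configuring raid claims it at once.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    "$rpc" -s "$sock" bdev_wait_for_examine
    # num_base_bdevs_discovered rises from 1 to 2; state stays "configuring"
    # until the third base bdev appears.
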
bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.643 20:45:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.902 20:45:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:51.902 "name": "Existed_Raid", 00:07:51.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.902 "strip_size_kb": 64, 00:07:51.902 "state": "configuring", 00:07:51.902 "raid_level": "concat", 00:07:51.902 "superblock": false, 00:07:51.902 "num_base_bdevs": 3, 00:07:51.902 "num_base_bdevs_discovered": 2, 00:07:51.902 "num_base_bdevs_operational": 3, 00:07:51.902 "base_bdevs_list": [ 00:07:51.902 { 00:07:51.902 "name": "BaseBdev1", 00:07:51.902 "uuid": "496c2439-fc32-11ee-80f8-ef3e42bb1492", 00:07:51.902 "is_configured": true, 00:07:51.902 "data_offset": 0, 00:07:51.902 "data_size": 65536 00:07:51.902 }, 00:07:51.902 { 00:07:51.902 "name": "BaseBdev2", 00:07:51.902 "uuid": "4a80c385-fc32-11ee-80f8-ef3e42bb1492", 00:07:51.902 "is_configured": true, 00:07:51.902 "data_offset": 0, 00:07:51.902 "data_size": 65536 00:07:51.902 }, 00:07:51.902 { 00:07:51.902 "name": "BaseBdev3", 00:07:51.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.902 "is_configured": false, 00:07:51.902 "data_offset": 0, 00:07:51.902 "data_size": 0 00:07:51.902 } 00:07:51.902 ] 00:07:51.902 }' 00:07:51.902 20:45:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:51.902 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:07:52.161 20:45:43 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:07:52.419 [2024-04-16 20:45:43.338667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:52.419 [2024-04-16 20:45:43.338692] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa8ea00 00:07:52.419 [2024-04-16 20:45:43.338696] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:52.419 [2024-04-16 20:45:43.338713] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aaf1ec0 00:07:52.419 [2024-04-16 20:45:43.338788] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa8ea00 00:07:52.419 [2024-04-16 20:45:43.338791] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aa8ea00 00:07:52.419 [2024-04-16 20:45:43.338817] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.419 
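
Note: with the third base bdev in place the raid configures and comes online, and the logged geometry checks out. No superblock is in play in this first test, so concat simply sums whole base bdevs:

    3 * 65536 blocks = 196608 blocks of 512 bytes (96 MiB)

which matches the "blockcnt 196608, blocklen 512" line above.
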
BaseBdev3 00:07:52.419 20:45:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:07:52.419 20:45:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:07:52.419 20:45:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:52.419 20:45:43 -- common/autotest_common.sh@889 -- # local i 00:07:52.419 20:45:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:52.419 20:45:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:52.419 20:45:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:52.678 20:45:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:52.678 [ 00:07:52.678 { 00:07:52.678 "name": "BaseBdev3", 00:07:52.678 "aliases": [ 00:07:52.678 "4b165ef6-fc32-11ee-80f8-ef3e42bb1492" 00:07:52.678 ], 00:07:52.678 "product_name": "Malloc disk", 00:07:52.678 "block_size": 512, 00:07:52.678 "num_blocks": 65536, 00:07:52.678 "uuid": "4b165ef6-fc32-11ee-80f8-ef3e42bb1492", 00:07:52.678 "assigned_rate_limits": { 00:07:52.678 "rw_ios_per_sec": 0, 00:07:52.678 "rw_mbytes_per_sec": 0, 00:07:52.678 "r_mbytes_per_sec": 0, 00:07:52.678 "w_mbytes_per_sec": 0 00:07:52.678 }, 00:07:52.678 "claimed": true, 00:07:52.678 "claim_type": "exclusive_write", 00:07:52.678 "zoned": false, 00:07:52.678 "supported_io_types": { 00:07:52.678 "read": true, 00:07:52.678 "write": true, 00:07:52.678 "unmap": true, 00:07:52.678 "write_zeroes": true, 00:07:52.678 "flush": true, 00:07:52.678 "reset": true, 00:07:52.678 "compare": false, 00:07:52.678 "compare_and_write": false, 00:07:52.678 "abort": true, 00:07:52.678 "nvme_admin": false, 00:07:52.678 "nvme_io": false 00:07:52.678 }, 00:07:52.678 "memory_domains": [ 00:07:52.678 { 00:07:52.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.678 "dma_device_type": 2 00:07:52.678 } 00:07:52.678 ], 00:07:52.678 "driver_specific": {} 00:07:52.678 } 00:07:52.678 ] 00:07:52.678 20:45:43 -- common/autotest_common.sh@895 -- # return 0 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.678 20:45:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.936 20:45:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:52.936 "name": "Existed_Raid", 00:07:52.936 "uuid": "4b1663fd-fc32-11ee-80f8-ef3e42bb1492", 00:07:52.936 "strip_size_kb": 64, 00:07:52.936 "state": "online", 00:07:52.936 
"raid_level": "concat", 00:07:52.936 "superblock": false, 00:07:52.936 "num_base_bdevs": 3, 00:07:52.936 "num_base_bdevs_discovered": 3, 00:07:52.936 "num_base_bdevs_operational": 3, 00:07:52.936 "base_bdevs_list": [ 00:07:52.936 { 00:07:52.936 "name": "BaseBdev1", 00:07:52.936 "uuid": "496c2439-fc32-11ee-80f8-ef3e42bb1492", 00:07:52.936 "is_configured": true, 00:07:52.936 "data_offset": 0, 00:07:52.936 "data_size": 65536 00:07:52.936 }, 00:07:52.936 { 00:07:52.936 "name": "BaseBdev2", 00:07:52.936 "uuid": "4a80c385-fc32-11ee-80f8-ef3e42bb1492", 00:07:52.936 "is_configured": true, 00:07:52.936 "data_offset": 0, 00:07:52.936 "data_size": 65536 00:07:52.936 }, 00:07:52.936 { 00:07:52.936 "name": "BaseBdev3", 00:07:52.936 "uuid": "4b165ef6-fc32-11ee-80f8-ef3e42bb1492", 00:07:52.936 "is_configured": true, 00:07:52.936 "data_offset": 0, 00:07:52.936 "data_size": 65536 00:07:52.936 } 00:07:52.936 ] 00:07:52.936 }' 00:07:52.936 20:45:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:52.936 20:45:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.194 20:45:44 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:53.452 [2024-04-16 20:45:44.347018] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.452 [2024-04-16 20:45:44.347037] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.452 [2024-04-16 20:45:44.347049] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:53.452 "name": "Existed_Raid", 00:07:53.452 "uuid": "4b1663fd-fc32-11ee-80f8-ef3e42bb1492", 00:07:53.452 "strip_size_kb": 64, 00:07:53.452 "state": "offline", 00:07:53.452 "raid_level": "concat", 00:07:53.452 "superblock": false, 00:07:53.452 "num_base_bdevs": 3, 00:07:53.452 "num_base_bdevs_discovered": 2, 00:07:53.452 "num_base_bdevs_operational": 2, 00:07:53.452 "base_bdevs_list": [ 00:07:53.452 { 00:07:53.452 "name": null, 00:07:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.452 
"is_configured": false, 00:07:53.452 "data_offset": 0, 00:07:53.452 "data_size": 65536 00:07:53.452 }, 00:07:53.452 { 00:07:53.452 "name": "BaseBdev2", 00:07:53.452 "uuid": "4a80c385-fc32-11ee-80f8-ef3e42bb1492", 00:07:53.452 "is_configured": true, 00:07:53.452 "data_offset": 0, 00:07:53.452 "data_size": 65536 00:07:53.452 }, 00:07:53.452 { 00:07:53.452 "name": "BaseBdev3", 00:07:53.452 "uuid": "4b165ef6-fc32-11ee-80f8-ef3e42bb1492", 00:07:53.452 "is_configured": true, 00:07:53.452 "data_offset": 0, 00:07:53.452 "data_size": 65536 00:07:53.452 } 00:07:53.452 ] 00:07:53.452 }' 00:07:53.452 20:45:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:53.452 20:45:44 -- common/autotest_common.sh@10 -- # set +x 00:07:53.711 20:45:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:53.711 20:45:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:53.711 20:45:44 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.711 20:45:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:54.013 20:45:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:54.013 20:45:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.013 20:45:45 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:54.271 [2024-04-16 20:45:45.187997] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.271 20:45:45 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:07:54.529 [2024-04-16 20:45:45.552801] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:54.529 [2024-04-16 20:45:45.552829] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa8ea00 name Existed_Raid, state offline 00:07:54.529 20:45:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:54.529 20:45:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:54.529 20:45:45 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.529 20:45:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.788 20:45:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:54.788 20:45:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:54.788 20:45:45 -- bdev/bdev_raid.sh@287 -- # killprocess 49943 00:07:54.788 20:45:45 -- common/autotest_common.sh@926 -- # '[' -z 49943 ']' 00:07:54.788 20:45:45 -- common/autotest_common.sh@930 -- # kill -0 49943 00:07:54.788 20:45:45 -- common/autotest_common.sh@931 -- # uname 00:07:54.788 20:45:45 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:54.788 20:45:45 -- common/autotest_common.sh@934 -- # ps -c -o command 49943 00:07:54.788 20:45:45 -- common/autotest_common.sh@934 -- # tail -1 00:07:54.788 20:45:45 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:07:54.788 20:45:45 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:54.788 killing process with pid 49943 00:07:54.788 20:45:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49943' 00:07:54.788 20:45:45 -- common/autotest_common.sh@945 -- # kill 49943 00:07:54.788 [2024-04-16 20:45:45.817107] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.788 [2024-04-16 20:45:45.817146] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.788 20:45:45 -- common/autotest_common.sh@950 -- # wait 49943 00:07:55.048 20:45:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:55.048 00:07:55.048 real 0m7.561s 00:07:55.048 user 0m13.144s 00:07:55.048 sys 0m1.304s 00:07:55.048 20:45:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.048 20:45:45 -- common/autotest_common.sh@10 -- # set +x 00:07:55.048 ************************************ 00:07:55.048 END TEST raid_state_function_test 00:07:55.048 ************************************ 00:07:55.048 20:45:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:07:55.048 20:45:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:55.048 20:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.048 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:07:55.048 ************************************ 00:07:55.048 START TEST raid_state_function_test_sb 00:07:55.048 ************************************ 00:07:55.048 20:45:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=50176 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50176' 00:07:55.048 Process raid pid: 50176 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:55.048 20:45:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50176 /var/tmp/spdk-raid.sock 00:07:55.048 20:45:46 -- common/autotest_common.sh@819 -- # '[' -z 50176 ']' 00:07:55.048 20:45:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:55.048 20:45:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:55.048 20:45:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:55.048 20:45:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.048 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:07:55.048 [2024-04-16 20:45:46.028157] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:07:55.048 [2024-04-16 20:45:46.028529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:55.617 EAL: TSC is not safe to use in SMP mode 00:07:55.617 EAL: TSC is not invariant 00:07:55.617 [2024-04-16 20:45:46.452142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.617 [2024-04-16 20:45:46.541015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.617 [2024-04-16 20:45:46.541422] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.617 [2024-04-16 20:45:46.541431] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.875 20:45:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.875 20:45:46 -- common/autotest_common.sh@852 -- # return 0 00:07:55.875 20:45:46 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:56.132 [2024-04-16 20:45:47.080800] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.132 [2024-04-16 20:45:47.080861] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.132 [2024-04-16 20:45:47.080865] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.132 [2024-04-16 20:45:47.080871] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.132 [2024-04-16 20:45:47.080874] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.132 [2024-04-16 20:45:47.080879] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:56.132 20:45:47 -- 
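
Note: raid_state_function_test_sb repeats the same state walk with one difference: bdev_raid_create now gets -s, so a superblock is written to each base bdev. The create call, as issued a few lines above:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # With -s, the head of each base bdev is reserved for the superblock, so
    # the JSON below reports data_offset 2048 and data_size 63488 per base
    # bdev instead of the 0/65536 seen in the non-superblock run.
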
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:56.132 20:45:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:56.133 20:45:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.391 20:45:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:56.391 "name": "Existed_Raid", 00:07:56.391 "uuid": "4d516397-fc32-11ee-80f8-ef3e42bb1492", 00:07:56.391 "strip_size_kb": 64, 00:07:56.391 "state": "configuring", 00:07:56.391 "raid_level": "concat", 00:07:56.391 "superblock": true, 00:07:56.391 "num_base_bdevs": 3, 00:07:56.391 "num_base_bdevs_discovered": 0, 00:07:56.391 "num_base_bdevs_operational": 3, 00:07:56.391 "base_bdevs_list": [ 00:07:56.391 { 00:07:56.391 "name": "BaseBdev1", 00:07:56.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.391 "is_configured": false, 00:07:56.391 "data_offset": 0, 00:07:56.391 "data_size": 0 00:07:56.391 }, 00:07:56.391 { 00:07:56.391 "name": "BaseBdev2", 00:07:56.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.391 "is_configured": false, 00:07:56.391 "data_offset": 0, 00:07:56.391 "data_size": 0 00:07:56.391 }, 00:07:56.391 { 00:07:56.391 "name": "BaseBdev3", 00:07:56.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.391 "is_configured": false, 00:07:56.391 "data_offset": 0, 00:07:56.391 "data_size": 0 00:07:56.391 } 00:07:56.391 ] 00:07:56.391 }' 00:07:56.391 20:45:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:56.391 20:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 20:45:47 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:56.649 [2024-04-16 20:45:47.709023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.649 [2024-04-16 20:45:47.709046] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b767500 name Existed_Raid, state configuring 00:07:56.649 20:45:47 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:56.906 [2024-04-16 20:45:47.885108] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.907 [2024-04-16 20:45:47.885160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.907 [2024-04-16 20:45:47.885164] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.907 [2024-04-16 20:45:47.885170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.907 [2024-04-16 20:45:47.885172] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.907 [2024-04-16 20:45:47.885177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.907 20:45:47 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.164 [2024-04-16 20:45:48.045936] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.164 BaseBdev1 00:07:57.164 20:45:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:57.164 20:45:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:57.164 20:45:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:57.164 20:45:48 -- common/autotest_common.sh@889 -- # local i 00:07:57.164 20:45:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:57.164 20:45:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:57.164 20:45:48 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:57.164 20:45:48 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.421 [ 00:07:57.421 { 00:07:57.421 "name": "BaseBdev1", 00:07:57.421 "aliases": [ 00:07:57.421 "4de48aad-fc32-11ee-80f8-ef3e42bb1492" 00:07:57.421 ], 00:07:57.421 "product_name": "Malloc disk", 00:07:57.421 "block_size": 512, 00:07:57.421 "num_blocks": 65536, 00:07:57.421 "uuid": "4de48aad-fc32-11ee-80f8-ef3e42bb1492", 00:07:57.421 "assigned_rate_limits": { 00:07:57.421 "rw_ios_per_sec": 0, 00:07:57.421 "rw_mbytes_per_sec": 0, 00:07:57.421 "r_mbytes_per_sec": 0, 00:07:57.421 "w_mbytes_per_sec": 0 00:07:57.421 }, 00:07:57.421 "claimed": true, 00:07:57.421 "claim_type": "exclusive_write", 00:07:57.421 "zoned": false, 00:07:57.421 "supported_io_types": { 00:07:57.421 "read": true, 00:07:57.421 "write": true, 00:07:57.421 "unmap": true, 00:07:57.421 "write_zeroes": true, 00:07:57.421 "flush": true, 00:07:57.421 "reset": true, 00:07:57.421 "compare": false, 00:07:57.421 "compare_and_write": false, 00:07:57.421 "abort": true, 00:07:57.421 "nvme_admin": false, 00:07:57.421 "nvme_io": false 00:07:57.421 }, 00:07:57.421 "memory_domains": [ 00:07:57.421 { 00:07:57.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.421 "dma_device_type": 2 00:07:57.421 } 00:07:57.421 ], 00:07:57.421 "driver_specific": {} 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 20:45:48 -- common/autotest_common.sh@895 -- # return 0 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.421 20:45:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.679 20:45:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:57.679 "name": "Existed_Raid", 00:07:57.679 "uuid": "4dcc1dde-fc32-11ee-80f8-ef3e42bb1492", 00:07:57.679 "strip_size_kb": 64, 00:07:57.679 "state": "configuring", 00:07:57.679 "raid_level": "concat", 
00:07:57.679 "superblock": true, 00:07:57.679 "num_base_bdevs": 3, 00:07:57.679 "num_base_bdevs_discovered": 1, 00:07:57.679 "num_base_bdevs_operational": 3, 00:07:57.679 "base_bdevs_list": [ 00:07:57.679 { 00:07:57.679 "name": "BaseBdev1", 00:07:57.679 "uuid": "4de48aad-fc32-11ee-80f8-ef3e42bb1492", 00:07:57.679 "is_configured": true, 00:07:57.679 "data_offset": 2048, 00:07:57.679 "data_size": 63488 00:07:57.679 }, 00:07:57.679 { 00:07:57.679 "name": "BaseBdev2", 00:07:57.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.679 "is_configured": false, 00:07:57.679 "data_offset": 0, 00:07:57.679 "data_size": 0 00:07:57.679 }, 00:07:57.679 { 00:07:57.679 "name": "BaseBdev3", 00:07:57.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.679 "is_configured": false, 00:07:57.679 "data_offset": 0, 00:07:57.679 "data_size": 0 00:07:57.679 } 00:07:57.679 ] 00:07:57.679 }' 00:07:57.679 20:45:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:57.679 20:45:48 -- common/autotest_common.sh@10 -- # set +x 00:07:57.937 20:45:48 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:57.937 [2024-04-16 20:45:49.021545] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.937 [2024-04-16 20:45:49.021570] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b767500 name Existed_Raid, state configuring 00:07:57.937 20:45:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:57.937 20:45:49 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:58.194 20:45:49 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.452 BaseBdev1 00:07:58.452 20:45:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:58.452 20:45:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:58.452 20:45:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:58.452 20:45:49 -- common/autotest_common.sh@889 -- # local i 00:07:58.452 20:45:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:58.452 20:45:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:58.452 20:45:49 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:58.710 20:45:49 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.710 [ 00:07:58.710 { 00:07:58.710 "name": "BaseBdev1", 00:07:58.710 "aliases": [ 00:07:58.710 "4eb09052-fc32-11ee-80f8-ef3e42bb1492" 00:07:58.710 ], 00:07:58.710 "product_name": "Malloc disk", 00:07:58.710 "block_size": 512, 00:07:58.710 "num_blocks": 65536, 00:07:58.710 "uuid": "4eb09052-fc32-11ee-80f8-ef3e42bb1492", 00:07:58.710 "assigned_rate_limits": { 00:07:58.710 "rw_ios_per_sec": 0, 00:07:58.710 "rw_mbytes_per_sec": 0, 00:07:58.710 "r_mbytes_per_sec": 0, 00:07:58.710 "w_mbytes_per_sec": 0 00:07:58.710 }, 00:07:58.710 "claimed": false, 00:07:58.710 "zoned": false, 00:07:58.710 "supported_io_types": { 00:07:58.710 "read": true, 00:07:58.710 "write": true, 00:07:58.710 "unmap": true, 00:07:58.710 "write_zeroes": true, 00:07:58.710 "flush": true, 00:07:58.710 "reset": true, 00:07:58.710 "compare": false, 00:07:58.710 "compare_and_write": false, 00:07:58.710 "abort": 
true, 00:07:58.710 "nvme_admin": false, 00:07:58.710 "nvme_io": false 00:07:58.710 }, 00:07:58.710 "memory_domains": [ 00:07:58.710 { 00:07:58.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.710 "dma_device_type": 2 00:07:58.710 } 00:07:58.710 ], 00:07:58.710 "driver_specific": {} 00:07:58.710 } 00:07:58.710 ] 00:07:58.710 20:45:49 -- common/autotest_common.sh@895 -- # return 0 00:07:58.710 20:45:49 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:07:58.968 [2024-04-16 20:45:49.930477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.968 [2024-04-16 20:45:49.930980] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.968 [2024-04-16 20:45:49.931014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.968 [2024-04-16 20:45:49.931017] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:58.968 [2024-04-16 20:45:49.931023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.968 20:45:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.226 20:45:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:59.226 "name": "Existed_Raid", 00:07:59.226 "uuid": "4f0436f7-fc32-11ee-80f8-ef3e42bb1492", 00:07:59.226 "strip_size_kb": 64, 00:07:59.226 "state": "configuring", 00:07:59.226 "raid_level": "concat", 00:07:59.226 "superblock": true, 00:07:59.226 "num_base_bdevs": 3, 00:07:59.226 "num_base_bdevs_discovered": 1, 00:07:59.226 "num_base_bdevs_operational": 3, 00:07:59.226 "base_bdevs_list": [ 00:07:59.226 { 00:07:59.226 "name": "BaseBdev1", 00:07:59.226 "uuid": "4eb09052-fc32-11ee-80f8-ef3e42bb1492", 00:07:59.226 "is_configured": true, 00:07:59.226 "data_offset": 2048, 00:07:59.226 "data_size": 63488 00:07:59.226 }, 00:07:59.226 { 00:07:59.226 "name": "BaseBdev2", 00:07:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.226 "is_configured": false, 00:07:59.226 "data_offset": 0, 00:07:59.226 "data_size": 0 00:07:59.226 }, 00:07:59.226 { 00:07:59.226 "name": "BaseBdev3", 00:07:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.226 "is_configured": false, 00:07:59.226 "data_offset": 0, 
00:07:59.226 "data_size": 0 00:07:59.226 } 00:07:59.226 ] 00:07:59.226 }' 00:07:59.226 20:45:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:59.226 20:45:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.484 20:45:50 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.484 [2024-04-16 20:45:50.546824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.484 BaseBdev2 00:07:59.484 20:45:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:59.484 20:45:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:59.484 20:45:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:59.484 20:45:50 -- common/autotest_common.sh@889 -- # local i 00:07:59.484 20:45:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:59.484 20:45:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:59.484 20:45:50 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:59.743 20:45:50 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.001 [ 00:08:00.001 { 00:08:00.001 "name": "BaseBdev2", 00:08:00.001 "aliases": [ 00:08:00.001 "4f623f27-fc32-11ee-80f8-ef3e42bb1492" 00:08:00.001 ], 00:08:00.001 "product_name": "Malloc disk", 00:08:00.001 "block_size": 512, 00:08:00.001 "num_blocks": 65536, 00:08:00.001 "uuid": "4f623f27-fc32-11ee-80f8-ef3e42bb1492", 00:08:00.001 "assigned_rate_limits": { 00:08:00.001 "rw_ios_per_sec": 0, 00:08:00.001 "rw_mbytes_per_sec": 0, 00:08:00.001 "r_mbytes_per_sec": 0, 00:08:00.001 "w_mbytes_per_sec": 0 00:08:00.001 }, 00:08:00.001 "claimed": true, 00:08:00.001 "claim_type": "exclusive_write", 00:08:00.001 "zoned": false, 00:08:00.001 "supported_io_types": { 00:08:00.001 "read": true, 00:08:00.001 "write": true, 00:08:00.001 "unmap": true, 00:08:00.001 "write_zeroes": true, 00:08:00.001 "flush": true, 00:08:00.001 "reset": true, 00:08:00.001 "compare": false, 00:08:00.001 "compare_and_write": false, 00:08:00.001 "abort": true, 00:08:00.001 "nvme_admin": false, 00:08:00.001 "nvme_io": false 00:08:00.001 }, 00:08:00.001 "memory_domains": [ 00:08:00.001 { 00:08:00.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.001 "dma_device_type": 2 00:08:00.001 } 00:08:00.001 ], 00:08:00.001 "driver_specific": {} 00:08:00.001 } 00:08:00.001 ] 00:08:00.001 20:45:50 -- common/autotest_common.sh@895 -- # return 0 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.001 20:45:50 -- 
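
Note: the superblock reservation also explains the smaller array that comes online a few lines below:

    65536 - 2048 = 63488 usable blocks per base bdev (the data_offset/data_size above)
    3 * 63488 = 190464 blocks

matching the "blockcnt 190464, blocklen 512" line that follows.
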
bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.001 20:45:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.001 20:45:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:00.001 "name": "Existed_Raid", 00:08:00.001 "uuid": "4f0436f7-fc32-11ee-80f8-ef3e42bb1492", 00:08:00.001 "strip_size_kb": 64, 00:08:00.001 "state": "configuring", 00:08:00.001 "raid_level": "concat", 00:08:00.001 "superblock": true, 00:08:00.001 "num_base_bdevs": 3, 00:08:00.001 "num_base_bdevs_discovered": 2, 00:08:00.001 "num_base_bdevs_operational": 3, 00:08:00.001 "base_bdevs_list": [ 00:08:00.001 { 00:08:00.001 "name": "BaseBdev1", 00:08:00.001 "uuid": "4eb09052-fc32-11ee-80f8-ef3e42bb1492", 00:08:00.001 "is_configured": true, 00:08:00.001 "data_offset": 2048, 00:08:00.001 "data_size": 63488 00:08:00.001 }, 00:08:00.001 { 00:08:00.001 "name": "BaseBdev2", 00:08:00.001 "uuid": "4f623f27-fc32-11ee-80f8-ef3e42bb1492", 00:08:00.001 "is_configured": true, 00:08:00.001 "data_offset": 2048, 00:08:00.001 "data_size": 63488 00:08:00.001 }, 00:08:00.001 { 00:08:00.001 "name": "BaseBdev3", 00:08:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.001 "is_configured": false, 00:08:00.001 "data_offset": 0, 00:08:00.001 "data_size": 0 00:08:00.001 } 00:08:00.001 ] 00:08:00.001 }' 00:08:00.001 20:45:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:00.002 20:45:51 -- common/autotest_common.sh@10 -- # set +x 00:08:00.568 20:45:51 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.568 [2024-04-16 20:45:51.547186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.568 [2024-04-16 20:45:51.547252] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b767a00 00:08:00.568 [2024-04-16 20:45:51.547257] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:00.568 [2024-04-16 20:45:51.547271] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7caec0 00:08:00.568 [2024-04-16 20:45:51.547304] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b767a00 00:08:00.568 [2024-04-16 20:45:51.547306] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b767a00 00:08:00.568 [2024-04-16 20:45:51.547319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.568 BaseBdev3 00:08:00.568 20:45:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:00.568 20:45:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:00.568 20:45:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:00.568 20:45:51 -- common/autotest_common.sh@889 -- # local i 00:08:00.568 20:45:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:00.569 20:45:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:00.569 20:45:51 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:00.827 20:45:51 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.827 [ 00:08:00.827 { 00:08:00.827 "name": "BaseBdev3", 00:08:00.827 "aliases": [ 
00:08:00.827 "4ffae48e-fc32-11ee-80f8-ef3e42bb1492" 00:08:00.827 ], 00:08:00.827 "product_name": "Malloc disk", 00:08:00.827 "block_size": 512, 00:08:00.827 "num_blocks": 65536, 00:08:00.827 "uuid": "4ffae48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:00.827 "assigned_rate_limits": { 00:08:00.827 "rw_ios_per_sec": 0, 00:08:00.827 "rw_mbytes_per_sec": 0, 00:08:00.827 "r_mbytes_per_sec": 0, 00:08:00.827 "w_mbytes_per_sec": 0 00:08:00.827 }, 00:08:00.827 "claimed": true, 00:08:00.827 "claim_type": "exclusive_write", 00:08:00.827 "zoned": false, 00:08:00.827 "supported_io_types": { 00:08:00.827 "read": true, 00:08:00.827 "write": true, 00:08:00.827 "unmap": true, 00:08:00.827 "write_zeroes": true, 00:08:00.827 "flush": true, 00:08:00.827 "reset": true, 00:08:00.827 "compare": false, 00:08:00.827 "compare_and_write": false, 00:08:00.827 "abort": true, 00:08:00.827 "nvme_admin": false, 00:08:00.827 "nvme_io": false 00:08:00.827 }, 00:08:00.827 "memory_domains": [ 00:08:00.827 { 00:08:00.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.827 "dma_device_type": 2 00:08:00.827 } 00:08:00.827 ], 00:08:00.827 "driver_specific": {} 00:08:00.827 } 00:08:00.827 ] 00:08:00.827 20:45:51 -- common/autotest_common.sh@895 -- # return 0 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.827 20:45:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.086 20:45:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:01.086 "name": "Existed_Raid", 00:08:01.086 "uuid": "4f0436f7-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.086 "strip_size_kb": 64, 00:08:01.086 "state": "online", 00:08:01.086 "raid_level": "concat", 00:08:01.086 "superblock": true, 00:08:01.086 "num_base_bdevs": 3, 00:08:01.086 "num_base_bdevs_discovered": 3, 00:08:01.086 "num_base_bdevs_operational": 3, 00:08:01.086 "base_bdevs_list": [ 00:08:01.086 { 00:08:01.086 "name": "BaseBdev1", 00:08:01.086 "uuid": "4eb09052-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.086 "is_configured": true, 00:08:01.086 "data_offset": 2048, 00:08:01.086 "data_size": 63488 00:08:01.086 }, 00:08:01.086 { 00:08:01.086 "name": "BaseBdev2", 00:08:01.086 "uuid": "4f623f27-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.086 "is_configured": true, 00:08:01.086 "data_offset": 2048, 00:08:01.086 "data_size": 63488 00:08:01.086 }, 00:08:01.086 { 00:08:01.086 "name": "BaseBdev3", 00:08:01.086 "uuid": "4ffae48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.086 "is_configured": true, 00:08:01.086 "data_offset": 2048, 00:08:01.086 "data_size": 63488 
00:08:01.086 } 00:08:01.086 ] 00:08:01.086 }' 00:08:01.086 20:45:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:01.086 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 20:45:52 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:01.604 [2024-04-16 20:45:52.519444] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.604 [2024-04-16 20:45:52.519465] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.604 [2024-04-16 20:45:52.519476] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:01.604 "name": "Existed_Raid", 00:08:01.604 "uuid": "4f0436f7-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.604 "strip_size_kb": 64, 00:08:01.604 "state": "offline", 00:08:01.604 "raid_level": "concat", 00:08:01.604 "superblock": true, 00:08:01.604 "num_base_bdevs": 3, 00:08:01.604 "num_base_bdevs_discovered": 2, 00:08:01.604 "num_base_bdevs_operational": 2, 00:08:01.604 "base_bdevs_list": [ 00:08:01.604 { 00:08:01.604 "name": null, 00:08:01.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.604 "is_configured": false, 00:08:01.604 "data_offset": 2048, 00:08:01.604 "data_size": 63488 00:08:01.604 }, 00:08:01.604 { 00:08:01.604 "name": "BaseBdev2", 00:08:01.604 "uuid": "4f623f27-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.604 "is_configured": true, 00:08:01.604 "data_offset": 2048, 00:08:01.604 "data_size": 63488 00:08:01.604 }, 00:08:01.604 { 00:08:01.604 "name": "BaseBdev3", 00:08:01.604 "uuid": "4ffae48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:01.604 "is_configured": true, 00:08:01.604 "data_offset": 2048, 00:08:01.604 "data_size": 63488 00:08:01.604 } 00:08:01.604 ] 00:08:01.604 }' 00:08:01.604 20:45:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:01.604 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:01.862 20:45:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:01.862 20:45:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:01.862 20:45:52 
-- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.862 20:45:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:02.126 20:45:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:02.126 20:45:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.126 20:45:53 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:02.392 [2024-04-16 20:45:53.284308] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.393 20:45:53 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:02.651 [2024-04-16 20:45:53.645038] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:02.651 [2024-04-16 20:45:53.645062] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b767a00 name Existed_Raid, state offline 00:08:02.651 20:45:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:02.651 20:45:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:02.651 20:45:53 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.651 20:45:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.910 20:45:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:02.910 20:45:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:02.910 20:45:53 -- bdev/bdev_raid.sh@287 -- # killprocess 50176 00:08:02.910 20:45:53 -- common/autotest_common.sh@926 -- # '[' -z 50176 ']' 00:08:02.910 20:45:53 -- common/autotest_common.sh@930 -- # kill -0 50176 00:08:02.910 20:45:53 -- common/autotest_common.sh@931 -- # uname 00:08:02.910 20:45:53 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:02.910 20:45:53 -- common/autotest_common.sh@934 -- # ps -c -o command 50176 00:08:02.910 20:45:53 -- common/autotest_common.sh@934 -- # tail -1 00:08:02.910 20:45:53 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:02.910 20:45:53 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:02.910 killing process with pid 50176 00:08:02.910 20:45:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50176' 00:08:02.910 20:45:53 -- common/autotest_common.sh@945 -- # kill 50176 00:08:02.910 [2024-04-16 20:45:53.836331] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.910 [2024-04-16 20:45:53.836368] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.910 20:45:53 -- common/autotest_common.sh@950 -- # wait 50176 00:08:02.910 20:45:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:02.910 00:08:02.910 real 0m7.966s 00:08:02.910 user 0m13.678s 00:08:02.910 sys 0m1.555s 00:08:02.910 20:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.910 20:45:53 -- common/autotest_common.sh@10 -- # set 
+x 00:08:02.910 ************************************ 00:08:02.910 END TEST raid_state_function_test_sb 00:08:02.910 ************************************ 00:08:02.910 20:45:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:02.910 20:45:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:02.910 20:45:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.910 20:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:03.170 ************************************ 00:08:03.170 START TEST raid_superblock_test 00:08:03.170 ************************************ 00:08:03.170 20:45:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=50412 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50412 /var/tmp/spdk-raid.sock 00:08:03.170 20:45:54 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:03.170 20:45:54 -- common/autotest_common.sh@819 -- # '[' -z 50412 ']' 00:08:03.170 20:45:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:03.170 20:45:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:03.170 20:45:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:03.170 20:45:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.170 20:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:03.170 [2024-04-16 20:45:54.048216] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
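For reference, raid_superblock_test (like each test in this suite) drives a dedicated SPDK app over its own RPC socket: it launches test/app/bdev_svc/bdev_svc with -r /var/tmp/spdk-raid.sock, blocks in waitforlisten until the socket answers, issues all of its scripts/rpc.py calls against that socket, and finally tears the app down through killprocess. A minimal sketch of that harness pattern, using the paths and socket name from this run; the polling loop and the kill/wait pair below are simplified stand-ins for the waitforlisten()/killprocess() helpers that common/autotest_common.sh actually provides:

    rootdir=/usr/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock

    # Start the bare bdev service with raid debug logging, as in the xtrace above.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
    svc_pid=$!

    # Simplified waitforlisten: poll until the RPC socket accepts requests.
    while ! "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Test body, e.g. create a 32 MiB malloc bdev with 512-byte blocks,
    # exactly as the bdev_malloc_create calls in this log do.
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 512 -b malloc1

    # Simplified killprocess: stop the service and reap it.
    kill "$svc_pid"
    wait "$svc_pid"
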
00:08:03.170 [2024-04-16 20:45:54.048468] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:03.441 EAL: TSC is not safe to use in SMP mode 00:08:03.441 EAL: TSC is not invariant 00:08:03.441 [2024-04-16 20:45:54.474038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.699 [2024-04-16 20:45:54.563375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.699 [2024-04-16 20:45:54.563775] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.699 [2024-04-16 20:45:54.563784] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.957 20:45:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:03.957 20:45:54 -- common/autotest_common.sh@852 -- # return 0 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:03.957 20:45:54 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:04.215 malloc1 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.215 [2024-04-16 20:45:55.291137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.215 [2024-04-16 20:45:55.291200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.215 [2024-04-16 20:45:55.291672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d434780 00:08:04.215 [2024-04-16 20:45:55.291696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.215 [2024-04-16 20:45:55.292337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.215 [2024-04-16 20:45:55.292365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.215 pt1 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.215 20:45:55 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:04.473 malloc2 00:08:04.473 20:45:55 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.731 [2024-04-16 20:45:55.655275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.731 [2024-04-16 20:45:55.655331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.731 [2024-04-16 20:45:55.655356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d434c80 00:08:04.731 [2024-04-16 20:45:55.655361] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.731 [2024-04-16 20:45:55.655865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.731 [2024-04-16 20:45:55.655890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.731 pt2 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.731 20:45:55 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:04.731 malloc3 00:08:04.989 20:45:55 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:04.989 [2024-04-16 20:45:56.027397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:04.989 [2024-04-16 20:45:56.027458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.989 [2024-04-16 20:45:56.027482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d435180 00:08:04.989 [2024-04-16 20:45:56.027487] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.989 [2024-04-16 20:45:56.027925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.989 [2024-04-16 20:45:56.027951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:04.989 pt3 00:08:04.989 20:45:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:04.989 20:45:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:04.989 20:45:56 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:05.248 [2024-04-16 20:45:56.183458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.248 [2024-04-16 20:45:56.183838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.248 [2024-04-16 20:45:56.183857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:05.248 [2024-04-16 20:45:56.183906] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d435400 00:08:05.248 [2024-04-16 20:45:56.183910] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:05.248 [2024-04-16 20:45:56.183937] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d497e20 00:08:05.248 [2024-04-16 20:45:56.183987] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d435400 00:08:05.248 [2024-04-16 20:45:56.183990] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d435400 00:08:05.248 [2024-04-16 20:45:56.184010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.248 20:45:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.506 20:45:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:05.506 "name": "raid_bdev1", 00:08:05.506 "uuid": "52be57fd-fc32-11ee-80f8-ef3e42bb1492", 00:08:05.506 "strip_size_kb": 64, 00:08:05.506 "state": "online", 00:08:05.506 "raid_level": "concat", 00:08:05.506 "superblock": true, 00:08:05.506 "num_base_bdevs": 3, 00:08:05.506 "num_base_bdevs_discovered": 3, 00:08:05.506 "num_base_bdevs_operational": 3, 00:08:05.506 "base_bdevs_list": [ 00:08:05.506 { 00:08:05.506 "name": "pt1", 00:08:05.506 "uuid": "db2176ed-5ad8-2f51-ac01-c09083fdc051", 00:08:05.506 "is_configured": true, 00:08:05.506 "data_offset": 2048, 00:08:05.506 "data_size": 63488 00:08:05.506 }, 00:08:05.506 { 00:08:05.506 "name": "pt2", 00:08:05.506 "uuid": "aa0f7eda-0e62-e156-b53d-72ddf2a461b5", 00:08:05.506 "is_configured": true, 00:08:05.506 "data_offset": 2048, 00:08:05.506 "data_size": 63488 00:08:05.506 }, 00:08:05.506 { 00:08:05.506 "name": "pt3", 00:08:05.506 "uuid": "89db88cc-c3f1-0d5b-96db-443c357f76f3", 00:08:05.506 "is_configured": true, 00:08:05.506 "data_offset": 2048, 00:08:05.506 "data_size": 63488 00:08:05.506 } 00:08:05.506 ] 00:08:05.506 }' 00:08:05.506 20:45:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:05.506 20:45:56 -- common/autotest_common.sh@10 -- # set +x 00:08:05.764 20:45:56 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:05.764 20:45:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:05.765 [2024-04-16 20:45:56.827689] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.765 20:45:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=52be57fd-fc32-11ee-80f8-ef3e42bb1492 00:08:05.765 20:45:56 -- bdev/bdev_raid.sh@380 -- # '[' -z 52be57fd-fc32-11ee-80f8-ef3e42bb1492 ']' 00:08:05.765 20:45:56 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:06.023 [2024-04-16 20:45:57.011712] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.023 [2024-04-16 20:45:57.011732] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.023 [2024-04-16 20:45:57.011751] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.023 [2024-04-16 20:45:57.011764] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.023 [2024-04-16 20:45:57.011767] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d435400 name raid_bdev1, state offline 00:08:06.023 20:45:57 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.023 20:45:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:06.281 20:45:57 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:06.540 20:45:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:06.540 20:45:57 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:06.797 20:45:57 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:06.797 20:45:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:07.055 20:45:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:07.055 20:45:57 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:07.055 20:45:57 -- common/autotest_common.sh@640 -- # local es=0 00:08:07.055 20:45:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:07.056 20:45:57 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.056 20:45:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.056 20:45:57 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.056 20:45:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.056 20:45:57 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.056 20:45:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.056 20:45:57 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.056 20:45:57 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:07.056 20:45:57 -- common/autotest_common.sh@643 -- 
# /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:07.056 [2024-04-16 20:45:58.108102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:07.056 [2024-04-16 20:45:58.108539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:07.056 [2024-04-16 20:45:58.108557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:07.056 [2024-04-16 20:45:58.108568] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:07.056 [2024-04-16 20:45:58.108596] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:07.056 [2024-04-16 20:45:58.108604] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:07.056 [2024-04-16 20:45:58.108610] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.056 [2024-04-16 20:45:58.108614] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d435180 name raid_bdev1, state configuring 00:08:07.056 request: 00:08:07.056 { 00:08:07.056 "name": "raid_bdev1", 00:08:07.056 "raid_level": "concat", 00:08:07.056 "base_bdevs": [ 00:08:07.056 "malloc1", 00:08:07.056 "malloc2", 00:08:07.056 "malloc3" 00:08:07.056 ], 00:08:07.056 "superblock": false, 00:08:07.056 "strip_size_kb": 64, 00:08:07.056 "method": "bdev_raid_create", 00:08:07.056 "req_id": 1 00:08:07.056 } 00:08:07.056 Got JSON-RPC error response 00:08:07.056 response: 00:08:07.056 { 00:08:07.056 "code": -17, 00:08:07.056 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:07.056 } 00:08:07.056 20:45:58 -- common/autotest_common.sh@643 -- # es=1 00:08:07.056 20:45:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:07.056 20:45:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:07.056 20:45:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:07.056 20:45:58 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.056 20:45:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:07.314 20:45:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:07.314 20:45:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:07.314 20:45:58 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.573 [2024-04-16 20:45:58.476218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.573 [2024-04-16 20:45:58.476267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.573 [2024-04-16 20:45:58.476292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d434c80 00:08:07.573 [2024-04-16 20:45:58.476298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.573 [2024-04-16 20:45:58.476769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.573 [2024-04-16 20:45:58.476793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.573 [2024-04-16 20:45:58.476829] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:07.573 [2024-04-16 
20:45:58.476839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.573 pt1 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:07.573 "name": "raid_bdev1", 00:08:07.573 "uuid": "52be57fd-fc32-11ee-80f8-ef3e42bb1492", 00:08:07.573 "strip_size_kb": 64, 00:08:07.573 "state": "configuring", 00:08:07.573 "raid_level": "concat", 00:08:07.573 "superblock": true, 00:08:07.573 "num_base_bdevs": 3, 00:08:07.573 "num_base_bdevs_discovered": 1, 00:08:07.573 "num_base_bdevs_operational": 3, 00:08:07.573 "base_bdevs_list": [ 00:08:07.573 { 00:08:07.573 "name": "pt1", 00:08:07.573 "uuid": "db2176ed-5ad8-2f51-ac01-c09083fdc051", 00:08:07.573 "is_configured": true, 00:08:07.573 "data_offset": 2048, 00:08:07.573 "data_size": 63488 00:08:07.573 }, 00:08:07.573 { 00:08:07.573 "name": null, 00:08:07.573 "uuid": "aa0f7eda-0e62-e156-b53d-72ddf2a461b5", 00:08:07.573 "is_configured": false, 00:08:07.573 "data_offset": 2048, 00:08:07.573 "data_size": 63488 00:08:07.573 }, 00:08:07.573 { 00:08:07.573 "name": null, 00:08:07.573 "uuid": "89db88cc-c3f1-0d5b-96db-443c357f76f3", 00:08:07.573 "is_configured": false, 00:08:07.573 "data_offset": 2048, 00:08:07.573 "data_size": 63488 00:08:07.573 } 00:08:07.573 ] 00:08:07.573 }' 00:08:07.573 20:45:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:07.573 20:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:07.831 20:45:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:08:07.831 20:45:58 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.089 [2024-04-16 20:45:59.100413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.089 [2024-04-16 20:45:59.100457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.089 [2024-04-16 20:45:59.100482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d435680 00:08:08.089 [2024-04-16 20:45:59.100488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.089 [2024-04-16 20:45:59.100580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.089 [2024-04-16 20:45:59.100590] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.089 [2024-04-16 20:45:59.100609] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:08:08.089 [2024-04-16 20:45:59.100616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.089 pt2 00:08:08.089 20:45:59 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:08.348 [2024-04-16 20:45:59.284476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.348 20:45:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.606 20:45:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:08.606 "name": "raid_bdev1", 00:08:08.606 "uuid": "52be57fd-fc32-11ee-80f8-ef3e42bb1492", 00:08:08.606 "strip_size_kb": 64, 00:08:08.606 "state": "configuring", 00:08:08.606 "raid_level": "concat", 00:08:08.606 "superblock": true, 00:08:08.606 "num_base_bdevs": 3, 00:08:08.606 "num_base_bdevs_discovered": 1, 00:08:08.606 "num_base_bdevs_operational": 3, 00:08:08.606 "base_bdevs_list": [ 00:08:08.606 { 00:08:08.606 "name": "pt1", 00:08:08.606 "uuid": "db2176ed-5ad8-2f51-ac01-c09083fdc051", 00:08:08.606 "is_configured": true, 00:08:08.606 "data_offset": 2048, 00:08:08.606 "data_size": 63488 00:08:08.606 }, 00:08:08.606 { 00:08:08.606 "name": null, 00:08:08.606 "uuid": "aa0f7eda-0e62-e156-b53d-72ddf2a461b5", 00:08:08.606 "is_configured": false, 00:08:08.606 "data_offset": 2048, 00:08:08.606 "data_size": 63488 00:08:08.606 }, 00:08:08.606 { 00:08:08.606 "name": null, 00:08:08.606 "uuid": "89db88cc-c3f1-0d5b-96db-443c357f76f3", 00:08:08.606 "is_configured": false, 00:08:08.606 "data_offset": 2048, 00:08:08.606 "data_size": 63488 00:08:08.606 } 00:08:08.606 ] 00:08:08.606 }' 00:08:08.606 20:45:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:08.606 20:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.864 [2024-04-16 20:45:59.932675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.864 [2024-04-16 20:45:59.932723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.864 [2024-04-16 20:45:59.932748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d435680 00:08:08.864 [2024-04-16 20:45:59.932754] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.864 [2024-04-16 20:45:59.932844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.864 [2024-04-16 20:45:59.932851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.864 [2024-04-16 20:45:59.932869] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:08.864 [2024-04-16 20:45:59.932876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.864 pt2 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:08.864 20:45:59 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:09.128 [2024-04-16 20:46:00.116730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:09.128 [2024-04-16 20:46:00.116777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.128 [2024-04-16 20:46:00.116801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d435400 00:08:09.128 [2024-04-16 20:46:00.116807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.128 [2024-04-16 20:46:00.116894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.128 [2024-04-16 20:46:00.116901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:09.128 [2024-04-16 20:46:00.116920] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:09.128 [2024-04-16 20:46:00.116926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:09.128 [2024-04-16 20:46:00.116947] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d434780 00:08:09.128 [2024-04-16 20:46:00.116950] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:09.128 [2024-04-16 20:46:00.116965] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d497e20 00:08:09.128 [2024-04-16 20:46:00.117000] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d434780 00:08:09.128 [2024-04-16 20:46:00.117003] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d434780 00:08:09.128 [2024-04-16 20:46:00.117034] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.128 pt3 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:09.128 
20:46:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.128 20:46:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.394 20:46:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:09.394 "name": "raid_bdev1", 00:08:09.394 "uuid": "52be57fd-fc32-11ee-80f8-ef3e42bb1492", 00:08:09.394 "strip_size_kb": 64, 00:08:09.394 "state": "online", 00:08:09.394 "raid_level": "concat", 00:08:09.394 "superblock": true, 00:08:09.394 "num_base_bdevs": 3, 00:08:09.394 "num_base_bdevs_discovered": 3, 00:08:09.394 "num_base_bdevs_operational": 3, 00:08:09.394 "base_bdevs_list": [ 00:08:09.394 { 00:08:09.394 "name": "pt1", 00:08:09.394 "uuid": "db2176ed-5ad8-2f51-ac01-c09083fdc051", 00:08:09.394 "is_configured": true, 00:08:09.394 "data_offset": 2048, 00:08:09.394 "data_size": 63488 00:08:09.394 }, 00:08:09.394 { 00:08:09.394 "name": "pt2", 00:08:09.394 "uuid": "aa0f7eda-0e62-e156-b53d-72ddf2a461b5", 00:08:09.394 "is_configured": true, 00:08:09.394 "data_offset": 2048, 00:08:09.394 "data_size": 63488 00:08:09.394 }, 00:08:09.394 { 00:08:09.394 "name": "pt3", 00:08:09.394 "uuid": "89db88cc-c3f1-0d5b-96db-443c357f76f3", 00:08:09.394 "is_configured": true, 00:08:09.394 "data_offset": 2048, 00:08:09.394 "data_size": 63488 00:08:09.394 } 00:08:09.394 ] 00:08:09.394 }' 00:08:09.394 20:46:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:09.394 20:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:09.653 20:46:00 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:09.653 20:46:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:09.653 [2024-04-16 20:46:00.760951] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@430 -- # '[' 52be57fd-fc32-11ee-80f8-ef3e42bb1492 '!=' 52be57fd-fc32-11ee-80f8-ef3e42bb1492 ']' 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@511 -- # killprocess 50412 00:08:09.912 20:46:00 -- common/autotest_common.sh@926 -- # '[' -z 50412 ']' 00:08:09.912 20:46:00 -- common/autotest_common.sh@930 -- # kill -0 50412 00:08:09.912 20:46:00 -- common/autotest_common.sh@931 -- # uname 00:08:09.912 20:46:00 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:09.912 20:46:00 -- common/autotest_common.sh@934 -- # ps -c -o command 50412 00:08:09.912 20:46:00 -- common/autotest_common.sh@934 -- # tail -1 00:08:09.912 20:46:00 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:09.912 20:46:00 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:09.912 killing process with pid 50412 00:08:09.912 20:46:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50412' 00:08:09.912 20:46:00 -- common/autotest_common.sh@945 -- # kill 50412 00:08:09.912 [2024-04-16 20:46:00.794038] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.912 [2024-04-16 20:46:00.794074] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.912 [2024-04-16 20:46:00.794087] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
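The has_redundancy check exercised just above (bdev_raid.sh@434 -> @195 'case $1 in' -> @197 'return 1') is what makes these tests expect different behavior per RAID level: concat carries no redundancy, so deleting a base bdev is expected to take the array offline (as seen earlier when BaseBdev1 was removed from Existed_Raid), while a redundant level would be expected to stay online. The helper's exact body is not printed in this log, but the xtrace is consistent with a reconstruction along these lines (treat the raid1 branch as an assumption):

    # Plausible reconstruction of has_redundancy() from the xtrace; only the
    # 'case $1 in' and 'return 1' lines are visible in this log.
    has_redundancy() {
        case $1 in
            raid1)
                return 0   # mirrored: survives losing a base bdev
                ;;
            *)
                return 1   # concat/raid0: no redundancy
                ;;
        esac
    }

    # Usage matching the state-function test seen earlier:
    if has_redundancy concat; then
        expected_state=online
    else
        expected_state=offline
    fi
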
00:08:09.912 [2024-04-16 20:46:00.794091] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d434780 name raid_bdev1, state offline 00:08:09.912 20:46:00 -- common/autotest_common.sh@950 -- # wait 50412 00:08:09.912 [2024-04-16 20:46:00.807956] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:09.912 00:08:09.912 real 0m6.911s 00:08:09.912 user 0m11.870s 00:08:09.912 sys 0m1.237s 00:08:09.912 20:46:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.912 20:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:09.912 ************************************ 00:08:09.912 END TEST raid_superblock_test 00:08:09.912 ************************************ 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:08:09.912 20:46:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:09.912 20:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.912 20:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:09.912 ************************************ 00:08:09.912 START TEST raid_state_function_test 00:08:09.912 ************************************ 00:08:09.912 20:46:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:09.912 20:46:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=50593 00:08:09.912 20:46:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50593' 00:08:09.912 Process raid pid: 50593 00:08:09.913 20:46:01 -- bdev/bdev_raid.sh@225 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:09.913 20:46:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50593 /var/tmp/spdk-raid.sock 00:08:09.913 20:46:01 -- common/autotest_common.sh@819 -- # '[' -z 50593 ']' 00:08:09.913 20:46:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:09.913 20:46:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:09.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:09.913 20:46:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:09.913 20:46:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:09.913 20:46:01 -- common/autotest_common.sh@10 -- # set +x 00:08:09.913 [2024-04-16 20:46:01.013521] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:08:09.913 [2024-04-16 20:46:01.013862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:10.480 EAL: TSC is not safe to use in SMP mode 00:08:10.480 EAL: TSC is not invariant 00:08:10.480 [2024-04-16 20:46:01.437067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.480 [2024-04-16 20:46:01.527974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.480 [2024-04-16 20:46:01.528407] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.480 [2024-04-16 20:46:01.528415] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.046 20:46:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.047 20:46:01 -- common/autotest_common.sh@852 -- # return 0 00:08:11.047 20:46:01 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:11.047 [2024-04-16 20:46:02.067571] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.047 [2024-04-16 20:46:02.067623] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.047 [2024-04-16 20:46:02.067627] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.047 [2024-04-16 20:46:02.067633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.047 [2024-04-16 20:46:02.067636] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.047 [2024-04-16 20:46:02.067641] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.047 20:46:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.305 20:46:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:11.305 "name": "Existed_Raid", 00:08:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.305 "strip_size_kb": 0, 00:08:11.305 "state": "configuring", 00:08:11.305 "raid_level": "raid1", 00:08:11.305 "superblock": false, 00:08:11.305 "num_base_bdevs": 3, 00:08:11.305 "num_base_bdevs_discovered": 0, 00:08:11.305 "num_base_bdevs_operational": 3, 00:08:11.305 "base_bdevs_list": [ 00:08:11.305 { 00:08:11.305 "name": "BaseBdev1", 00:08:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.305 "is_configured": false, 00:08:11.305 "data_offset": 0, 00:08:11.305 "data_size": 0 00:08:11.305 }, 00:08:11.305 { 00:08:11.305 "name": "BaseBdev2", 00:08:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.305 "is_configured": false, 00:08:11.305 "data_offset": 0, 00:08:11.305 "data_size": 0 00:08:11.305 }, 00:08:11.305 { 00:08:11.305 "name": "BaseBdev3", 00:08:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.305 "is_configured": false, 00:08:11.305 "data_offset": 0, 00:08:11.305 "data_size": 0 00:08:11.305 } 00:08:11.305 ] 00:08:11.305 }' 00:08:11.305 20:46:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:11.305 20:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:11.563 20:46:02 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:11.821 [2024-04-16 20:46:02.715737] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.821 [2024-04-16 20:46:02.715759] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b7500 name Existed_Raid, state configuring 00:08:11.821 20:46:02 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:11.821 [2024-04-16 20:46:02.895791] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.821 [2024-04-16 20:46:02.895827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.821 [2024-04-16 20:46:02.895831] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.821 [2024-04-16 20:46:02.895837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.821 [2024-04-16 20:46:02.895839] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.821 [2024-04-16 20:46:02.895845] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.821 20:46:02 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:12.079 [2024-04-16 20:46:03.080616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.079 BaseBdev1 00:08:12.079 20:46:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:12.079 20:46:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:12.079 20:46:03 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:12.079 20:46:03 -- common/autotest_common.sh@889 -- # local i 00:08:12.079 20:46:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:12.079 20:46:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:12.079 20:46:03 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:12.338 20:46:03 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:12.338 [ 00:08:12.338 { 00:08:12.338 "name": "BaseBdev1", 00:08:12.338 "aliases": [ 00:08:12.338 "56daa758-fc32-11ee-80f8-ef3e42bb1492" 00:08:12.338 ], 00:08:12.338 "product_name": "Malloc disk", 00:08:12.338 "block_size": 512, 00:08:12.338 "num_blocks": 65536, 00:08:12.338 "uuid": "56daa758-fc32-11ee-80f8-ef3e42bb1492", 00:08:12.338 "assigned_rate_limits": { 00:08:12.338 "rw_ios_per_sec": 0, 00:08:12.338 "rw_mbytes_per_sec": 0, 00:08:12.338 "r_mbytes_per_sec": 0, 00:08:12.338 "w_mbytes_per_sec": 0 00:08:12.338 }, 00:08:12.338 "claimed": true, 00:08:12.338 "claim_type": "exclusive_write", 00:08:12.338 "zoned": false, 00:08:12.338 "supported_io_types": { 00:08:12.338 "read": true, 00:08:12.338 "write": true, 00:08:12.338 "unmap": true, 00:08:12.338 "write_zeroes": true, 00:08:12.338 "flush": true, 00:08:12.338 "reset": true, 00:08:12.338 "compare": false, 00:08:12.338 "compare_and_write": false, 00:08:12.338 "abort": true, 00:08:12.338 "nvme_admin": false, 00:08:12.338 "nvme_io": false 00:08:12.338 }, 00:08:12.338 "memory_domains": [ 00:08:12.338 { 00:08:12.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.338 "dma_device_type": 2 00:08:12.338 } 00:08:12.338 ], 00:08:12.338 "driver_specific": {} 00:08:12.338 } 00:08:12.338 ] 00:08:12.338 20:46:03 -- common/autotest_common.sh@895 -- # return 0 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.338 20:46:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.595 20:46:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:12.595 "name": "Existed_Raid", 00:08:12.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.595 "strip_size_kb": 0, 00:08:12.595 "state": "configuring", 00:08:12.595 "raid_level": "raid1", 00:08:12.595 "superblock": false, 00:08:12.595 "num_base_bdevs": 3, 00:08:12.595 "num_base_bdevs_discovered": 1, 00:08:12.595 "num_base_bdevs_operational": 3, 00:08:12.595 "base_bdevs_list": [ 00:08:12.595 { 00:08:12.595 "name": "BaseBdev1", 00:08:12.595 "uuid": 
"56daa758-fc32-11ee-80f8-ef3e42bb1492", 00:08:12.595 "is_configured": true, 00:08:12.595 "data_offset": 0, 00:08:12.595 "data_size": 65536 00:08:12.595 }, 00:08:12.595 { 00:08:12.595 "name": "BaseBdev2", 00:08:12.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.595 "is_configured": false, 00:08:12.595 "data_offset": 0, 00:08:12.595 "data_size": 0 00:08:12.595 }, 00:08:12.595 { 00:08:12.595 "name": "BaseBdev3", 00:08:12.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.595 "is_configured": false, 00:08:12.595 "data_offset": 0, 00:08:12.595 "data_size": 0 00:08:12.595 } 00:08:12.595 ] 00:08:12.595 }' 00:08:12.595 20:46:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:12.595 20:46:03 -- common/autotest_common.sh@10 -- # set +x 00:08:12.853 20:46:03 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:13.211 [2024-04-16 20:46:04.060132] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.211 [2024-04-16 20:46:04.060160] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b7500 name Existed_Raid, state configuring 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:13.211 [2024-04-16 20:46:04.244192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.211 [2024-04-16 20:46:04.244801] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.211 [2024-04-16 20:46:04.244839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.211 [2024-04-16 20:46:04.244843] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.211 [2024-04-16 20:46:04.244850] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.211 20:46:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.477 20:46:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:13.477 "name": "Existed_Raid", 00:08:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.477 "strip_size_kb": 0, 00:08:13.477 "state": "configuring", 
00:08:13.477 "raid_level": "raid1", 00:08:13.477 "superblock": false, 00:08:13.477 "num_base_bdevs": 3, 00:08:13.477 "num_base_bdevs_discovered": 1, 00:08:13.477 "num_base_bdevs_operational": 3, 00:08:13.477 "base_bdevs_list": [ 00:08:13.477 { 00:08:13.477 "name": "BaseBdev1", 00:08:13.477 "uuid": "56daa758-fc32-11ee-80f8-ef3e42bb1492", 00:08:13.477 "is_configured": true, 00:08:13.477 "data_offset": 0, 00:08:13.477 "data_size": 65536 00:08:13.477 }, 00:08:13.477 { 00:08:13.477 "name": "BaseBdev2", 00:08:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.477 "is_configured": false, 00:08:13.477 "data_offset": 0, 00:08:13.477 "data_size": 0 00:08:13.477 }, 00:08:13.477 { 00:08:13.477 "name": "BaseBdev3", 00:08:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.477 "is_configured": false, 00:08:13.477 "data_offset": 0, 00:08:13.477 "data_size": 0 00:08:13.477 } 00:08:13.477 ] 00:08:13.477 }' 00:08:13.477 20:46:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:13.477 20:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:13.734 20:46:04 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.992 [2024-04-16 20:46:04.900509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.992 BaseBdev2 00:08:13.992 20:46:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:13.992 20:46:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:13.992 20:46:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:13.992 20:46:04 -- common/autotest_common.sh@889 -- # local i 00:08:13.992 20:46:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:13.992 20:46:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:13.992 20:46:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:13.992 20:46:05 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.249 [ 00:08:14.249 { 00:08:14.249 "name": "BaseBdev2", 00:08:14.249 "aliases": [ 00:08:14.249 "57f07217-fc32-11ee-80f8-ef3e42bb1492" 00:08:14.249 ], 00:08:14.249 "product_name": "Malloc disk", 00:08:14.249 "block_size": 512, 00:08:14.249 "num_blocks": 65536, 00:08:14.249 "uuid": "57f07217-fc32-11ee-80f8-ef3e42bb1492", 00:08:14.249 "assigned_rate_limits": { 00:08:14.249 "rw_ios_per_sec": 0, 00:08:14.249 "rw_mbytes_per_sec": 0, 00:08:14.249 "r_mbytes_per_sec": 0, 00:08:14.249 "w_mbytes_per_sec": 0 00:08:14.249 }, 00:08:14.249 "claimed": true, 00:08:14.249 "claim_type": "exclusive_write", 00:08:14.249 "zoned": false, 00:08:14.249 "supported_io_types": { 00:08:14.249 "read": true, 00:08:14.249 "write": true, 00:08:14.249 "unmap": true, 00:08:14.249 "write_zeroes": true, 00:08:14.249 "flush": true, 00:08:14.249 "reset": true, 00:08:14.249 "compare": false, 00:08:14.249 "compare_and_write": false, 00:08:14.249 "abort": true, 00:08:14.249 "nvme_admin": false, 00:08:14.249 "nvme_io": false 00:08:14.249 }, 00:08:14.249 "memory_domains": [ 00:08:14.249 { 00:08:14.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.249 "dma_device_type": 2 00:08:14.249 } 00:08:14.249 ], 00:08:14.249 "driver_specific": {} 00:08:14.249 } 00:08:14.249 ] 00:08:14.249 20:46:05 -- common/autotest_common.sh@895 -- # return 0 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 
00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:14.249 20:46:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:14.250 20:46:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:14.250 20:46:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.250 20:46:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.507 20:46:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:14.507 "name": "Existed_Raid", 00:08:14.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.507 "strip_size_kb": 0, 00:08:14.507 "state": "configuring", 00:08:14.507 "raid_level": "raid1", 00:08:14.507 "superblock": false, 00:08:14.507 "num_base_bdevs": 3, 00:08:14.507 "num_base_bdevs_discovered": 2, 00:08:14.507 "num_base_bdevs_operational": 3, 00:08:14.507 "base_bdevs_list": [ 00:08:14.507 { 00:08:14.507 "name": "BaseBdev1", 00:08:14.507 "uuid": "56daa758-fc32-11ee-80f8-ef3e42bb1492", 00:08:14.507 "is_configured": true, 00:08:14.507 "data_offset": 0, 00:08:14.507 "data_size": 65536 00:08:14.507 }, 00:08:14.507 { 00:08:14.507 "name": "BaseBdev2", 00:08:14.507 "uuid": "57f07217-fc32-11ee-80f8-ef3e42bb1492", 00:08:14.507 "is_configured": true, 00:08:14.507 "data_offset": 0, 00:08:14.507 "data_size": 65536 00:08:14.507 }, 00:08:14.507 { 00:08:14.507 "name": "BaseBdev3", 00:08:14.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.507 "is_configured": false, 00:08:14.507 "data_offset": 0, 00:08:14.507 "data_size": 0 00:08:14.507 } 00:08:14.507 ] 00:08:14.507 }' 00:08:14.507 20:46:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:14.507 20:46:05 -- common/autotest_common.sh@10 -- # set +x 00:08:14.765 20:46:05 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:14.765 [2024-04-16 20:46:05.884769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.765 [2024-04-16 20:46:05.884814] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5b7a00 00:08:14.765 [2024-04-16 20:46:05.884818] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:14.765 [2024-04-16 20:46:05.884838] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b61aec0 00:08:14.765 [2024-04-16 20:46:05.884922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5b7a00 00:08:14.765 [2024-04-16 20:46:05.884926] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5b7a00 00:08:14.765 [2024-04-16 20:46:05.884953] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.023 BaseBdev3 00:08:15.023 20:46:05 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:15.023 20:46:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:15.023 20:46:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:15.023 20:46:05 -- common/autotest_common.sh@889 -- # local i 00:08:15.023 20:46:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:15.023 20:46:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:15.023 20:46:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:15.023 20:46:06 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:15.281 [ 00:08:15.281 { 00:08:15.281 "name": "BaseBdev3", 00:08:15.281 "aliases": [ 00:08:15.281 "5886a224-fc32-11ee-80f8-ef3e42bb1492" 00:08:15.281 ], 00:08:15.281 "product_name": "Malloc disk", 00:08:15.281 "block_size": 512, 00:08:15.281 "num_blocks": 65536, 00:08:15.281 "uuid": "5886a224-fc32-11ee-80f8-ef3e42bb1492", 00:08:15.281 "assigned_rate_limits": { 00:08:15.281 "rw_ios_per_sec": 0, 00:08:15.281 "rw_mbytes_per_sec": 0, 00:08:15.281 "r_mbytes_per_sec": 0, 00:08:15.281 "w_mbytes_per_sec": 0 00:08:15.281 }, 00:08:15.281 "claimed": true, 00:08:15.281 "claim_type": "exclusive_write", 00:08:15.281 "zoned": false, 00:08:15.281 "supported_io_types": { 00:08:15.281 "read": true, 00:08:15.281 "write": true, 00:08:15.281 "unmap": true, 00:08:15.281 "write_zeroes": true, 00:08:15.281 "flush": true, 00:08:15.281 "reset": true, 00:08:15.281 "compare": false, 00:08:15.281 "compare_and_write": false, 00:08:15.281 "abort": true, 00:08:15.281 "nvme_admin": false, 00:08:15.281 "nvme_io": false 00:08:15.281 }, 00:08:15.281 "memory_domains": [ 00:08:15.281 { 00:08:15.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.281 "dma_device_type": 2 00:08:15.281 } 00:08:15.281 ], 00:08:15.281 "driver_specific": {} 00:08:15.281 } 00:08:15.281 ] 00:08:15.281 20:46:06 -- common/autotest_common.sh@895 -- # return 0 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.281 20:46:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.540 20:46:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:15.540 "name": "Existed_Raid", 00:08:15.540 "uuid": "5886a7ec-fc32-11ee-80f8-ef3e42bb1492", 00:08:15.540 "strip_size_kb": 0, 00:08:15.540 "state": "online", 00:08:15.540 "raid_level": "raid1", 00:08:15.540 
"superblock": false, 00:08:15.540 "num_base_bdevs": 3, 00:08:15.540 "num_base_bdevs_discovered": 3, 00:08:15.540 "num_base_bdevs_operational": 3, 00:08:15.540 "base_bdevs_list": [ 00:08:15.540 { 00:08:15.540 "name": "BaseBdev1", 00:08:15.540 "uuid": "56daa758-fc32-11ee-80f8-ef3e42bb1492", 00:08:15.540 "is_configured": true, 00:08:15.540 "data_offset": 0, 00:08:15.540 "data_size": 65536 00:08:15.540 }, 00:08:15.540 { 00:08:15.540 "name": "BaseBdev2", 00:08:15.540 "uuid": "57f07217-fc32-11ee-80f8-ef3e42bb1492", 00:08:15.540 "is_configured": true, 00:08:15.540 "data_offset": 0, 00:08:15.540 "data_size": 65536 00:08:15.540 }, 00:08:15.540 { 00:08:15.540 "name": "BaseBdev3", 00:08:15.540 "uuid": "5886a224-fc32-11ee-80f8-ef3e42bb1492", 00:08:15.540 "is_configured": true, 00:08:15.540 "data_offset": 0, 00:08:15.540 "data_size": 65536 00:08:15.540 } 00:08:15.540 ] 00:08:15.540 }' 00:08:15.540 20:46:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:15.540 20:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:15.798 [2024-04-16 20:46:06.896935] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.798 20:46:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.056 20:46:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:16.056 "name": "Existed_Raid", 00:08:16.056 "uuid": "5886a7ec-fc32-11ee-80f8-ef3e42bb1492", 00:08:16.056 "strip_size_kb": 0, 00:08:16.056 "state": "online", 00:08:16.056 "raid_level": "raid1", 00:08:16.056 "superblock": false, 00:08:16.056 "num_base_bdevs": 3, 00:08:16.056 "num_base_bdevs_discovered": 2, 00:08:16.056 "num_base_bdevs_operational": 2, 00:08:16.056 "base_bdevs_list": [ 00:08:16.056 { 00:08:16.056 "name": null, 00:08:16.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.056 "is_configured": false, 00:08:16.056 "data_offset": 0, 00:08:16.056 "data_size": 65536 00:08:16.056 }, 00:08:16.056 { 00:08:16.056 "name": "BaseBdev2", 00:08:16.056 "uuid": "57f07217-fc32-11ee-80f8-ef3e42bb1492", 00:08:16.056 "is_configured": true, 00:08:16.056 "data_offset": 0, 00:08:16.056 "data_size": 
65536 00:08:16.056 }, 00:08:16.056 { 00:08:16.056 "name": "BaseBdev3", 00:08:16.056 "uuid": "5886a224-fc32-11ee-80f8-ef3e42bb1492", 00:08:16.056 "is_configured": true, 00:08:16.056 "data_offset": 0, 00:08:16.056 "data_size": 65536 00:08:16.056 } 00:08:16.056 ] 00:08:16.056 }' 00:08:16.056 20:46:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:16.056 20:46:07 -- common/autotest_common.sh@10 -- # set +x 00:08:16.315 20:46:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:16.315 20:46:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:16.315 20:46:07 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.315 20:46:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:16.573 20:46:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:16.573 20:46:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.573 20:46:07 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:16.830 [2024-04-16 20:46:07.709788] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.830 20:46:07 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:17.088 [2024-04-16 20:46:08.086528] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.088 [2024-04-16 20:46:08.086550] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.088 [2024-04-16 20:46:08.086561] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.088 [2024-04-16 20:46:08.091132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.088 [2024-04-16 20:46:08.091145] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b7a00 name Existed_Raid, state offline 00:08:17.088 20:46:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:17.088 20:46:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:17.088 20:46:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.088 20:46:08 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.347 20:46:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:17.347 20:46:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:17.347 20:46:08 -- bdev/bdev_raid.sh@287 -- # killprocess 50593 00:08:17.347 20:46:08 -- common/autotest_common.sh@926 -- # '[' -z 50593 ']' 00:08:17.347 20:46:08 -- common/autotest_common.sh@930 -- # kill -0 50593 00:08:17.347 20:46:08 -- common/autotest_common.sh@931 -- # uname 00:08:17.347 20:46:08 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:17.347 20:46:08 -- common/autotest_common.sh@934 -- # ps -c -o command 50593 00:08:17.347 20:46:08 -- 
common/autotest_common.sh@934 -- # tail -1 00:08:17.347 20:46:08 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:17.347 20:46:08 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:17.347 20:46:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50593' 00:08:17.347 killing process with pid 50593 00:08:17.347 20:46:08 -- common/autotest_common.sh@945 -- # kill 50593 00:08:17.347 [2024-04-16 20:46:08.316355] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.347 20:46:08 -- common/autotest_common.sh@950 -- # wait 50593 00:08:17.347 [2024-04-16 20:46:08.316406] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.347 20:46:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:17.347 00:08:17.347 real 0m7.459s 00:08:17.347 user 0m12.861s 00:08:17.347 sys 0m1.400s 00:08:17.347 20:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.347 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:08:17.347 ************************************ 00:08:17.347 END TEST raid_state_function_test 00:08:17.347 ************************************ 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:17.607 20:46:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:17.607 20:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.607 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 ************************************ 00:08:17.607 START TEST raid_state_function_test_sb 00:08:17.607 ************************************ 00:08:17.607 20:46:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=50826 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50826' 00:08:17.607 Process raid pid: 50826 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:17.607 20:46:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50826 /var/tmp/spdk-raid.sock 00:08:17.607 20:46:08 -- common/autotest_common.sh@819 -- # '[' -z 50826 ']' 00:08:17.607 20:46:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:17.607 20:46:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:17.607 20:46:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:17.607 20:46:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.607 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 [2024-04-16 20:46:08.524073] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:08:17.607 [2024-04-16 20:46:08.524411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:17.866 EAL: TSC is not safe to use in SMP mode 00:08:17.866 EAL: TSC is not invariant 00:08:17.866 [2024-04-16 20:46:08.949796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.125 [2024-04-16 20:46:09.043251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.125 [2024-04-16 20:46:09.043667] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.125 [2024-04-16 20:46:09.043676] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.383 20:46:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.383 20:46:09 -- common/autotest_common.sh@852 -- # return 0 00:08:18.383 20:46:09 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:18.642 [2024-04-16 20:46:09.594885] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.642 [2024-04-16 20:46:09.594930] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.642 [2024-04-16 20:46:09.594934] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.642 [2024-04-16 20:46:09.594940] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.642 [2024-04-16 20:46:09.594943] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.642 [2024-04-16 20:46:09.594948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.642 20:46:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:18.642 20:46:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:18.643 20:46:09 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.643 20:46:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.901 20:46:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:18.901 "name": "Existed_Raid", 00:08:18.901 "uuid": "5abcc408-fc32-11ee-80f8-ef3e42bb1492", 00:08:18.901 "strip_size_kb": 0, 00:08:18.901 "state": "configuring", 00:08:18.901 "raid_level": "raid1", 00:08:18.901 "superblock": true, 00:08:18.901 "num_base_bdevs": 3, 00:08:18.901 "num_base_bdevs_discovered": 0, 00:08:18.901 "num_base_bdevs_operational": 3, 00:08:18.901 "base_bdevs_list": [ 00:08:18.901 { 00:08:18.901 "name": "BaseBdev1", 00:08:18.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.901 "is_configured": false, 00:08:18.901 "data_offset": 0, 00:08:18.901 "data_size": 0 00:08:18.901 }, 00:08:18.901 { 00:08:18.901 "name": "BaseBdev2", 00:08:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.902 "is_configured": false, 00:08:18.902 "data_offset": 0, 00:08:18.902 "data_size": 0 00:08:18.902 }, 00:08:18.902 { 00:08:18.902 "name": "BaseBdev3", 00:08:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.902 "is_configured": false, 00:08:18.902 "data_offset": 0, 00:08:18.902 "data_size": 0 00:08:18.902 } 00:08:18.902 ] 00:08:18.902 }' 00:08:18.902 20:46:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:18.902 20:46:09 -- common/autotest_common.sh@10 -- # set +x 00:08:19.160 20:46:10 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:19.160 [2024-04-16 20:46:10.227014] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.160 [2024-04-16 20:46:10.227038] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b178500 name Existed_Raid, state configuring 00:08:19.160 20:46:10 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:19.418 [2024-04-16 20:46:10.411072] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.418 [2024-04-16 20:46:10.411116] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.418 [2024-04-16 20:46:10.411120] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.418 [2024-04-16 20:46:10.411126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.418 [2024-04-16 20:46:10.411128] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.418 [2024-04-16 20:46:10.411133] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.418 20:46:10 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.675 [2024-04-16 20:46:10.599869] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.675 BaseBdev1 00:08:19.675 20:46:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:19.675 20:46:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:19.675 20:46:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:19.675 20:46:10 -- common/autotest_common.sh@889 -- # local i 00:08:19.675 20:46:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:19.675 20:46:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:19.676 20:46:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:19.934 20:46:10 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.934 [ 00:08:19.934 { 00:08:19.934 "name": "BaseBdev1", 00:08:19.934 "aliases": [ 00:08:19.934 "5b55ffe3-fc32-11ee-80f8-ef3e42bb1492" 00:08:19.934 ], 00:08:19.934 "product_name": "Malloc disk", 00:08:19.934 "block_size": 512, 00:08:19.934 "num_blocks": 65536, 00:08:19.934 "uuid": "5b55ffe3-fc32-11ee-80f8-ef3e42bb1492", 00:08:19.934 "assigned_rate_limits": { 00:08:19.934 "rw_ios_per_sec": 0, 00:08:19.934 "rw_mbytes_per_sec": 0, 00:08:19.934 "r_mbytes_per_sec": 0, 00:08:19.934 "w_mbytes_per_sec": 0 00:08:19.934 }, 00:08:19.934 "claimed": true, 00:08:19.934 "claim_type": "exclusive_write", 00:08:19.934 "zoned": false, 00:08:19.934 "supported_io_types": { 00:08:19.934 "read": true, 00:08:19.934 "write": true, 00:08:19.934 "unmap": true, 00:08:19.934 "write_zeroes": true, 00:08:19.934 "flush": true, 00:08:19.934 "reset": true, 00:08:19.934 "compare": false, 00:08:19.934 "compare_and_write": false, 00:08:19.934 "abort": true, 00:08:19.934 "nvme_admin": false, 00:08:19.934 "nvme_io": false 00:08:19.934 }, 00:08:19.934 "memory_domains": [ 00:08:19.934 { 00:08:19.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.934 "dma_device_type": 2 00:08:19.934 } 00:08:19.934 ], 00:08:19.934 "driver_specific": {} 00:08:19.934 } 00:08:19.934 ] 00:08:19.934 20:46:10 -- common/autotest_common.sh@895 -- # return 0 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.934 20:46:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.192 20:46:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:20.192 "name": "Existed_Raid", 00:08:20.192 "uuid": "5b394e4e-fc32-11ee-80f8-ef3e42bb1492", 00:08:20.192 "strip_size_kb": 0, 00:08:20.192 "state": "configuring", 00:08:20.192 "raid_level": "raid1", 
00:08:20.192 "superblock": true, 00:08:20.192 "num_base_bdevs": 3, 00:08:20.192 "num_base_bdevs_discovered": 1, 00:08:20.192 "num_base_bdevs_operational": 3, 00:08:20.192 "base_bdevs_list": [ 00:08:20.192 { 00:08:20.192 "name": "BaseBdev1", 00:08:20.192 "uuid": "5b55ffe3-fc32-11ee-80f8-ef3e42bb1492", 00:08:20.192 "is_configured": true, 00:08:20.192 "data_offset": 2048, 00:08:20.192 "data_size": 63488 00:08:20.192 }, 00:08:20.192 { 00:08:20.192 "name": "BaseBdev2", 00:08:20.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.192 "is_configured": false, 00:08:20.192 "data_offset": 0, 00:08:20.192 "data_size": 0 00:08:20.192 }, 00:08:20.192 { 00:08:20.192 "name": "BaseBdev3", 00:08:20.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.192 "is_configured": false, 00:08:20.193 "data_offset": 0, 00:08:20.193 "data_size": 0 00:08:20.193 } 00:08:20.193 ] 00:08:20.193 }' 00:08:20.193 20:46:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:20.193 20:46:11 -- common/autotest_common.sh@10 -- # set +x 00:08:20.451 20:46:11 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:20.709 [2024-04-16 20:46:11.611348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.709 [2024-04-16 20:46:11.611376] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b178500 name Existed_Raid, state configuring 00:08:20.709 20:46:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:20.709 20:46:11 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:20.709 20:46:11 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.966 BaseBdev1 00:08:20.966 20:46:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:20.966 20:46:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:20.966 20:46:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:20.966 20:46:11 -- common/autotest_common.sh@889 -- # local i 00:08:20.966 20:46:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:20.966 20:46:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:20.966 20:46:11 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:21.224 20:46:12 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.482 [ 00:08:21.482 { 00:08:21.482 "name": "BaseBdev1", 00:08:21.482 "aliases": [ 00:08:21.482 "5c25a671-fc32-11ee-80f8-ef3e42bb1492" 00:08:21.482 ], 00:08:21.482 "product_name": "Malloc disk", 00:08:21.482 "block_size": 512, 00:08:21.482 "num_blocks": 65536, 00:08:21.482 "uuid": "5c25a671-fc32-11ee-80f8-ef3e42bb1492", 00:08:21.482 "assigned_rate_limits": { 00:08:21.482 "rw_ios_per_sec": 0, 00:08:21.482 "rw_mbytes_per_sec": 0, 00:08:21.482 "r_mbytes_per_sec": 0, 00:08:21.482 "w_mbytes_per_sec": 0 00:08:21.482 }, 00:08:21.482 "claimed": false, 00:08:21.482 "zoned": false, 00:08:21.482 "supported_io_types": { 00:08:21.482 "read": true, 00:08:21.482 "write": true, 00:08:21.482 "unmap": true, 00:08:21.482 "write_zeroes": true, 00:08:21.482 "flush": true, 00:08:21.482 "reset": true, 00:08:21.482 "compare": false, 00:08:21.482 "compare_and_write": false, 00:08:21.482 "abort": 
true, 00:08:21.482 "nvme_admin": false, 00:08:21.482 "nvme_io": false 00:08:21.482 }, 00:08:21.482 "memory_domains": [ 00:08:21.482 { 00:08:21.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.482 "dma_device_type": 2 00:08:21.482 } 00:08:21.482 ], 00:08:21.482 "driver_specific": {} 00:08:21.482 } 00:08:21.482 ] 00:08:21.482 20:46:12 -- common/autotest_common.sh@895 -- # return 0 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:21.482 [2024-04-16 20:46:12.536108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.482 [2024-04-16 20:46:12.536552] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.482 [2024-04-16 20:46:12.536586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.482 [2024-04-16 20:46:12.536590] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.482 [2024-04-16 20:46:12.536597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.482 20:46:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.740 20:46:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:21.741 "name": "Existed_Raid", 00:08:21.741 "uuid": "5c7d8f77-fc32-11ee-80f8-ef3e42bb1492", 00:08:21.741 "strip_size_kb": 0, 00:08:21.741 "state": "configuring", 00:08:21.741 "raid_level": "raid1", 00:08:21.741 "superblock": true, 00:08:21.741 "num_base_bdevs": 3, 00:08:21.741 "num_base_bdevs_discovered": 1, 00:08:21.741 "num_base_bdevs_operational": 3, 00:08:21.741 "base_bdevs_list": [ 00:08:21.741 { 00:08:21.741 "name": "BaseBdev1", 00:08:21.741 "uuid": "5c25a671-fc32-11ee-80f8-ef3e42bb1492", 00:08:21.741 "is_configured": true, 00:08:21.741 "data_offset": 2048, 00:08:21.741 "data_size": 63488 00:08:21.741 }, 00:08:21.741 { 00:08:21.741 "name": "BaseBdev2", 00:08:21.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.741 "is_configured": false, 00:08:21.741 "data_offset": 0, 00:08:21.741 "data_size": 0 00:08:21.741 }, 00:08:21.741 { 00:08:21.741 "name": "BaseBdev3", 00:08:21.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.741 "is_configured": false, 00:08:21.741 "data_offset": 0, 00:08:21.741 
"data_size": 0 00:08:21.741 } 00:08:21.741 ] 00:08:21.741 }' 00:08:21.741 20:46:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:21.741 20:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:21.998 20:46:13 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.256 [2024-04-16 20:46:13.176347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.256 BaseBdev2 00:08:22.256 20:46:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:22.256 20:46:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:22.256 20:46:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:22.256 20:46:13 -- common/autotest_common.sh@889 -- # local i 00:08:22.256 20:46:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:22.256 20:46:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:22.256 20:46:13 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:22.256 20:46:13 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.513 [ 00:08:22.513 { 00:08:22.513 "name": "BaseBdev2", 00:08:22.513 "aliases": [ 00:08:22.513 "5cdf3d56-fc32-11ee-80f8-ef3e42bb1492" 00:08:22.513 ], 00:08:22.513 "product_name": "Malloc disk", 00:08:22.513 "block_size": 512, 00:08:22.513 "num_blocks": 65536, 00:08:22.513 "uuid": "5cdf3d56-fc32-11ee-80f8-ef3e42bb1492", 00:08:22.513 "assigned_rate_limits": { 00:08:22.513 "rw_ios_per_sec": 0, 00:08:22.513 "rw_mbytes_per_sec": 0, 00:08:22.513 "r_mbytes_per_sec": 0, 00:08:22.513 "w_mbytes_per_sec": 0 00:08:22.513 }, 00:08:22.513 "claimed": true, 00:08:22.513 "claim_type": "exclusive_write", 00:08:22.513 "zoned": false, 00:08:22.513 "supported_io_types": { 00:08:22.513 "read": true, 00:08:22.513 "write": true, 00:08:22.514 "unmap": true, 00:08:22.514 "write_zeroes": true, 00:08:22.514 "flush": true, 00:08:22.514 "reset": true, 00:08:22.514 "compare": false, 00:08:22.514 "compare_and_write": false, 00:08:22.514 "abort": true, 00:08:22.514 "nvme_admin": false, 00:08:22.514 "nvme_io": false 00:08:22.514 }, 00:08:22.514 "memory_domains": [ 00:08:22.514 { 00:08:22.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.514 "dma_device_type": 2 00:08:22.514 } 00:08:22.514 ], 00:08:22.514 "driver_specific": {} 00:08:22.514 } 00:08:22.514 ] 00:08:22.514 20:46:13 -- common/autotest_common.sh@895 -- # return 0 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.514 20:46:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.771 20:46:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:22.771 "name": "Existed_Raid", 00:08:22.771 "uuid": "5c7d8f77-fc32-11ee-80f8-ef3e42bb1492", 00:08:22.771 "strip_size_kb": 0, 00:08:22.771 "state": "configuring", 00:08:22.771 "raid_level": "raid1", 00:08:22.771 "superblock": true, 00:08:22.771 "num_base_bdevs": 3, 00:08:22.771 "num_base_bdevs_discovered": 2, 00:08:22.771 "num_base_bdevs_operational": 3, 00:08:22.771 "base_bdevs_list": [ 00:08:22.771 { 00:08:22.771 "name": "BaseBdev1", 00:08:22.771 "uuid": "5c25a671-fc32-11ee-80f8-ef3e42bb1492", 00:08:22.771 "is_configured": true, 00:08:22.771 "data_offset": 2048, 00:08:22.771 "data_size": 63488 00:08:22.771 }, 00:08:22.771 { 00:08:22.771 "name": "BaseBdev2", 00:08:22.771 "uuid": "5cdf3d56-fc32-11ee-80f8-ef3e42bb1492", 00:08:22.771 "is_configured": true, 00:08:22.771 "data_offset": 2048, 00:08:22.771 "data_size": 63488 00:08:22.771 }, 00:08:22.771 { 00:08:22.771 "name": "BaseBdev3", 00:08:22.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.771 "is_configured": false, 00:08:22.771 "data_offset": 0, 00:08:22.771 "data_size": 0 00:08:22.771 } 00:08:22.771 ] 00:08:22.771 }' 00:08:22.771 20:46:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:22.771 20:46:13 -- common/autotest_common.sh@10 -- # set +x 00:08:23.028 20:46:13 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.287 [2024-04-16 20:46:14.168578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.287 [2024-04-16 20:46:14.168637] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b178a00 00:08:23.287 [2024-04-16 20:46:14.168641] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.287 [2024-04-16 20:46:14.168657] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b1dbec0 00:08:23.287 [2024-04-16 20:46:14.168690] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b178a00 00:08:23.287 [2024-04-16 20:46:14.168692] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b178a00 00:08:23.287 [2024-04-16 20:46:14.168706] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.287 BaseBdev3 00:08:23.287 20:46:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:23.287 20:46:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:23.287 20:46:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:23.287 20:46:14 -- common/autotest_common.sh@889 -- # local i 00:08:23.287 20:46:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:23.287 20:46:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:23.287 20:46:14 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:23.287 20:46:14 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.545 [ 00:08:23.545 { 00:08:23.545 "name": "BaseBdev3", 00:08:23.545 "aliases": [ 00:08:23.545 
"5d76a48e-fc32-11ee-80f8-ef3e42bb1492" 00:08:23.545 ], 00:08:23.545 "product_name": "Malloc disk", 00:08:23.545 "block_size": 512, 00:08:23.545 "num_blocks": 65536, 00:08:23.545 "uuid": "5d76a48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:23.545 "assigned_rate_limits": { 00:08:23.545 "rw_ios_per_sec": 0, 00:08:23.545 "rw_mbytes_per_sec": 0, 00:08:23.545 "r_mbytes_per_sec": 0, 00:08:23.545 "w_mbytes_per_sec": 0 00:08:23.545 }, 00:08:23.545 "claimed": true, 00:08:23.545 "claim_type": "exclusive_write", 00:08:23.545 "zoned": false, 00:08:23.545 "supported_io_types": { 00:08:23.545 "read": true, 00:08:23.545 "write": true, 00:08:23.545 "unmap": true, 00:08:23.545 "write_zeroes": true, 00:08:23.545 "flush": true, 00:08:23.545 "reset": true, 00:08:23.545 "compare": false, 00:08:23.545 "compare_and_write": false, 00:08:23.545 "abort": true, 00:08:23.545 "nvme_admin": false, 00:08:23.545 "nvme_io": false 00:08:23.545 }, 00:08:23.545 "memory_domains": [ 00:08:23.545 { 00:08:23.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.545 "dma_device_type": 2 00:08:23.545 } 00:08:23.545 ], 00:08:23.545 "driver_specific": {} 00:08:23.545 } 00:08:23.545 ] 00:08:23.545 20:46:14 -- common/autotest_common.sh@895 -- # return 0 00:08:23.545 20:46:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:23.545 20:46:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.546 20:46:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.808 20:46:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:23.808 "name": "Existed_Raid", 00:08:23.808 "uuid": "5c7d8f77-fc32-11ee-80f8-ef3e42bb1492", 00:08:23.808 "strip_size_kb": 0, 00:08:23.808 "state": "online", 00:08:23.808 "raid_level": "raid1", 00:08:23.808 "superblock": true, 00:08:23.808 "num_base_bdevs": 3, 00:08:23.808 "num_base_bdevs_discovered": 3, 00:08:23.808 "num_base_bdevs_operational": 3, 00:08:23.808 "base_bdevs_list": [ 00:08:23.808 { 00:08:23.808 "name": "BaseBdev1", 00:08:23.808 "uuid": "5c25a671-fc32-11ee-80f8-ef3e42bb1492", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 00:08:23.808 "data_size": 63488 00:08:23.808 }, 00:08:23.808 { 00:08:23.808 "name": "BaseBdev2", 00:08:23.808 "uuid": "5cdf3d56-fc32-11ee-80f8-ef3e42bb1492", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 00:08:23.808 "data_size": 63488 00:08:23.808 }, 00:08:23.808 { 00:08:23.808 "name": "BaseBdev3", 00:08:23.808 "uuid": "5d76a48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 00:08:23.808 "data_size": 63488 00:08:23.808 } 
00:08:23.808 ] 00:08:23.808 }' 00:08:23.808 20:46:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:23.808 20:46:14 -- common/autotest_common.sh@10 -- # set +x 00:08:24.076 20:46:14 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:24.076 [2024-04-16 20:46:15.136698] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.076 20:46:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.341 20:46:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:24.341 "name": "Existed_Raid", 00:08:24.341 "uuid": "5c7d8f77-fc32-11ee-80f8-ef3e42bb1492", 00:08:24.341 "strip_size_kb": 0, 00:08:24.341 "state": "online", 00:08:24.341 "raid_level": "raid1", 00:08:24.341 "superblock": true, 00:08:24.341 "num_base_bdevs": 3, 00:08:24.341 "num_base_bdevs_discovered": 2, 00:08:24.341 "num_base_bdevs_operational": 2, 00:08:24.341 "base_bdevs_list": [ 00:08:24.341 { 00:08:24.341 "name": null, 00:08:24.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.341 "is_configured": false, 00:08:24.341 "data_offset": 2048, 00:08:24.341 "data_size": 63488 00:08:24.341 }, 00:08:24.341 { 00:08:24.341 "name": "BaseBdev2", 00:08:24.341 "uuid": "5cdf3d56-fc32-11ee-80f8-ef3e42bb1492", 00:08:24.341 "is_configured": true, 00:08:24.341 "data_offset": 2048, 00:08:24.341 "data_size": 63488 00:08:24.341 }, 00:08:24.341 { 00:08:24.341 "name": "BaseBdev3", 00:08:24.341 "uuid": "5d76a48e-fc32-11ee-80f8-ef3e42bb1492", 00:08:24.341 "is_configured": true, 00:08:24.341 "data_offset": 2048, 00:08:24.341 "data_size": 63488 00:08:24.341 } 00:08:24.341 ] 00:08:24.341 }' 00:08:24.341 20:46:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:24.341 20:46:15 -- common/autotest_common.sh@10 -- # set +x 00:08:24.599 20:46:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:24.599 20:46:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:24.599 20:46:15 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.599 20:46:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:24.857 [2024-04-16 20:46:15.957507] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.857 20:46:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:25.116 20:46:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:25.116 20:46:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.116 20:46:16 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:25.374 [2024-04-16 20:46:16.322270] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.374 [2024-04-16 20:46:16.322294] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.374 [2024-04-16 20:46:16.322306] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.374 [2024-04-16 20:46:16.326899] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.374 [2024-04-16 20:46:16.326912] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b178a00 name Existed_Raid, state offline 00:08:25.374 20:46:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:25.374 20:46:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:25.374 20:46:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.374 20:46:16 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@287 -- # killprocess 50826 00:08:25.634 20:46:16 -- common/autotest_common.sh@926 -- # '[' -z 50826 ']' 00:08:25.634 20:46:16 -- common/autotest_common.sh@930 -- # kill -0 50826 00:08:25.634 20:46:16 -- common/autotest_common.sh@931 -- # uname 00:08:25.634 20:46:16 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:25.634 20:46:16 -- common/autotest_common.sh@934 -- # ps -c -o command 50826 00:08:25.634 20:46:16 -- common/autotest_common.sh@934 -- # tail -1 00:08:25.634 20:46:16 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:25.634 20:46:16 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:25.634 killing process with pid 50826 00:08:25.634 20:46:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50826' 00:08:25.634 20:46:16 -- common/autotest_common.sh@945 -- # kill 50826 00:08:25.634 [2024-04-16 20:46:16.549148] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.634 [2024-04-16 20:46:16.549188] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.634 20:46:16 -- common/autotest_common.sh@950 -- # wait 50826 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:25.634 00:08:25.634 real 0m8.182s 00:08:25.634 user 0m14.134s 00:08:25.634 
sys 0m1.518s 00:08:25.634 20:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.634 20:46:16 -- common/autotest_common.sh@10 -- # set +x 00:08:25.634 ************************************ 00:08:25.634 END TEST raid_state_function_test_sb 00:08:25.634 ************************************ 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:08:25.634 20:46:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.634 20:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.634 20:46:16 -- common/autotest_common.sh@10 -- # set +x 00:08:25.634 ************************************ 00:08:25.634 START TEST raid_superblock_test 00:08:25.634 ************************************ 00:08:25.634 20:46:16 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@357 -- # raid_pid=51062 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51062 /var/tmp/spdk-raid.sock 00:08:25.634 20:46:16 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:25.634 20:46:16 -- common/autotest_common.sh@819 -- # '[' -z 51062 ']' 00:08:25.634 20:46:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:25.634 20:46:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:25.634 20:46:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:25.634 20:46:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.634 20:46:16 -- common/autotest_common.sh@10 -- # set +x 00:08:25.634 [2024-04-16 20:46:16.750957] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:08:25.634 [2024-04-16 20:46:16.751312] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:26.203 EAL: TSC is not safe to use in SMP mode 00:08:26.203 EAL: TSC is not invariant 00:08:26.203 [2024-04-16 20:46:17.177897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.203 [2024-04-16 20:46:17.268995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.203 [2024-04-16 20:46:17.269404] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.203 [2024-04-16 20:46:17.269413] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.770 20:46:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.770 20:46:17 -- common/autotest_common.sh@852 -- # return 0 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:26.770 malloc1 00:08:26.770 20:46:17 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.029 [2024-04-16 20:46:18.008525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.029 [2024-04-16 20:46:18.008575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.029 [2024-04-16 20:46:18.009096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a866780 00:08:27.029 [2024-04-16 20:46:18.009119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.029 [2024-04-16 20:46:18.009805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.029 [2024-04-16 20:46:18.009835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.029 pt1 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.029 20:46:18 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:27.287 malloc2 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.287 [2024-04-16 20:46:18.376624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.287 [2024-04-16 20:46:18.376673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.287 [2024-04-16 20:46:18.376713] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a866c80 00:08:27.287 [2024-04-16 20:46:18.376719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.287 [2024-04-16 20:46:18.377182] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.287 [2024-04-16 20:46:18.377205] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.287 pt2 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.287 20:46:18 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:27.547 malloc3 00:08:27.547 20:46:18 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:27.808 [2024-04-16 20:46:18.752705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:27.808 [2024-04-16 20:46:18.752755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.808 [2024-04-16 20:46:18.752781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867180 00:08:27.808 [2024-04-16 20:46:18.752787] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.808 [2024-04-16 20:46:18.753294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.808 [2024-04-16 20:46:18.753322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:27.808 pt3 00:08:27.808 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:27.808 20:46:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:27.808 20:46:18 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:28.067 [2024-04-16 20:46:18.940741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:28.067 [2024-04-16 20:46:18.941151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:28.067 [2024-04-16 20:46:18.941171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:28.067 [2024-04-16 20:46:18.941217] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a867400 00:08:28.067 [2024-04-16 20:46:18.941221] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.067 [2024-04-16 20:46:18.941247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8c9e20 00:08:28.067 [2024-04-16 20:46:18.941298] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a867400 00:08:28.067 [2024-04-16 20:46:18.941301] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a867400 00:08:28.067 [2024-04-16 20:46:18.941319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.067 20:46:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.067 20:46:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:28.067 "name": "raid_bdev1", 00:08:28.067 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:28.067 "strip_size_kb": 0, 00:08:28.067 "state": "online", 00:08:28.067 "raid_level": "raid1", 00:08:28.067 "superblock": true, 00:08:28.067 "num_base_bdevs": 3, 00:08:28.067 "num_base_bdevs_discovered": 3, 00:08:28.067 "num_base_bdevs_operational": 3, 00:08:28.067 "base_bdevs_list": [ 00:08:28.067 { 00:08:28.067 "name": "pt1", 00:08:28.067 "uuid": "79b954d3-a07f-6b51-afb5-780abc926a0d", 00:08:28.067 "is_configured": true, 00:08:28.067 "data_offset": 2048, 00:08:28.067 "data_size": 63488 00:08:28.067 }, 00:08:28.067 { 00:08:28.067 "name": "pt2", 00:08:28.067 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:28.067 "is_configured": true, 00:08:28.067 "data_offset": 2048, 00:08:28.067 "data_size": 63488 00:08:28.067 }, 00:08:28.067 { 00:08:28.067 "name": "pt3", 00:08:28.067 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:28.067 "is_configured": true, 00:08:28.067 "data_offset": 2048, 00:08:28.067 "data_size": 63488 00:08:28.067 } 00:08:28.067 ] 00:08:28.067 }' 00:08:28.067 20:46:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:28.067 20:46:19 -- common/autotest_common.sh@10 -- # set +x 00:08:28.326 20:46:19 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:28.326 20:46:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:28.585 [2024-04-16 20:46:19.588911] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.585 20:46:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=604ed475-fc32-11ee-80f8-ef3e42bb1492 00:08:28.585 20:46:19 -- bdev/bdev_raid.sh@380 -- # '[' -z 604ed475-fc32-11ee-80f8-ef3e42bb1492 ']' 00:08:28.585 20:46:19 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:28.844 [2024-04-16 20:46:19.776915] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.844 [2024-04-16 20:46:19.776937] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.844 [2024-04-16 20:46:19.776957] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.844 [2024-04-16 20:46:19.776970] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.844 [2024-04-16 20:46:19.776973] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a867400 name raid_bdev1, state offline 00:08:28.844 20:46:19 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.844 20:46:19 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:29.103 20:46:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:29.103 20:46:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:29.103 20:46:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.103 20:46:19 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:29.103 20:46:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.103 20:46:20 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:29.361 20:46:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.361 20:46:20 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:29.619 20:46:20 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:29.619 20:46:20 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:29.619 20:46:20 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:29.619 20:46:20 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:29.619 20:46:20 -- common/autotest_common.sh@640 -- # local es=0 00:08:29.620 20:46:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:29.620 20:46:20 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.620 20:46:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:29.620 20:46:20 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.620 20:46:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:29.620 20:46:20 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.620 20:46:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:29.620 20:46:20 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.620 20:46:20 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:29.620 20:46:20 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:29.884 [2024-04-16 20:46:20.877160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:29.884 [2024-04-16 20:46:20.877620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:29.884 [2024-04-16 20:46:20.877639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:29.884 [2024-04-16 20:46:20.877652] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:29.884 [2024-04-16 20:46:20.877683] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:29.884 [2024-04-16 20:46:20.877692] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:29.884 [2024-04-16 20:46:20.877699] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.884 [2024-04-16 20:46:20.877703] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a867180 name raid_bdev1, state configuring 00:08:29.884 request: 00:08:29.884 { 00:08:29.884 "name": "raid_bdev1", 00:08:29.884 "raid_level": "raid1", 00:08:29.884 "base_bdevs": [ 00:08:29.884 "malloc1", 00:08:29.884 "malloc2", 00:08:29.884 "malloc3" 00:08:29.884 ], 00:08:29.884 "superblock": false, 00:08:29.884 "method": "bdev_raid_create", 00:08:29.884 "req_id": 1 00:08:29.884 } 00:08:29.884 Got JSON-RPC error response 00:08:29.884 response: 00:08:29.884 { 00:08:29.884 "code": -17, 00:08:29.884 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:29.884 } 00:08:29.884 20:46:20 -- common/autotest_common.sh@643 -- # es=1 00:08:29.884 20:46:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:29.884 20:46:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:29.884 20:46:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:29.884 20:46:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:29.884 20:46:20 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.152 20:46:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:30.152 20:46:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:30.152 20:46:21 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.410 [2024-04-16 20:46:21.277267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.410 [2024-04-16 20:46:21.277319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.410 [2024-04-16 20:46:21.277363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a866c80 00:08:30.410 [2024-04-16 20:46:21.277370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.410 [2024-04-16 20:46:21.277934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.410 [2024-04-16 20:46:21.277974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.410 [2024-04-16 20:46:21.277997] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:30.410 [2024-04-16 20:46:21.278009] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:30.410 pt1 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:30.410 "name": "raid_bdev1", 00:08:30.410 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:30.410 "strip_size_kb": 0, 00:08:30.410 "state": "configuring", 00:08:30.410 "raid_level": "raid1", 00:08:30.410 "superblock": true, 00:08:30.410 "num_base_bdevs": 3, 00:08:30.410 "num_base_bdevs_discovered": 1, 00:08:30.410 "num_base_bdevs_operational": 3, 00:08:30.410 "base_bdevs_list": [ 00:08:30.410 { 00:08:30.410 "name": "pt1", 00:08:30.410 "uuid": "79b954d3-a07f-6b51-afb5-780abc926a0d", 00:08:30.410 "is_configured": true, 00:08:30.410 "data_offset": 2048, 00:08:30.410 "data_size": 63488 00:08:30.410 }, 00:08:30.410 { 00:08:30.410 "name": null, 00:08:30.410 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:30.410 "is_configured": false, 00:08:30.410 "data_offset": 2048, 00:08:30.410 "data_size": 63488 00:08:30.410 }, 00:08:30.410 { 00:08:30.410 "name": null, 00:08:30.410 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:30.410 "is_configured": false, 00:08:30.410 "data_offset": 2048, 00:08:30.410 "data_size": 63488 00:08:30.410 } 00:08:30.410 ] 00:08:30.410 }' 00:08:30.410 20:46:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:30.410 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:08:30.668 20:46:21 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:08:30.668 20:46:21 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:30.928 [2024-04-16 20:46:21.925464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:30.928 [2024-04-16 20:46:21.925511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.928 [2024-04-16 20:46:21.925538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867680 00:08:30.928 [2024-04-16 20:46:21.925543] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.928 [2024-04-16 20:46:21.925633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.928 [2024-04-16 20:46:21.925639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:30.928 [2024-04-16 20:46:21.925656] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt2 00:08:30.928 [2024-04-16 20:46:21.925679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:30.928 pt2 00:08:30.928 20:46:21 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:31.186 [2024-04-16 20:46:22.109518] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.186 20:46:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:31.187 "name": "raid_bdev1", 00:08:31.187 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:31.187 "strip_size_kb": 0, 00:08:31.187 "state": "configuring", 00:08:31.187 "raid_level": "raid1", 00:08:31.187 "superblock": true, 00:08:31.187 "num_base_bdevs": 3, 00:08:31.187 "num_base_bdevs_discovered": 1, 00:08:31.187 "num_base_bdevs_operational": 3, 00:08:31.187 "base_bdevs_list": [ 00:08:31.187 { 00:08:31.187 "name": "pt1", 00:08:31.187 "uuid": "79b954d3-a07f-6b51-afb5-780abc926a0d", 00:08:31.187 "is_configured": true, 00:08:31.187 "data_offset": 2048, 00:08:31.187 "data_size": 63488 00:08:31.187 }, 00:08:31.187 { 00:08:31.187 "name": null, 00:08:31.187 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:31.187 "is_configured": false, 00:08:31.187 "data_offset": 2048, 00:08:31.187 "data_size": 63488 00:08:31.187 }, 00:08:31.187 { 00:08:31.187 "name": null, 00:08:31.187 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:31.187 "is_configured": false, 00:08:31.187 "data_offset": 2048, 00:08:31.187 "data_size": 63488 00:08:31.187 } 00:08:31.187 ] 00:08:31.187 }' 00:08:31.187 20:46:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:31.187 20:46:22 -- common/autotest_common.sh@10 -- # set +x 00:08:31.753 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:31.753 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:31.753 20:46:22 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.753 [2024-04-16 20:46:22.741679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.753 [2024-04-16 20:46:22.741726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.753 [2024-04-16 20:46:22.741751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867680 00:08:31.754 [2024-04-16 20:46:22.741761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:31.754 [2024-04-16 20:46:22.741852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.754 [2024-04-16 20:46:22.741863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.754 [2024-04-16 20:46:22.741881] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:31.754 [2024-04-16 20:46:22.741886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.754 pt2 00:08:31.754 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:31.754 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:31.754 20:46:22 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.012 [2024-04-16 20:46:22.901718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.012 [2024-04-16 20:46:22.901764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.012 [2024-04-16 20:46:22.901805] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867400 00:08:32.012 [2024-04-16 20:46:22.901811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.012 [2024-04-16 20:46:22.901892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.012 [2024-04-16 20:46:22.901909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.012 [2024-04-16 20:46:22.901925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:32.012 [2024-04-16 20:46:22.901931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.012 [2024-04-16 20:46:22.901953] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a866780 00:08:32.012 [2024-04-16 20:46:22.901955] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.012 [2024-04-16 20:46:22.901970] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8c9e20 00:08:32.012 [2024-04-16 20:46:22.902007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a866780 00:08:32.012 [2024-04-16 20:46:22.902009] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a866780 00:08:32.012 [2024-04-16 20:46:22.902024] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.012 pt3 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@125 -- # local tmp 
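
By this point the trace has rebuilt pt2 and pt3 on top of their malloc bdevs and the raid1 set has re-assembled from the superblocks written at creation time. The malloc -> passthru -> raid pipeline that keeps repeating in this test reduces to RPC calls like these (a reconstruction of the @361-@375 loop above; the $rpc shorthand is an editorial convenience, not part of bdev_raid.sh):

rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  $rpc bdev_malloc_create 32 512 -b "malloc$i"          # 32 MB, 512 B blocks
  $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
       -u "00000000-0000-0000-0000-00000000000$i"
done
# -s writes a raid superblock onto each base bdev; it is that on-disk
# metadata that lets raid_bdev1 re-assemble when a pt bdev comes back,
# as the "raid superblock found on bdev pt2/pt3" lines above show.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
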
00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.012 20:46:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.012 20:46:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:32.012 "name": "raid_bdev1", 00:08:32.012 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:32.012 "strip_size_kb": 0, 00:08:32.012 "state": "online", 00:08:32.012 "raid_level": "raid1", 00:08:32.012 "superblock": true, 00:08:32.012 "num_base_bdevs": 3, 00:08:32.012 "num_base_bdevs_discovered": 3, 00:08:32.012 "num_base_bdevs_operational": 3, 00:08:32.012 "base_bdevs_list": [ 00:08:32.012 { 00:08:32.012 "name": "pt1", 00:08:32.012 "uuid": "79b954d3-a07f-6b51-afb5-780abc926a0d", 00:08:32.012 "is_configured": true, 00:08:32.012 "data_offset": 2048, 00:08:32.012 "data_size": 63488 00:08:32.012 }, 00:08:32.012 { 00:08:32.012 "name": "pt2", 00:08:32.012 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:32.012 "is_configured": true, 00:08:32.012 "data_offset": 2048, 00:08:32.013 "data_size": 63488 00:08:32.013 }, 00:08:32.013 { 00:08:32.013 "name": "pt3", 00:08:32.013 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:32.013 "is_configured": true, 00:08:32.013 "data_offset": 2048, 00:08:32.013 "data_size": 63488 00:08:32.013 } 00:08:32.013 ] 00:08:32.013 }' 00:08:32.013 20:46:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:32.013 20:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:32.271 20:46:23 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:32.271 20:46:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:32.529 [2024-04-16 20:46:23.553920] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.529 20:46:23 -- bdev/bdev_raid.sh@430 -- # '[' 604ed475-fc32-11ee-80f8-ef3e42bb1492 '!=' 604ed475-fc32-11ee-80f8-ef3e42bb1492 ']' 00:08:32.529 20:46:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:08:32.529 20:46:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:32.529 20:46:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:32.529 20:46:23 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:32.787 [2024-04-16 20:46:23.741946] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.787 20:46:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.045 20:46:23 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:33.045 "name": "raid_bdev1", 00:08:33.045 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:33.045 "strip_size_kb": 0, 00:08:33.045 "state": "online", 00:08:33.045 "raid_level": "raid1", 00:08:33.045 "superblock": true, 00:08:33.045 "num_base_bdevs": 3, 00:08:33.045 "num_base_bdevs_discovered": 2, 00:08:33.045 "num_base_bdevs_operational": 2, 00:08:33.045 "base_bdevs_list": [ 00:08:33.045 { 00:08:33.045 "name": null, 00:08:33.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.045 "is_configured": false, 00:08:33.045 "data_offset": 2048, 00:08:33.045 "data_size": 63488 00:08:33.045 }, 00:08:33.045 { 00:08:33.045 "name": "pt2", 00:08:33.045 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:33.045 "is_configured": true, 00:08:33.045 "data_offset": 2048, 00:08:33.045 "data_size": 63488 00:08:33.045 }, 00:08:33.045 { 00:08:33.045 "name": "pt3", 00:08:33.045 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:33.045 "is_configured": true, 00:08:33.045 "data_offset": 2048, 00:08:33.045 "data_size": 63488 00:08:33.045 } 00:08:33.045 ] 00:08:33.045 }' 00:08:33.045 20:46:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:33.045 20:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.302 20:46:24 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:33.302 [2024-04-16 20:46:24.374083] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.302 [2024-04-16 20:46:24.374104] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.302 [2024-04-16 20:46:24.374123] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.302 [2024-04-16 20:46:24.374152] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.302 [2024-04-16 20:46:24.374156] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a866780 name raid_bdev1, state offline 00:08:33.302 20:46:24 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:33.302 20:46:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:08:33.560 20:46:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:08:33.560 20:46:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:08:33.560 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:08:33.560 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:33.560 20:46:24 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:33.818 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:33.818 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:33.818 20:46:24 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:34.077 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:34.077 20:46:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:34.077 20:46:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:08:34.077 20:46:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:34.077 20:46:24 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.077 [2024-04-16 20:46:25.094269] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.077 [2024-04-16 20:46:25.094334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.077 [2024-04-16 20:46:25.094361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867400 00:08:34.077 [2024-04-16 20:46:25.094366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.077 [2024-04-16 20:46:25.094863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.077 [2024-04-16 20:46:25.094886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.077 [2024-04-16 20:46:25.094905] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:34.077 [2024-04-16 20:46:25.094914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.077 pt2 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.077 20:46:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.335 20:46:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.335 "name": "raid_bdev1", 00:08:34.335 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:34.335 "strip_size_kb": 0, 00:08:34.335 "state": "configuring", 00:08:34.335 "raid_level": "raid1", 00:08:34.335 "superblock": true, 00:08:34.335 "num_base_bdevs": 3, 00:08:34.335 "num_base_bdevs_discovered": 1, 00:08:34.335 "num_base_bdevs_operational": 2, 00:08:34.335 "base_bdevs_list": [ 00:08:34.335 { 00:08:34.335 "name": null, 00:08:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.336 "is_configured": false, 00:08:34.336 "data_offset": 2048, 00:08:34.336 "data_size": 63488 00:08:34.336 }, 00:08:34.336 { 00:08:34.336 "name": "pt2", 00:08:34.336 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:34.336 "is_configured": true, 00:08:34.336 "data_offset": 2048, 00:08:34.336 "data_size": 63488 00:08:34.336 }, 00:08:34.336 { 00:08:34.336 "name": null, 00:08:34.336 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:34.336 "is_configured": false, 00:08:34.336 "data_offset": 2048, 00:08:34.336 "data_size": 63488 00:08:34.336 } 00:08:34.336 ] 00:08:34.336 }' 00:08:34.336 20:46:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:34.336 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:08:34.593 20:46:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:08:34.593 20:46:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:34.593 20:46:25 -- bdev/bdev_raid.sh@462 -- # i=2 00:08:34.593 20:46:25 -- bdev/bdev_raid.sh@463 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:34.593 [2024-04-16 20:46:25.714444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:34.593 [2024-04-16 20:46:25.714492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.593 [2024-04-16 20:46:25.714517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a866780 00:08:34.593 [2024-04-16 20:46:25.714522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.593 [2024-04-16 20:46:25.714609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.593 [2024-04-16 20:46:25.714616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:34.594 [2024-04-16 20:46:25.714639] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:34.594 [2024-04-16 20:46:25.714645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:34.594 [2024-04-16 20:46:25.714665] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a867180 00:08:34.594 [2024-04-16 20:46:25.714667] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.594 [2024-04-16 20:46:25.714682] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8c9e20 00:08:34.594 [2024-04-16 20:46:25.714726] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a867180 00:08:34.594 [2024-04-16 20:46:25.714736] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a867180 00:08:34.594 [2024-04-16 20:46:25.714751] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.852 pt3 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.852 "name": "raid_bdev1", 00:08:34.852 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:34.852 "strip_size_kb": 0, 00:08:34.852 "state": "online", 00:08:34.852 "raid_level": "raid1", 00:08:34.852 "superblock": true, 00:08:34.852 "num_base_bdevs": 3, 00:08:34.852 "num_base_bdevs_discovered": 2, 00:08:34.852 "num_base_bdevs_operational": 2, 00:08:34.852 "base_bdevs_list": [ 00:08:34.852 { 00:08:34.852 "name": null, 00:08:34.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.852 "is_configured": false, 
00:08:34.852 "data_offset": 2048, 00:08:34.852 "data_size": 63488 00:08:34.852 }, 00:08:34.852 { 00:08:34.852 "name": "pt2", 00:08:34.852 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:34.852 "is_configured": true, 00:08:34.852 "data_offset": 2048, 00:08:34.852 "data_size": 63488 00:08:34.852 }, 00:08:34.852 { 00:08:34.852 "name": "pt3", 00:08:34.852 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:34.852 "is_configured": true, 00:08:34.852 "data_offset": 2048, 00:08:34.852 "data_size": 63488 00:08:34.852 } 00:08:34.852 ] 00:08:34.852 }' 00:08:34.852 20:46:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:34.852 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:08:35.110 20:46:26 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:08:35.110 20:46:26 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:35.369 [2024-04-16 20:46:26.366608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.369 [2024-04-16 20:46:26.366631] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.369 [2024-04-16 20:46:26.366652] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.369 [2024-04-16 20:46:26.366663] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.369 [2024-04-16 20:46:26.366667] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a867180 name raid_bdev1, state offline 00:08:35.369 20:46:26 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.369 20:46:26 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:08:35.627 20:46:26 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:08:35.627 20:46:26 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:08:35.627 20:46:26 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.627 [2024-04-16 20:46:26.742702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.627 [2024-04-16 20:46:26.742747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.627 [2024-04-16 20:46:26.742774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867680 00:08:35.627 [2024-04-16 20:46:26.742780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.627 [2024-04-16 20:46:26.743281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.627 [2024-04-16 20:46:26.743306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.627 [2024-04-16 20:46:26.743329] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:35.627 [2024-04-16 20:46:26.743338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.627 pt1 00:08:35.885 20:46:26 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:35.885 20:46:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:35.885 20:46:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
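
Every verify_raid_bdev_state block in this trace, including the 'configuring raid1 0 3' check being set up here, boils down to a single bdev_raid_get_bdevs query filtered with jq, followed by field comparisons. The helper's real body in bdev_raid.sh is longer, so treat the following as an assumption-laden equivalent (the jq filter is verbatim from the trace; the field names come from the JSON blobs above):

raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
                 jq -r '.[] | select(.name == "raid_bdev1")')
# For 'configuring raid1 0 3' the interesting fields would be:
[ "$(jq -r '.state' <<< "$raid_bdev_info")" = configuring ]
[ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = raid1 ]
[ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" = 3 ]
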
00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:35.886 "name": "raid_bdev1", 00:08:35.886 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:35.886 "strip_size_kb": 0, 00:08:35.886 "state": "configuring", 00:08:35.886 "raid_level": "raid1", 00:08:35.886 "superblock": true, 00:08:35.886 "num_base_bdevs": 3, 00:08:35.886 "num_base_bdevs_discovered": 1, 00:08:35.886 "num_base_bdevs_operational": 3, 00:08:35.886 "base_bdevs_list": [ 00:08:35.886 { 00:08:35.886 "name": "pt1", 00:08:35.886 "uuid": "79b954d3-a07f-6b51-afb5-780abc926a0d", 00:08:35.886 "is_configured": true, 00:08:35.886 "data_offset": 2048, 00:08:35.886 "data_size": 63488 00:08:35.886 }, 00:08:35.886 { 00:08:35.886 "name": null, 00:08:35.886 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:35.886 "is_configured": false, 00:08:35.886 "data_offset": 2048, 00:08:35.886 "data_size": 63488 00:08:35.886 }, 00:08:35.886 { 00:08:35.886 "name": null, 00:08:35.886 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:35.886 "is_configured": false, 00:08:35.886 "data_offset": 2048, 00:08:35.886 "data_size": 63488 00:08:35.886 } 00:08:35.886 ] 00:08:35.886 }' 00:08:35.886 20:46:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:35.886 20:46:26 -- common/autotest_common.sh@10 -- # set +x 00:08:36.144 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:08:36.144 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:36.144 20:46:27 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:36.402 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:08:36.402 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:36.403 20:46:27 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:36.661 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:08:36.661 20:46:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:08:36.661 20:46:27 -- bdev/bdev_raid.sh@489 -- # i=2 00:08:36.661 20:46:27 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:36.661 [2024-04-16 20:46:27.782951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:36.661 [2024-04-16 20:46:27.782998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.661 [2024-04-16 20:46:27.783039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a866780 00:08:36.661 [2024-04-16 20:46:27.783054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.661 [2024-04-16 20:46:27.783146] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.661 [2024-04-16 20:46:27.783159] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:36.661 [2024-04-16 20:46:27.783178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:36.661 [2024-04-16 20:46:27.783182] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:36.661 [2024-04-16 20:46:27.783185] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.661 [2024-04-16 20:46:27.783192] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a866c80 name raid_bdev1, state configuring 00:08:36.661 [2024-04-16 20:46:27.783202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:36.919 pt3 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:36.919 "name": "raid_bdev1", 00:08:36.919 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492", 00:08:36.919 "strip_size_kb": 0, 00:08:36.919 "state": "configuring", 00:08:36.919 "raid_level": "raid1", 00:08:36.919 "superblock": true, 00:08:36.919 "num_base_bdevs": 3, 00:08:36.919 "num_base_bdevs_discovered": 1, 00:08:36.919 "num_base_bdevs_operational": 2, 00:08:36.919 "base_bdevs_list": [ 00:08:36.919 { 00:08:36.919 "name": null, 00:08:36.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.919 "is_configured": false, 00:08:36.919 "data_offset": 2048, 00:08:36.919 "data_size": 63488 00:08:36.919 }, 00:08:36.919 { 00:08:36.919 "name": null, 00:08:36.919 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b", 00:08:36.919 "is_configured": false, 00:08:36.919 "data_offset": 2048, 00:08:36.919 "data_size": 63488 00:08:36.919 }, 00:08:36.919 { 00:08:36.919 "name": "pt3", 00:08:36.919 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae", 00:08:36.919 "is_configured": true, 00:08:36.919 "data_offset": 2048, 00:08:36.919 "data_size": 63488 00:08:36.919 } 00:08:36.919 ] 00:08:36.919 }' 00:08:36.919 20:46:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:36.919 20:46:27 -- common/autotest_common.sh@10 -- # set +x 00:08:37.177 20:46:28 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:08:37.177 20:46:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:08:37.177 20:46:28 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.435 [2024-04-16 20:46:28.431093] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:37.435 [2024-04-16 20:46:28.431139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:37.435 [2024-04-16 20:46:28.431165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a867400
00:08:37.435 [2024-04-16 20:46:28.431171] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:37.435 [2024-04-16 20:46:28.431262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:37.435 [2024-04-16 20:46:28.431284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:37.435 [2024-04-16 20:46:28.431303] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:08:37.435 [2024-04-16 20:46:28.431310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:37.435 [2024-04-16 20:46:28.431331] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a866c80
00:08:37.435 [2024-04-16 20:46:28.431334] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:37.435 [2024-04-16 20:46:28.431349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8c9e20
00:08:37.435 [2024-04-16 20:46:28.431378] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a866c80
00:08:37.435 [2024-04-16 20:46:28.431381] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a866c80
00:08:37.435 [2024-04-16 20:46:28.431396] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:37.435 pt2
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:08:37.435 20:46:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:37.436 20:46:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:37.436 20:46:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:37.436 20:46:28 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:37.436 20:46:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:37.436 20:46:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:37.693 20:46:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:37.693 "name": "raid_bdev1",
00:08:37.693 "uuid": "604ed475-fc32-11ee-80f8-ef3e42bb1492",
00:08:37.693 "strip_size_kb": 0,
00:08:37.693 "state": "online",
00:08:37.693 "raid_level": "raid1",
00:08:37.693 "superblock": true,
00:08:37.693 "num_base_bdevs": 3,
00:08:37.693 "num_base_bdevs_discovered": 2,
00:08:37.693 "num_base_bdevs_operational": 2,
00:08:37.693 "base_bdevs_list": [
00:08:37.693 {
00:08:37.693 "name": null,
00:08:37.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.693 "is_configured": false,
00:08:37.693 "data_offset": 2048,
00:08:37.693 "data_size": 63488 },
00:08:37.693 {
00:08:37.693 "name": "pt2",
00:08:37.693 "uuid": "9d108677-2e3c-dc5e-a5c5-9e70d2c18c3b",
00:08:37.693 "is_configured": true,
00:08:37.693 "data_offset": 2048,
00:08:37.693 "data_size": 63488
00:08:37.693 },
00:08:37.693 {
00:08:37.693 "name": "pt3",
00:08:37.693 "uuid": "e9060c88-8e51-6c54-863c-c6aae9b728ae",
00:08:37.693 "is_configured": true,
00:08:37.693 "data_offset": 2048,
00:08:37.693 "data_size": 63488
00:08:37.693 }
00:08:37.693 ]
00:08:37.693 }'
00:08:37.693 20:46:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:37.693 20:46:28 -- common/autotest_common.sh@10 -- # set +x
00:08:37.952 20:46:28 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:08:37.952 20:46:28 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:08:37.952 [2024-04-16 20:46:29.075291] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@506 -- # '[' 604ed475-fc32-11ee-80f8-ef3e42bb1492 '!=' 604ed475-fc32-11ee-80f8-ef3e42bb1492 ']'
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@511 -- # killprocess 51062
00:08:38.225 20:46:29 -- common/autotest_common.sh@926 -- # '[' -z 51062 ']'
00:08:38.225 20:46:29 -- common/autotest_common.sh@930 -- # kill -0 51062
00:08:38.225 20:46:29 -- common/autotest_common.sh@931 -- # uname
00:08:38.225 20:46:29 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']'
00:08:38.225 20:46:29 -- common/autotest_common.sh@934 -- # ps -c -o command 51062
00:08:38.225 20:46:29 -- common/autotest_common.sh@934 -- # tail -1
00:08:38.225 20:46:29 -- common/autotest_common.sh@934 -- # process_name=bdev_svc
00:08:38.225 20:46:29 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']'
killing process with pid 51062
00:08:38.225 20:46:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51062'
00:08:38.225 20:46:29 -- common/autotest_common.sh@945 -- # kill 51062
00:08:38.225 [2024-04-16 20:46:29.107325] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:38.225 [2024-04-16 20:46:29.107358] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:38.225 [2024-04-16 20:46:29.107371] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:38.225 [2024-04-16 20:46:29.107374] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a866c80 name raid_bdev1, state offline
00:08:38.225 20:46:29 -- common/autotest_common.sh@950 -- # wait 51062
00:08:38.225 [2024-04-16 20:46:29.121309] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@513 -- # return 0
00:08:38.225
00:08:38.225 real 0m12.521s
00:08:38.225 user 0m22.350s
00:08:38.225 sys 0m2.024s
00:08:38.225 20:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:38.225 20:46:29 -- common/autotest_common.sh@10 -- # set +x
00:08:38.225 ************************************
00:08:38.225 END TEST raid_superblock_test
00:08:38.225 ************************************
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:08:38.225 20:46:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:08:38.225 20:46:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:38.225 20:46:29 -- common/autotest_common.sh@10 -- # set +x
00:08:38.225 ************************************
00:08:38.225 START TEST raid_state_function_test
00:08:38.225 ************************************
00:08:38.225 20:46:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=51444
00:08:38.225 Process raid pid: 51444
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51444'
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51444 /var/tmp/spdk-raid.sock
00:08:38.225 20:46:29 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:08:38.225 20:46:29 -- common/autotest_common.sh@819 -- # '[' -z 51444 ']'
00:08:38.225 20:46:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:08:38.225 20:46:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:38.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:08:38.225 20:46:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:08:38.225 20:46:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:38.225 20:46:29 -- common/autotest_common.sh@10 -- # set +x
00:08:38.225 [2024-04-16 20:46:29.317430] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:08:38.225 [2024-04-16 20:46:29.317673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:08:38.225 EAL: TSC is not safe to use in SMP mode
00:08:38.225 EAL: TSC is not invariant
00:08:38.225 [2024-04-16 20:46:29.749202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.225 [2024-04-16 20:46:29.839230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.225 [2024-04-16 20:46:29.839652] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:38.225 [2024-04-16 20:46:29.839661] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.370 20:46:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:39.370 20:46:30 -- common/autotest_common.sh@852 -- # return 0
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:39.370 [2024-04-16 20:46:30.370728] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:39.370 [2024-04-16 20:46:30.370774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:39.370 [2024-04-16 20:46:30.370778] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:39.370 [2024-04-16 20:46:30.370784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:39.370 [2024-04-16 20:46:30.370787] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:39.370 [2024-04-16 20:46:30.370792] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:39.370 [2024-04-16 20:46:30.370794] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:39.370 [2024-04-16 20:46:30.370816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:39.370 20:46:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.629 20:46:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:39.629 "name": "Existed_Raid",
00:08:39.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.629 "strip_size_kb": 64,
00:08:39.629 "state": "configuring",
00:08:39.629 "raid_level": "raid0",
00:08:39.629 "superblock": false,
00:08:39.629 "num_base_bdevs": 4,
00:08:39.629 "num_base_bdevs_discovered": 0,
00:08:39.629 "num_base_bdevs_operational": 4,
00:08:39.629 "base_bdevs_list": [
00:08:39.629 {
00:08:39.629 "name": "BaseBdev1",
00:08:39.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.629 "is_configured": false,
00:08:39.629 "data_offset": 0,
00:08:39.629 "data_size": 0
00:08:39.629 },
00:08:39.629 {
00:08:39.629 "name": "BaseBdev2",
00:08:39.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.629 "is_configured": false,
00:08:39.629 "data_offset": 0,
00:08:39.629 "data_size": 0
00:08:39.629 },
00:08:39.629 {
00:08:39.629 "name": "BaseBdev3",
00:08:39.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.629 "is_configured": false,
00:08:39.629 "data_offset": 0,
00:08:39.629 "data_size": 0
00:08:39.629 },
00:08:39.629 {
00:08:39.629 "name": "BaseBdev4",
00:08:39.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.629 "is_configured": false,
00:08:39.629 "data_offset": 0,
00:08:39.629 "data_size": 0
00:08:39.629 }
00:08:39.629 ]
00:08:39.629 }'
00:08:39.629 20:46:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:39.629 20:46:30 -- common/autotest_common.sh@10 -- # set +x
00:08:39.888 20:46:30 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:08:39.888 [2024-04-16 20:46:31.002853] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:39.888 [2024-04-16 20:46:31.002873] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82db5e500 name Existed_Raid, state configuring
00:08:40.148 20:46:31 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:40.148 [2024-04-16 20:46:31.182901] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:40.148 [2024-04-16 20:46:31.182935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:40.148 [2024-04-16 20:46:31.182939] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:40.148 [2024-04-16 20:46:31.182945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:40.148 [2024-04-16 20:46:31.182947] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:40.148 [2024-04-16 20:46:31.182953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:40.148 [2024-04-16 20:46:31.182955] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:40.148 [2024-04-16 20:46:31.182960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:40.148 20:46:31 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:08:40.407 [2024-04-16 20:46:31.367711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:40.407 BaseBdev1
00:08:40.407 20:46:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:08:40.407 20:46:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:08:40.407 20:46:31 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:40.407 20:46:31 -- common/autotest_common.sh@889 -- # local i
00:08:40.407 20:46:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:40.407 20:46:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:40.407 20:46:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:40.667 20:46:31 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:40.667 [
00:08:40.667 {
00:08:40.667 "name": "BaseBdev1",
00:08:40.667 "aliases": [
00:08:40.667 "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492"
00:08:40.667 ],
00:08:40.667 "product_name": "Malloc disk",
00:08:40.667 "block_size": 512,
00:08:40.667 "num_blocks": 65536,
00:08:40.667 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:40.667 "assigned_rate_limits": {
00:08:40.667 "rw_ios_per_sec": 0,
00:08:40.667 "rw_mbytes_per_sec": 0,
00:08:40.667 "r_mbytes_per_sec": 0,
00:08:40.667 "w_mbytes_per_sec": 0
00:08:40.667 },
00:08:40.667 "claimed": true,
00:08:40.667 "claim_type": "exclusive_write",
00:08:40.667 "zoned": false,
00:08:40.667 "supported_io_types": {
00:08:40.667 "read": true,
00:08:40.667 "write": true,
00:08:40.667 "unmap": true,
00:08:40.667 "write_zeroes": true,
00:08:40.667 "flush": true,
00:08:40.667 "reset": true,
00:08:40.667 "compare": false,
00:08:40.667 "compare_and_write": false,
00:08:40.667 "abort": true,
00:08:40.667 "nvme_admin": false,
00:08:40.667 "nvme_io": false
00:08:40.667 },
00:08:40.667 "memory_domains": [
00:08:40.667 {
00:08:40.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.667 "dma_device_type": 2
00:08:40.667 }
00:08:40.667 ],
00:08:40.667 "driver_specific": {}
00:08:40.667 }
00:08:40.667 ]
00:08:40.667 20:46:31 -- common/autotest_common.sh@895 -- # return 0
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:40.667 20:46:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.926 20:46:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:40.926 "name": "Existed_Raid",
00:08:40.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.926 "strip_size_kb": 64,
00:08:40.926 "state": "configuring",
00:08:40.926 "raid_level": "raid0",
00:08:40.926 "superblock": false,
00:08:40.926 "num_base_bdevs": 4,
00:08:40.926 "num_base_bdevs_discovered": 1,
00:08:40.926 "num_base_bdevs_operational": 4,
00:08:40.926 "base_bdevs_list": [
00:08:40.926 {
00:08:40.926 "name": "BaseBdev1",
00:08:40.926 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:40.926 "is_configured": true,
00:08:40.926 "data_offset": 0,
00:08:40.926 "data_size": 65536
00:08:40.926 },
00:08:40.926 {
00:08:40.926 "name": "BaseBdev2",
00:08:40.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.926 "is_configured": false,
00:08:40.926 "data_offset": 0,
00:08:40.926 "data_size": 0
00:08:40.926 },
00:08:40.926 {
00:08:40.926 "name": "BaseBdev3",
00:08:40.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.926 "is_configured": false,
00:08:40.926 "data_offset": 0,
00:08:40.926 "data_size": 0
00:08:40.926 },
00:08:40.926 {
00:08:40.926 "name": "BaseBdev4",
00:08:40.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.926 "is_configured": false,
00:08:40.926 "data_offset": 0,
00:08:40.926 "data_size": 0
00:08:40.926 }
00:08:40.926 ]
00:08:40.926 }'
00:08:40.926 20:46:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:40.926 20:46:31 -- common/autotest_common.sh@10 -- # set +x
00:08:41.185 20:46:32 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:08:41.444 [2024-04-16 20:46:32.347138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:41.444 [2024-04-16 20:46:32.347165] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82db5e500 name Existed_Raid, state configuring
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:41.444 [2024-04-16 20:46:32.531187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:41.444 [2024-04-16 20:46:32.531828] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:41.444 [2024-04-16 20:46:32.531865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:41.444 [2024-04-16 20:46:32.531869] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:41.444 [2024-04-16 20:46:32.531875] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:41.444 [2024-04-16 20:46:32.531878] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:41.444 [2024-04-16 20:46:32.531883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:41.444 20:46:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:41.703 20:46:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:41.703 "name": "Existed_Raid",
00:08:41.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.703 "strip_size_kb": 64,
00:08:41.703 "state": "configuring",
00:08:41.703 "raid_level": "raid0",
00:08:41.703 "superblock": false,
00:08:41.703 "num_base_bdevs": 4,
00:08:41.703 "num_base_bdevs_discovered": 1,
00:08:41.703 "num_base_bdevs_operational": 4,
00:08:41.703 "base_bdevs_list": [
00:08:41.704 {
00:08:41.704 "name": "BaseBdev1",
00:08:41.704 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:41.704 "is_configured": true,
00:08:41.704 "data_offset": 0,
00:08:41.704 "data_size": 65536
00:08:41.704 },
00:08:41.704 {
00:08:41.704 "name": "BaseBdev2",
00:08:41.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.704 "is_configured": false,
00:08:41.704 "data_offset": 0,
00:08:41.704 "data_size": 0
00:08:41.704 },
00:08:41.704 {
00:08:41.704 "name": "BaseBdev3",
00:08:41.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.704 "is_configured": false,
00:08:41.704 "data_offset": 0,
00:08:41.704 "data_size": 0
00:08:41.704 },
00:08:41.704 {
00:08:41.704 "name": "BaseBdev4",
00:08:41.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.704 "is_configured": false,
00:08:41.704 "data_offset": 0,
00:08:41.704 "data_size": 0
00:08:41.704 }
00:08:41.704 ]
00:08:41.704 }'
00:08:41.704 20:46:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:41.704 20:46:32 -- common/autotest_common.sh@10 -- # set +x
00:08:41.963 20:46:33 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:08:42.222 [2024-04-16 20:46:33.171430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:42.222 BaseBdev2
00:08:42.222 20:46:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:08:42.222 20:46:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:08:42.222 20:46:33 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:42.222 20:46:33 -- common/autotest_common.sh@889 -- # local i
00:08:42.222 20:46:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:42.222 20:46:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:42.222 20:46:33 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:42.222 20:46:33 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:42.481 [
00:08:42.481 {
00:08:42.481 "name": "BaseBdev2",
00:08:42.481 "aliases": [
00:08:42.481 "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492"
00:08:42.481 ],
00:08:42.481 "product_name": "Malloc disk",
00:08:42.481 "block_size": 512,
00:08:42.481 "num_blocks": 65536,
00:08:42.481 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:42.481 "assigned_rate_limits": {
00:08:42.481 "rw_ios_per_sec": 0,
00:08:42.481 "rw_mbytes_per_sec": 0,
00:08:42.481 "r_mbytes_per_sec": 0,
00:08:42.481 "w_mbytes_per_sec": 0
00:08:42.481 },
00:08:42.481 "claimed": true,
00:08:42.481 "claim_type": "exclusive_write",
00:08:42.481 "zoned": false,
00:08:42.481 "supported_io_types": {
00:08:42.481 "read": true,
00:08:42.481 "write": true,
00:08:42.481 "unmap": true,
00:08:42.481 "write_zeroes": true,
00:08:42.481 "flush": true,
00:08:42.481 "reset": true,
00:08:42.481 "compare": false,
00:08:42.481 "compare_and_write": false,
00:08:42.481 "abort": true,
00:08:42.481 "nvme_admin": false,
00:08:42.481 "nvme_io": false
00:08:42.481 },
00:08:42.481 "memory_domains": [
00:08:42.481 {
00:08:42.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.481 "dma_device_type": 2
00:08:42.481 }
00:08:42.481 ],
00:08:42.481 "driver_specific": {}
00:08:42.481 }
00:08:42.481 ]
00:08:42.481 20:46:33 -- common/autotest_common.sh@895 -- # return 0
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:42.481 20:46:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:42.741 20:46:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:42.741 "name": "Existed_Raid",
00:08:42.741 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.741 "strip_size_kb": 64,
00:08:42.741 "state": "configuring",
00:08:42.741 "raid_level": "raid0",
00:08:42.741 "superblock": false,
00:08:42.741 "num_base_bdevs": 4,
00:08:42.741 "num_base_bdevs_discovered": 2,
00:08:42.741 "num_base_bdevs_operational": 4,
00:08:42.741 "base_bdevs_list": [
00:08:42.741 {
00:08:42.741 "name": "BaseBdev1",
00:08:42.741 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:42.741 "is_configured": true,
00:08:42.741 "data_offset": 0,
00:08:42.741 "data_size": 65536
00:08:42.741 },
00:08:42.741 {
00:08:42.741 "name": "BaseBdev2",
00:08:42.741 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:42.741 "is_configured": true,
00:08:42.741 "data_offset": 0,
00:08:42.741 "data_size": 65536
00:08:42.741 },
00:08:42.741 {
00:08:42.741 "name": "BaseBdev3",
00:08:42.741 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.741 "is_configured": false,
00:08:42.741 "data_offset": 0,
00:08:42.741 "data_size": 0
00:08:42.741 },
00:08:42.741 {
00:08:42.741 "name": "BaseBdev4",
00:08:42.741 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.741 "is_configured": false,
00:08:42.741 "data_offset": 0,
00:08:42.741 "data_size": 0
00:08:42.741 }
00:08:42.741 ]
00:08:42.741 }'
00:08:42.741 20:46:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:42.741 20:46:33 -- common/autotest_common.sh@10 -- # set +x
00:08:43.000 20:46:33 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:08:43.270 [2024-04-16 20:46:34.131614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:43.270 BaseBdev3
00:08:43.270 20:46:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:08:43.270 20:46:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:08:43.270 20:46:34 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:43.270 20:46:34 -- common/autotest_common.sh@889 -- # local i
00:08:43.271 20:46:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:43.271 20:46:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:43.271 20:46:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:43.271 20:46:34 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:43.528 [
00:08:43.528 {
00:08:43.528 "name": "BaseBdev3",
00:08:43.528 "aliases": [
00:08:43.528 "695cc2b8-fc32-11ee-80f8-ef3e42bb1492"
00:08:43.528 ],
00:08:43.528 "product_name": "Malloc disk",
00:08:43.528 "block_size": 512,
00:08:43.528 "num_blocks": 65536,
00:08:43.528 "uuid": "695cc2b8-fc32-11ee-80f8-ef3e42bb1492",
00:08:43.528 "assigned_rate_limits": {
00:08:43.528 "rw_ios_per_sec": 0,
00:08:43.528 "rw_mbytes_per_sec": 0,
00:08:43.528 "r_mbytes_per_sec": 0,
00:08:43.528 "w_mbytes_per_sec": 0
00:08:43.528 },
00:08:43.528 "claimed": true,
00:08:43.528 "claim_type": "exclusive_write",
00:08:43.528 "zoned": false,
00:08:43.528 "supported_io_types": {
00:08:43.528 "read": true,
00:08:43.528 "write": true,
00:08:43.528 "unmap": true,
00:08:43.528 "write_zeroes": true,
00:08:43.528 "flush": true,
00:08:43.528 "reset": true,
00:08:43.528 "compare": false,
00:08:43.528 "compare_and_write": false,
00:08:43.528 "abort": true,
00:08:43.528 "nvme_admin": false,
00:08:43.528 "nvme_io": false
00:08:43.528 },
00:08:43.528 "memory_domains": [
00:08:43.528 {
00:08:43.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.528 "dma_device_type": 2
00:08:43.528 }
00:08:43.528 ],
00:08:43.528 "driver_specific": {}
00:08:43.528 }
00:08:43.528 ]
00:08:43.528 20:46:34 -- common/autotest_common.sh@895 -- # return 0
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:43.528 20:46:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:43.529 20:46:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.787 20:46:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:43.787 "name": "Existed_Raid",
00:08:43.787 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.787 "strip_size_kb": 64,
00:08:43.787 "state": "configuring",
00:08:43.787 "raid_level": "raid0",
00:08:43.787 "superblock": false,
00:08:43.787 "num_base_bdevs": 4,
00:08:43.787 "num_base_bdevs_discovered": 3,
00:08:43.787 "num_base_bdevs_operational": 4,
00:08:43.787 "base_bdevs_list": [
00:08:43.787 {
00:08:43.787 "name": "BaseBdev1",
00:08:43.787 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:43.787 "is_configured": true,
00:08:43.787 "data_offset": 0,
00:08:43.787 "data_size": 65536
00:08:43.787 },
00:08:43.787 {
00:08:43.787 "name": "BaseBdev2",
00:08:43.787 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:43.787 "is_configured": true,
00:08:43.787 "data_offset": 0,
00:08:43.787 "data_size": 65536
00:08:43.787 },
00:08:43.787 {
00:08:43.787 "name": "BaseBdev3",
00:08:43.787 "uuid": "695cc2b8-fc32-11ee-80f8-ef3e42bb1492",
00:08:43.787 "is_configured": true,
00:08:43.787 "data_offset": 0,
00:08:43.787 "data_size": 65536
00:08:43.787 },
00:08:43.787 {
00:08:43.787 "name": "BaseBdev4",
00:08:43.787 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.787 "is_configured": false,
00:08:43.787 "data_offset": 0,
00:08:43.787 "data_size": 0
00:08:43.787 }
00:08:43.787 ]
00:08:43.787 }'
00:08:43.787 20:46:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:43.787 20:46:34 -- common/autotest_common.sh@10 -- # set +x
00:08:44.046 20:46:34 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:08:44.046 [2024-04-16 20:46:35.115807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:08:44.046 [2024-04-16 20:46:35.115829] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82db5ea00
00:08:44.046 [2024-04-16 20:46:35.115832] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:08:44.046 [2024-04-16 20:46:35.115855] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82dbc1ec0
00:08:44.046 [2024-04-16 20:46:35.115931] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82db5ea00
00:08:44.046 [2024-04-16 20:46:35.115938] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82db5ea00
00:08:44.046 [2024-04-16 20:46:35.115963] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:44.046 BaseBdev4
00:08:44.046 20:46:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:08:44.046 20:46:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4
00:08:44.046 20:46:35 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:44.046 20:46:35 -- common/autotest_common.sh@889 -- # local i
00:08:44.046 20:46:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:44.046 20:46:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:44.046 20:46:35 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:44.305 20:46:35 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:08:44.572 [
00:08:44.572 {
00:08:44.572 "name": "BaseBdev4",
00:08:44.572 "aliases": [
00:08:44.572 "69f2efdf-fc32-11ee-80f8-ef3e42bb1492"
00:08:44.572 ],
00:08:44.572 "product_name": "Malloc disk",
00:08:44.572 "block_size": 512,
00:08:44.572 "num_blocks": 65536,
00:08:44.572 "uuid": "69f2efdf-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "assigned_rate_limits": {
00:08:44.572 "rw_ios_per_sec": 0,
00:08:44.572 "rw_mbytes_per_sec": 0,
00:08:44.572 "r_mbytes_per_sec": 0,
00:08:44.572 "w_mbytes_per_sec": 0
00:08:44.572 },
00:08:44.572 "claimed": true,
00:08:44.572 "claim_type": "exclusive_write",
00:08:44.572 "zoned": false,
00:08:44.572 "supported_io_types": {
00:08:44.572 "read": true,
00:08:44.572 "write": true,
00:08:44.572 "unmap": true,
00:08:44.572 "write_zeroes": true,
00:08:44.572 "flush": true,
00:08:44.572 "reset": true,
00:08:44.572 "compare": false,
00:08:44.572 "compare_and_write": false,
00:08:44.572 "abort": true,
00:08:44.572 "nvme_admin": false,
00:08:44.572 "nvme_io": false
00:08:44.572 },
00:08:44.572 "memory_domains": [
00:08:44.572 {
00:08:44.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.572 "dma_device_type": 2
00:08:44.572 }
00:08:44.572 ],
00:08:44.572 "driver_specific": {}
00:08:44.572 }
00:08:44.572 ]
00:08:44.572 20:46:35 -- common/autotest_common.sh@895 -- # return 0
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:44.572 "name": "Existed_Raid",
00:08:44.572 "uuid": "69f2f45e-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "strip_size_kb": 64,
00:08:44.572 "state": "online",
00:08:44.572 "raid_level": "raid0",
00:08:44.572 "superblock": false,
00:08:44.572 "num_base_bdevs": 4,
00:08:44.572 "num_base_bdevs_discovered": 4,
00:08:44.572 "num_base_bdevs_operational": 4,
00:08:44.572 "base_bdevs_list": [
00:08:44.572 {
00:08:44.572 "name": "BaseBdev1",
00:08:44.572 "uuid": "67b6eb5c-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "is_configured": true,
00:08:44.572 "data_offset": 0,
00:08:44.572 "data_size": 65536
00:08:44.572 },
00:08:44.572 {
00:08:44.572 "name": "BaseBdev2",
00:08:44.572 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "is_configured": true,
00:08:44.572 "data_offset": 0,
00:08:44.572 "data_size": 65536
00:08:44.572 },
00:08:44.572 {
00:08:44.572 "name": "BaseBdev3",
00:08:44.572 "uuid": "695cc2b8-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "is_configured": true,
00:08:44.572 "data_offset": 0,
00:08:44.572 "data_size": 65536
00:08:44.572 },
00:08:44.572 {
00:08:44.572 "name": "BaseBdev4",
00:08:44.572 "uuid": "69f2efdf-fc32-11ee-80f8-ef3e42bb1492",
00:08:44.572 "is_configured": true,
00:08:44.572 "data_offset": 0,
00:08:44.572 "data_size": 65536
00:08:44.572 }
00:08:44.572 ]
00:08:44.572 }'
00:08:44.572 20:46:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:44.572 20:46:35 -- common/autotest_common.sh@10 -- # set +x
00:08:44.841 20:46:35 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:08:45.100 [2024-04-16 20:46:36.135927] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:45.100 [2024-04-16 20:46:36.135948] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:45.100 [2024-04-16 20:46:36.135960] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@197 -- # return 1
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:45.100 20:46:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.360 20:46:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:45.360 "name": "Existed_Raid",
00:08:45.360 "uuid": "69f2f45e-fc32-11ee-80f8-ef3e42bb1492",
00:08:45.360 "strip_size_kb": 64,
00:08:45.360 "state": "offline",
00:08:45.360 "raid_level": "raid0",
00:08:45.360 "superblock": false,
00:08:45.360 "num_base_bdevs": 4,
00:08:45.360 "num_base_bdevs_discovered": 3,
00:08:45.360 "num_base_bdevs_operational": 3,
00:08:45.360 "base_bdevs_list": [
00:08:45.360 {
00:08:45.360 "name": null,
00:08:45.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.360 "is_configured": false,
00:08:45.360 "data_offset": 0,
00:08:45.360 "data_size": 65536
00:08:45.360 },
00:08:45.360 {
00:08:45.360 "name": "BaseBdev2",
00:08:45.360 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:45.360 "is_configured": true,
00:08:45.360 "data_offset": 0,
00:08:45.360 "data_size": 65536
00:08:45.360 },
00:08:45.360 {
00:08:45.360 "name": "BaseBdev3",
00:08:45.360 "uuid": "695cc2b8-fc32-11ee-80f8-ef3e42bb1492",
00:08:45.360 "is_configured": true,
00:08:45.360 "data_offset": 0,
00:08:45.360 "data_size": 65536
00:08:45.360 },
00:08:45.360 {
00:08:45.360 "name": "BaseBdev4",
00:08:45.360 "uuid": "69f2efdf-fc32-11ee-80f8-ef3e42bb1492",
00:08:45.360 "is_configured": true,
00:08:45.360 "data_offset": 0,
00:08:45.360 "data_size": 65536
00:08:45.360 }
00:08:45.360 ]
00:08:45.360 }'
00:08:45.360 20:46:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:45.360 20:46:36 -- common/autotest_common.sh@10 -- # set +x
00:08:45.618 20:46:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:08:45.618 20:46:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:08:45.618 20:46:36 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:45.618 20:46:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:08:45.877 [2024-04-16 20:46:36.972730] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:45.877 20:46:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:08:46.136 20:46:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:08:46.136 20:46:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:46.136 20:46:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:08:46.396 [2024-04-16 20:46:37.337442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:46.396 20:46:37 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:08:46.396 20:46:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:08:46.396 20:46:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:08:46.396 20:46:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:08:46.655 [2024-04-16 20:46:37.706114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:08:46.655 [2024-04-16 20:46:37.706138] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82db5ea00 name Existed_Raid, state offline
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:46.655 20:46:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:08:46.914 20:46:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:08:46.914 20:46:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:08:46.914 20:46:37 -- bdev/bdev_raid.sh@287 -- # killprocess 51444
00:08:46.914 20:46:37 -- common/autotest_common.sh@926 -- # '[' -z 51444 ']'
00:08:46.914 20:46:37 -- common/autotest_common.sh@930 -- # kill -0 51444
00:08:46.914 20:46:37 -- common/autotest_common.sh@931 -- # uname
00:08:46.914 20:46:37 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']'
00:08:46.914 20:46:37 -- common/autotest_common.sh@934 -- # ps -c -o command 51444
00:08:46.914 20:46:37 -- common/autotest_common.sh@934 -- # tail -1
00:08:46.914 20:46:37 -- common/autotest_common.sh@934 -- # process_name=bdev_svc
00:08:46.914 20:46:37 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']'
killing process with pid 51444
00:08:46.914 20:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51444'
00:08:46.914 20:46:37 -- common/autotest_common.sh@945 -- # kill 51444
00:08:46.915 [2024-04-16 20:46:37.923931] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:46.915 [2024-04-16 20:46:37.923966] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:46.915 20:46:37 -- common/autotest_common.sh@950 -- # wait 51444
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@289 -- # return 0
00:08:47.174
00:08:47.174 real 0m8.758s
00:08:47.174 user 0m15.202s
00:08:47.174 sys 0m1.624s
00:08:47.174 20:46:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:47.174 20:46:38 -- common/autotest_common.sh@10 -- # set +x
00:08:47.174 ************************************
00:08:47.174 END TEST raid_state_function_test
00:08:47.174 ************************************
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:08:47.174 20:46:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:08:47.174 20:46:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:47.174 20:46:38 -- common/autotest_common.sh@10 -- # set +x
00:08:47.174 ************************************
00:08:47.174 START TEST raid_state_function_test_sb
00:08:47.174 ************************************
00:08:47.174 20:46:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=51714
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51714'
00:08:47.174 Process raid pid: 51714
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:08:47.174 20:46:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51714 /var/tmp/spdk-raid.sock
00:08:47.174 20:46:38 -- common/autotest_common.sh@819 -- # '[' -z 51714 ']'
00:08:47.174 20:46:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:08:47.174 20:46:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:47.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:08:47.174 20:46:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:08:47.174 20:46:38 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:47.174 20:46:38 -- common/autotest_common.sh@10 -- # set +x
00:08:47.174 [2024-04-16 20:46:38.129759] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:08:47.174 [2024-04-16 20:46:38.130037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:08:47.434 EAL: TSC is not safe to use in SMP mode
00:08:47.434 EAL: TSC is not invariant
00:08:47.434 [2024-04-16 20:46:38.555918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:47.693 [2024-04-16 20:46:38.643382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.693 [2024-04-16 20:46:38.643779] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:47.693 [2024-04-16 20:46:38.643788] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:47.951 20:46:39 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:47.951 20:46:39 -- common/autotest_common.sh@852 -- # return 0
00:08:47.951 20:46:39 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:48.211 [2024-04-16 20:46:39.198961] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:48.211 [2024-04-16 20:46:39.198997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:48.211 [2024-04-16 20:46:39.199017] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:48.211 [2024-04-16 20:46:39.199023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:48.211 [2024-04-16 20:46:39.199025] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:48.211 [2024-04-16 20:46:39.199030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:48.211 [2024-04-16 20:46:39.199033] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:48.211 [2024-04-16 20:46:39.199038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:48.211 20:46:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:48.470 20:46:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:48.470 "name": "Existed_Raid",
00:08:48.470 "uuid": "6c61fd36-fc32-11ee-80f8-ef3e42bb1492",
00:08:48.470 "strip_size_kb": 64,
00:08:48.470 "state": "configuring",
00:08:48.470 "raid_level": "raid0",
00:08:48.470 "superblock": true,
00:08:48.470 "num_base_bdevs": 4,
00:08:48.470 "num_base_bdevs_discovered": 0,
00:08:48.470 "num_base_bdevs_operational": 4,
00:08:48.470 "base_bdevs_list": [
00:08:48.470 {
00:08:48.470 "name": "BaseBdev1",
00:08:48.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.470 "is_configured": false,
00:08:48.470 "data_offset": 0,
00:08:48.470 "data_size": 0
00:08:48.470 },
00:08:48.470 {
00:08:48.470 "name": "BaseBdev2",
00:08:48.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.470 "is_configured": false,
00:08:48.470 "data_offset": 0,
00:08:48.470 "data_size": 0
00:08:48.470 },
00:08:48.470 {
00:08:48.470 "name": "BaseBdev3",
00:08:48.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.470 "is_configured": false,
00:08:48.470 "data_offset": 0,
00:08:48.470 "data_size": 0
00:08:48.470 },
00:08:48.470 {
00:08:48.470 "name": "BaseBdev4",
00:08:48.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.470 "is_configured": false,
00:08:48.470 "data_offset": 0,
00:08:48.470 "data_size": 0
00:08:48.470 }
00:08:48.470 ]
00:08:48.470 }'
00:08:48.470 20:46:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:48.470 20:46:39 -- common/autotest_common.sh@10 -- # set +x
00:08:48.729 20:46:39 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:08:48.729 [2024-04-16 20:46:39.835044] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:48.729 [2024-04-16 20:46:39.835062] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de39500 name Existed_Raid, state configuring
00:08:48.729 20:46:39 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:48.987 [2024-04-16 20:46:40.015100] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:48.987 [2024-04-16 20:46:40.015149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:48.987 [2024-04-16 20:46:40.015153] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:48.987 [2024-04-16 20:46:40.015159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:48.987 [2024-04-16 20:46:40.015161] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:48.987 [2024-04-16 20:46:40.015166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:48.987 [2024-04-16 20:46:40.015168] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:48.987 [2024-04-16 20:46:40.015173] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:48.987 20:46:40 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:08:49.245 [2024-04-16 20:46:40.195885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:49.245 BaseBdev1
00:08:49.245 20:46:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:08:49.245 20:46:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:08:49.245 20:46:40 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:49.245 20:46:40 -- common/autotest_common.sh@889 -- # local i
00:08:49.245 20:46:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:49.245 20:46:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:49.245 20:46:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:49.504 20:46:40 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:49.504 [
00:08:49.504 {
00:08:49.504 "name": "BaseBdev1",
00:08:49.504 "aliases": [
00:08:49.504 "6cf9fe5f-fc32-11ee-80f8-ef3e42bb1492"
00:08:49.504 ],
00:08:49.504 "product_name": "Malloc disk",
00:08:49.504 "block_size": 512,
00:08:49.504 "num_blocks": 65536,
00:08:49.504 "uuid": "6cf9fe5f-fc32-11ee-80f8-ef3e42bb1492",
00:08:49.504 "assigned_rate_limits": {
00:08:49.504 "rw_ios_per_sec": 0,
00:08:49.504 "rw_mbytes_per_sec": 0,
00:08:49.504 "r_mbytes_per_sec": 0,
00:08:49.504 "w_mbytes_per_sec": 0
00:08:49.504 },
00:08:49.504 "claimed": true,
00:08:49.504 "claim_type": "exclusive_write",
00:08:49.504 "zoned": false,
00:08:49.504 "supported_io_types": {
00:08:49.504 "read": true,
00:08:49.504 "write": true,
00:08:49.504 "unmap": true,
00:08:49.504 "write_zeroes": true,
00:08:49.504 "flush": true,
00:08:49.504 "reset": true,
00:08:49.504 "compare": false,
00:08:49.504 "compare_and_write": false,
00:08:49.504 "abort": true,
00:08:49.504 "nvme_admin": false,
00:08:49.504 "nvme_io": false
00:08:49.504 },
00:08:49.504 "memory_domains": [
00:08:49.504 {
00:08:49.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:49.504 "dma_device_type": 2
00:08:49.504 }
00:08:49.504 ],
00:08:49.504 "driver_specific": {}
00:08:49.504 }
00:08:49.504 ]
00:08:49.504 20:46:40 -- common/autotest_common.sh@895 -- # return 0
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:49.504 20:46:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:49.762 20:46:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:49.762 "name": "Existed_Raid",
00:08:49.762 "uuid": "6cde855a-fc32-11ee-80f8-ef3e42bb1492",
00:08:49.762 "strip_size_kb": 64,
00:08:49.762 "state": "configuring",
00:08:49.762 "raid_level": "raid0",
00:08:49.762 "superblock": true,
00:08:49.762 "num_base_bdevs": 4,
00:08:49.762 "num_base_bdevs_discovered": 1,
00:08:49.762 "num_base_bdevs_operational": 4,
00:08:49.762 "base_bdevs_list": [
00:08:49.762 {
00:08:49.762 "name": "BaseBdev1",
00:08:49.762 "uuid": "6cf9fe5f-fc32-11ee-80f8-ef3e42bb1492",
00:08:49.762 "is_configured": true,
00:08:49.762 "data_offset": 2048,
00:08:49.762 "data_size": 63488
00:08:49.762 },
00:08:49.762 {
00:08:49.762 "name": "BaseBdev2",
00:08:49.762 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.762 "is_configured": false,
00:08:49.762 "data_offset": 0,
00:08:49.762 "data_size": 0
00:08:49.762 },
00:08:49.762 {
00:08:49.763 "name": "BaseBdev3",
00:08:49.763 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.763 "is_configured": false,
00:08:49.763 "data_offset": 0,
00:08:49.763 "data_size": 0
00:08:49.763 },
00:08:49.763 {
00:08:49.763 "name": "BaseBdev4",
00:08:49.763 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.763 "is_configured": false,
00:08:49.763 "data_offset": 0,
00:08:49.763 "data_size": 0
00:08:49.763 }
00:08:49.763 ]
00:08:49.763 }'
00:08:49.763 20:46:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:49.763 20:46:40 -- common/autotest_common.sh@10 -- # set +x
00:08:50.021 20:46:40 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:08:50.279 [2024-04-16 20:46:41.163327] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:50.279 [2024-04-16 20:46:41.163350] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de39500 name Existed_Raid, state configuring
00:08:50.279 20:46:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:08:50.279 20:46:41 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:08:50.279 20:46:41 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:08:50.538 BaseBdev1
00:08:50.538 20:46:41 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:08:50.538 20:46:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:08:50.538 20:46:41 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:50.538 20:46:41 -- common/autotest_common.sh@889 -- # local i
00:08:50.538 20:46:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:50.538 20:46:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:50.538 20:46:41 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:50.797 20:46:41 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:50.797 [
00:08:50.797 {
00:08:50.797 "name": "BaseBdev1",
00:08:50.797 "aliases": [
00:08:50.797 "6dc42529-fc32-11ee-80f8-ef3e42bb1492"
00:08:50.797 ],
00:08:50.797 "product_name": "Malloc disk",
00:08:50.797 "block_size": 512,
00:08:50.797 "num_blocks": 65536,
00:08:50.797 "uuid": "6dc42529-fc32-11ee-80f8-ef3e42bb1492",
00:08:50.797 "assigned_rate_limits": {
00:08:50.797 "rw_ios_per_sec": 0,
00:08:50.797 "rw_mbytes_per_sec": 0,
00:08:50.797 "r_mbytes_per_sec": 0,
00:08:50.797 "w_mbytes_per_sec": 0
00:08:50.797 },
00:08:50.797 "claimed": false,
00:08:50.797 "zoned": false,
00:08:50.797 "supported_io_types": {
00:08:50.797 "read": true,
00:08:50.797 "write": true,
00:08:50.797 "unmap": true,
00:08:50.797 "write_zeroes": true,
00:08:50.797 "flush": true,
00:08:50.797 "reset": true,
00:08:50.797 "compare": false,
00:08:50.797 "compare_and_write": false,
00:08:50.797 "abort": true,
00:08:50.797 "nvme_admin": false,
00:08:50.797 "nvme_io": false
00:08:50.797 },
00:08:50.797 "memory_domains": [
00:08:50.797 {
00:08:50.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:50.797 "dma_device_type": 2
00:08:50.797 }
00:08:50.797 ],
00:08:50.797 "driver_specific": {}
00:08:50.797 }
00:08:50.797 ]
00:08:50.797 20:46:41 -- common/autotest_common.sh@895 -- # return 0
00:08:50.797 20:46:41 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:08:51.056 [2024-04-16 20:46:42.064065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:51.056 [2024-04-16 20:46:42.064502] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:51.056 [2024-04-16 20:46:42.064553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:51.056 [2024-04-16 20:46:42.064558] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:51.056 [2024-04-16 20:46:42.064564] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:51.056 [2024-04-16 20:46:42.064566] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:51.057 [2024-04-16 20:46:42.064571] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@125 -- # local tmp
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:51.057 20:46:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:51.315 20:46:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:08:51.315 "name": "Existed_Raid",
00:08:51.315 "uuid": "6e172b5b-fc32-11ee-80f8-ef3e42bb1492",
00:08:51.315 "strip_size_kb": 64,
00:08:51.315 "state": "configuring",
00:08:51.315 "raid_level": "raid0",
00:08:51.315 "superblock": true,
00:08:51.315 "num_base_bdevs": 4,
00:08:51.315 "num_base_bdevs_discovered": 1,
00:08:51.315 "num_base_bdevs_operational": 4,
00:08:51.315 "base_bdevs_list": [
00:08:51.315 {
00:08:51.315 "name": "BaseBdev1",
00:08:51.315 "uuid": "6dc42529-fc32-11ee-80f8-ef3e42bb1492",
00:08:51.315 "is_configured": true,
00:08:51.315 "data_offset": 2048,
00:08:51.315 "data_size": 63488
00:08:51.315 },
00:08:51.315 {
00:08:51.315 "name": "BaseBdev2",
00:08:51.315 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:51.315 "is_configured": false,
00:08:51.315 "data_offset": 0,
00:08:51.315 "data_size": 0
00:08:51.315 },
00:08:51.315 {
00:08:51.315 "name": "BaseBdev3",
00:08:51.315 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:51.315 "is_configured": false,
00:08:51.315 "data_offset": 0,
00:08:51.315 "data_size": 0
00:08:51.315 },
00:08:51.315 {
00:08:51.315 "name": "BaseBdev4",
00:08:51.315 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:51.315 "is_configured": false,
00:08:51.315 "data_offset": 0,
00:08:51.315 "data_size": 0
00:08:51.315 }
00:08:51.315 ]
00:08:51.315 }'
00:08:51.315 20:46:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:08:51.315 20:46:42 -- common/autotest_common.sh@10 -- # set +x
00:08:51.574 20:46:42 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:08:51.834 [2024-04-16 20:46:42.716280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:51.834 BaseBdev2
00:08:51.834 20:46:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:08:51.834 20:46:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:08:51.834 20:46:42 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:51.834 20:46:42 -- common/autotest_common.sh@889 -- # local i
00:08:51.834 20:46:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:51.834 20:46:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:51.834 20:46:42 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:08:51.834 20:46:42 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:52.093 [
00:08:52.093 {
00:08:52.093 "name": "BaseBdev2",
00:08:52.093 "aliases": [
00:08:52.093 "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492"
00:08:52.093 ],
00:08:52.093 "product_name": "Malloc disk",
00:08:52.093 "block_size": 512,
00:08:52.093 "num_blocks": 65536,
00:08:52.093 "uuid": "68ca3ef3-fc32-11ee-80f8-ef3e42bb1492",
00:08:52.093 "assigned_rate_limits": {
00:08:52.093 "rw_ios_per_sec": 0,
"rw_mbytes_per_sec": 0, 00:08:52.093 "r_mbytes_per_sec": 0, 00:08:52.093 "w_mbytes_per_sec": 0 00:08:52.093 }, 00:08:52.093 "claimed": true, 00:08:52.093 "claim_type": "exclusive_write", 00:08:52.093 "zoned": false, 00:08:52.093 "supported_io_types": { 00:08:52.093 "read": true, 00:08:52.093 "write": true, 00:08:52.093 "unmap": true, 00:08:52.093 "write_zeroes": true, 00:08:52.093 "flush": true, 00:08:52.094 "reset": true, 00:08:52.094 "compare": false, 00:08:52.094 "compare_and_write": false, 00:08:52.094 "abort": true, 00:08:52.094 "nvme_admin": false, 00:08:52.094 "nvme_io": false 00:08:52.094 }, 00:08:52.094 "memory_domains": [ 00:08:52.094 { 00:08:52.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.094 "dma_device_type": 2 00:08:52.094 } 00:08:52.094 ], 00:08:52.094 "driver_specific": {} 00:08:52.094 } 00:08:52.094 ] 00:08:52.094 20:46:43 -- common/autotest_common.sh@895 -- # return 0 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.094 20:46:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.353 20:46:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.353 "name": "Existed_Raid", 00:08:52.353 "uuid": "6e172b5b-fc32-11ee-80f8-ef3e42bb1492", 00:08:52.353 "strip_size_kb": 64, 00:08:52.353 "state": "configuring", 00:08:52.353 "raid_level": "raid0", 00:08:52.353 "superblock": true, 00:08:52.353 "num_base_bdevs": 4, 00:08:52.353 "num_base_bdevs_discovered": 2, 00:08:52.353 "num_base_bdevs_operational": 4, 00:08:52.353 "base_bdevs_list": [ 00:08:52.353 { 00:08:52.353 "name": "BaseBdev1", 00:08:52.353 "uuid": "6dc42529-fc32-11ee-80f8-ef3e42bb1492", 00:08:52.353 "is_configured": true, 00:08:52.353 "data_offset": 2048, 00:08:52.353 "data_size": 63488 00:08:52.353 }, 00:08:52.353 { 00:08:52.353 "name": "BaseBdev2", 00:08:52.353 "uuid": "6e7aad2b-fc32-11ee-80f8-ef3e42bb1492", 00:08:52.353 "is_configured": true, 00:08:52.353 "data_offset": 2048, 00:08:52.353 "data_size": 63488 00:08:52.353 }, 00:08:52.353 { 00:08:52.353 "name": "BaseBdev3", 00:08:52.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.353 "is_configured": false, 00:08:52.353 "data_offset": 0, 00:08:52.353 "data_size": 0 00:08:52.353 }, 00:08:52.353 { 00:08:52.353 "name": "BaseBdev4", 00:08:52.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.353 "is_configured": false, 00:08:52.353 "data_offset": 0, 00:08:52.353 "data_size": 0 00:08:52.353 } 00:08:52.353 ] 00:08:52.353 }' 00:08:52.353 20:46:43 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:08:52.353 20:46:43 -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 20:46:43 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:52.612 [2024-04-16 20:46:43.716465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.612 BaseBdev3 00:08:52.612 20:46:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:52.612 20:46:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:52.612 20:46:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:52.612 20:46:43 -- common/autotest_common.sh@889 -- # local i 00:08:52.612 20:46:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:52.612 20:46:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:52.612 20:46:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:52.872 20:46:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.131 [ 00:08:53.131 { 00:08:53.131 "name": "BaseBdev3", 00:08:53.131 "aliases": [ 00:08:53.131 "6f134add-fc32-11ee-80f8-ef3e42bb1492" 00:08:53.131 ], 00:08:53.131 "product_name": "Malloc disk", 00:08:53.131 "block_size": 512, 00:08:53.131 "num_blocks": 65536, 00:08:53.131 "uuid": "6f134add-fc32-11ee-80f8-ef3e42bb1492", 00:08:53.131 "assigned_rate_limits": { 00:08:53.131 "rw_ios_per_sec": 0, 00:08:53.131 "rw_mbytes_per_sec": 0, 00:08:53.131 "r_mbytes_per_sec": 0, 00:08:53.131 "w_mbytes_per_sec": 0 00:08:53.131 }, 00:08:53.131 "claimed": true, 00:08:53.131 "claim_type": "exclusive_write", 00:08:53.131 "zoned": false, 00:08:53.131 "supported_io_types": { 00:08:53.131 "read": true, 00:08:53.131 "write": true, 00:08:53.131 "unmap": true, 00:08:53.131 "write_zeroes": true, 00:08:53.131 "flush": true, 00:08:53.131 "reset": true, 00:08:53.131 "compare": false, 00:08:53.131 "compare_and_write": false, 00:08:53.131 "abort": true, 00:08:53.131 "nvme_admin": false, 00:08:53.131 "nvme_io": false 00:08:53.131 }, 00:08:53.131 "memory_domains": [ 00:08:53.131 { 00:08:53.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.131 "dma_device_type": 2 00:08:53.131 } 00:08:53.131 ], 00:08:53.131 "driver_specific": {} 00:08:53.131 } 00:08:53.131 ] 00:08:53.131 20:46:44 -- common/autotest_common.sh@895 -- # return 0 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.131 20:46:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.391 20:46:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:53.391 "name": "Existed_Raid", 00:08:53.391 "uuid": "6e172b5b-fc32-11ee-80f8-ef3e42bb1492", 00:08:53.391 "strip_size_kb": 64, 00:08:53.391 "state": "configuring", 00:08:53.391 "raid_level": "raid0", 00:08:53.391 "superblock": true, 00:08:53.391 "num_base_bdevs": 4, 00:08:53.391 "num_base_bdevs_discovered": 3, 00:08:53.391 "num_base_bdevs_operational": 4, 00:08:53.391 "base_bdevs_list": [ 00:08:53.391 { 00:08:53.391 "name": "BaseBdev1", 00:08:53.391 "uuid": "6dc42529-fc32-11ee-80f8-ef3e42bb1492", 00:08:53.391 "is_configured": true, 00:08:53.391 "data_offset": 2048, 00:08:53.391 "data_size": 63488 00:08:53.391 }, 00:08:53.391 { 00:08:53.391 "name": "BaseBdev2", 00:08:53.391 "uuid": "6e7aad2b-fc32-11ee-80f8-ef3e42bb1492", 00:08:53.391 "is_configured": true, 00:08:53.391 "data_offset": 2048, 00:08:53.391 "data_size": 63488 00:08:53.391 }, 00:08:53.391 { 00:08:53.391 "name": "BaseBdev3", 00:08:53.391 "uuid": "6f134add-fc32-11ee-80f8-ef3e42bb1492", 00:08:53.391 "is_configured": true, 00:08:53.391 "data_offset": 2048, 00:08:53.391 "data_size": 63488 00:08:53.391 }, 00:08:53.391 { 00:08:53.391 "name": "BaseBdev4", 00:08:53.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.391 "is_configured": false, 00:08:53.391 "data_offset": 0, 00:08:53.391 "data_size": 0 00:08:53.391 } 00:08:53.391 ] 00:08:53.391 }' 00:08:53.391 20:46:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:53.391 20:46:44 -- common/autotest_common.sh@10 -- # set +x 00:08:53.650 20:46:44 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:08:53.650 [2024-04-16 20:46:44.704655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:53.650 [2024-04-16 20:46:44.704713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82de39a00 00:08:53.650 [2024-04-16 20:46:44.704717] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:53.650 [2024-04-16 20:46:44.704733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82de9cec0 00:08:53.650 [2024-04-16 20:46:44.704767] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82de39a00 00:08:53.650 [2024-04-16 20:46:44.704770] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82de39a00 00:08:53.650 [2024-04-16 20:46:44.704784] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.650 BaseBdev4 00:08:53.650 20:46:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:08:53.650 20:46:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:08:53.650 20:46:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:53.650 20:46:44 -- common/autotest_common.sh@889 -- # local i 00:08:53.650 20:46:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:53.650 20:46:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:53.650 20:46:44 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:53.908 20:46:44 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
00:08:54.167 [ 00:08:54.167 { 00:08:54.167 "name": "BaseBdev4", 00:08:54.167 "aliases": [ 00:08:54.167 "6faa141c-fc32-11ee-80f8-ef3e42bb1492" 00:08:54.167 ], 00:08:54.167 "product_name": "Malloc disk", 00:08:54.167 "block_size": 512, 00:08:54.167 "num_blocks": 65536, 00:08:54.167 "uuid": "6faa141c-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 "assigned_rate_limits": { 00:08:54.167 "rw_ios_per_sec": 0, 00:08:54.167 "rw_mbytes_per_sec": 0, 00:08:54.167 "r_mbytes_per_sec": 0, 00:08:54.167 "w_mbytes_per_sec": 0 00:08:54.167 }, 00:08:54.167 "claimed": true, 00:08:54.167 "claim_type": "exclusive_write", 00:08:54.167 "zoned": false, 00:08:54.167 "supported_io_types": { 00:08:54.167 "read": true, 00:08:54.167 "write": true, 00:08:54.167 "unmap": true, 00:08:54.167 "write_zeroes": true, 00:08:54.167 "flush": true, 00:08:54.167 "reset": true, 00:08:54.167 "compare": false, 00:08:54.167 "compare_and_write": false, 00:08:54.167 "abort": true, 00:08:54.167 "nvme_admin": false, 00:08:54.167 "nvme_io": false 00:08:54.167 }, 00:08:54.167 "memory_domains": [ 00:08:54.167 { 00:08:54.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.167 "dma_device_type": 2 00:08:54.167 } 00:08:54.167 ], 00:08:54.167 "driver_specific": {} 00:08:54.167 } 00:08:54.167 ] 00:08:54.167 20:46:45 -- common/autotest_common.sh@895 -- # return 0 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.167 20:46:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:54.167 "name": "Existed_Raid", 00:08:54.167 "uuid": "6e172b5b-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 "strip_size_kb": 64, 00:08:54.167 "state": "online", 00:08:54.167 "raid_level": "raid0", 00:08:54.167 "superblock": true, 00:08:54.167 "num_base_bdevs": 4, 00:08:54.167 "num_base_bdevs_discovered": 4, 00:08:54.167 "num_base_bdevs_operational": 4, 00:08:54.167 "base_bdevs_list": [ 00:08:54.167 { 00:08:54.167 "name": "BaseBdev1", 00:08:54.167 "uuid": "6dc42529-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 "is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 }, 00:08:54.167 { 00:08:54.167 "name": "BaseBdev2", 00:08:54.167 "uuid": "6e7aad2b-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 "is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 }, 00:08:54.167 { 00:08:54.167 "name": "BaseBdev3", 00:08:54.167 "uuid": "6f134add-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 
"is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 }, 00:08:54.167 { 00:08:54.167 "name": "BaseBdev4", 00:08:54.167 "uuid": "6faa141c-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.167 "is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 } 00:08:54.167 ] 00:08:54.167 }' 00:08:54.168 20:46:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:54.168 20:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:54.425 20:46:45 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:54.683 [2024-04-16 20:46:45.680740] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.683 [2024-04-16 20:46:45.680771] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.684 [2024-04-16 20:46:45.680799] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.684 20:46:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.942 20:46:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:54.942 "name": "Existed_Raid", 00:08:54.942 "uuid": "6e172b5b-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.942 "strip_size_kb": 64, 00:08:54.942 "state": "offline", 00:08:54.942 "raid_level": "raid0", 00:08:54.942 "superblock": true, 00:08:54.942 "num_base_bdevs": 4, 00:08:54.942 "num_base_bdevs_discovered": 3, 00:08:54.942 "num_base_bdevs_operational": 3, 00:08:54.942 "base_bdevs_list": [ 00:08:54.942 { 00:08:54.942 "name": null, 00:08:54.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.942 "is_configured": false, 00:08:54.942 "data_offset": 2048, 00:08:54.942 "data_size": 63488 00:08:54.942 }, 00:08:54.942 { 00:08:54.942 "name": "BaseBdev2", 00:08:54.942 "uuid": "6e7aad2b-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.942 "is_configured": true, 00:08:54.942 "data_offset": 2048, 00:08:54.942 "data_size": 63488 00:08:54.942 }, 00:08:54.942 { 00:08:54.942 "name": "BaseBdev3", 00:08:54.942 "uuid": "6f134add-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.942 "is_configured": true, 00:08:54.942 "data_offset": 2048, 00:08:54.942 "data_size": 63488 00:08:54.942 }, 00:08:54.942 
{ 00:08:54.942 "name": "BaseBdev4", 00:08:54.942 "uuid": "6faa141c-fc32-11ee-80f8-ef3e42bb1492", 00:08:54.942 "is_configured": true, 00:08:54.942 "data_offset": 2048, 00:08:54.942 "data_size": 63488 00:08:54.942 } 00:08:54.942 ] 00:08:54.942 }' 00:08:54.942 20:46:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:54.942 20:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:55.207 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:55.207 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:55.207 20:46:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.207 20:46:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:55.473 [2024-04-16 20:46:46.497589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.473 20:46:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.731 20:46:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.731 20:46:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.731 20:46:46 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:55.990 [2024-04-16 20:46:46.862389] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.990 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:55.990 20:46:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:55.990 20:46:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.990 20:46:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:55.990 20:46:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:55.990 20:46:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.990 20:46:47 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:08:56.250 [2024-04-16 20:46:47.199075] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:56.250 [2024-04-16 20:46:47.199096] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de39a00 name Existed_Raid, state offline 00:08:56.250 20:46:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:56.250 20:46:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:56.250 20:46:47 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.250 20:46:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@287 -- # killprocess 51714 00:08:56.510 20:46:47 -- common/autotest_common.sh@926 -- # '[' -z 51714 
']' 00:08:56.510 20:46:47 -- common/autotest_common.sh@930 -- # kill -0 51714 00:08:56.510 20:46:47 -- common/autotest_common.sh@931 -- # uname 00:08:56.510 20:46:47 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:56.510 20:46:47 -- common/autotest_common.sh@934 -- # ps -c -o command 51714 00:08:56.510 20:46:47 -- common/autotest_common.sh@934 -- # tail -1 00:08:56.510 20:46:47 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:56.510 20:46:47 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:56.510 killing process with pid 51714 00:08:56.510 20:46:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51714' 00:08:56.510 20:46:47 -- common/autotest_common.sh@945 -- # kill 51714 00:08:56.510 [2024-04-16 20:46:47.417797] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.510 [2024-04-16 20:46:47.417831] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.510 20:46:47 -- common/autotest_common.sh@950 -- # wait 51714 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:56.510 00:08:56.510 real 0m9.444s 00:08:56.510 user 0m16.639s 00:08:56.510 sys 0m1.562s 00:08:56.510 20:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.510 20:46:47 -- common/autotest_common.sh@10 -- # set +x 00:08:56.510 ************************************ 00:08:56.510 END TEST raid_state_function_test_sb 00:08:56.510 ************************************ 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:08:56.510 20:46:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:56.510 20:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.510 20:46:47 -- common/autotest_common.sh@10 -- # set +x 00:08:56.510 ************************************ 00:08:56.510 START TEST raid_superblock_test 00:08:56.510 ************************************ 00:08:56.510 20:46:47 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@357 -- # raid_pid=51987 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51987 /var/tmp/spdk-raid.sock 00:08:56.510 20:46:47 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L 
bdev_raid 00:08:56.510 20:46:47 -- common/autotest_common.sh@819 -- # '[' -z 51987 ']' 00:08:56.510 20:46:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:56.510 20:46:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:56.510 20:46:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:56.510 20:46:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.510 20:46:47 -- common/autotest_common.sh@10 -- # set +x 00:08:56.510 [2024-04-16 20:46:47.624819] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:08:56.510 [2024-04-16 20:46:47.625128] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:57.080 EAL: TSC is not safe to use in SMP mode 00:08:57.080 EAL: TSC is not invariant 00:08:57.080 [2024-04-16 20:46:48.052480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.080 [2024-04-16 20:46:48.142584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.080 [2024-04-16 20:46:48.142994] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.080 [2024-04-16 20:46:48.143003] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.648 20:46:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.648 20:46:48 -- common/autotest_common.sh@852 -- # return 0 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:57.648 malloc1 00:08:57.648 20:46:48 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.907 [2024-04-16 20:46:48.834117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.907 [2024-04-16 20:46:48.834181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.907 [2024-04-16 20:46:48.834697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfa780 00:08:57.907 [2024-04-16 20:46:48.834720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.907 [2024-04-16 20:46:48.835353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.907 [2024-04-16 20:46:48.835381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.907 pt1 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.907 20:46:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:57.907 malloc2 00:08:57.907 20:46:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.167 [2024-04-16 20:46:49.194173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.167 [2024-04-16 20:46:49.194230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.167 [2024-04-16 20:46:49.194253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfac80 00:08:58.167 [2024-04-16 20:46:49.194259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.167 [2024-04-16 20:46:49.194713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.167 [2024-04-16 20:46:49.194739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.167 pt2 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.167 20:46:49 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:58.426 malloc3 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:58.426 [2024-04-16 20:46:49.526222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:58.426 [2024-04-16 20:46:49.526278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.426 [2024-04-16 20:46:49.526299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfb180 00:08:58.426 [2024-04-16 20:46:49.526305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.426 [2024-04-16 20:46:49.526690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.426 [2024-04-16 20:46:49.526712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:58.426 pt3 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.426 20:46:49 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:08:58.685 malloc4 00:08:58.685 20:46:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:58.943 [2024-04-16 20:46:49.898292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:58.943 [2024-04-16 20:46:49.898337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.943 [2024-04-16 20:46:49.898375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfb680 00:08:58.943 [2024-04-16 20:46:49.898381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.943 [2024-04-16 20:46:49.898835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.944 [2024-04-16 20:46:49.898863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:58.944 pt4 00:08:58.944 20:46:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:58.944 20:46:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:58.944 20:46:49 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:08:59.203 [2024-04-16 20:46:50.078329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.203 [2024-04-16 20:46:50.078730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.203 [2024-04-16 20:46:50.078752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.203 [2024-04-16 20:46:50.078760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:59.203 [2024-04-16 20:46:50.078806] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82adfb900 00:08:59.203 [2024-04-16 20:46:50.078811] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:59.203 [2024-04-16 20:46:50.078837] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae5de20 00:08:59.203 [2024-04-16 20:46:50.078886] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82adfb900 00:08:59.203 [2024-04-16 20:46:50.078889] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82adfb900 00:08:59.203 [2024-04-16 20:46:50.078907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:59.203 20:46:50 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:59.203 "name": "raid_bdev1", 00:08:59.203 "uuid": "72de0c81-fc32-11ee-80f8-ef3e42bb1492", 00:08:59.203 "strip_size_kb": 64, 00:08:59.203 "state": "online", 00:08:59.203 "raid_level": "raid0", 00:08:59.203 "superblock": true, 00:08:59.203 "num_base_bdevs": 4, 00:08:59.203 "num_base_bdevs_discovered": 4, 00:08:59.203 "num_base_bdevs_operational": 4, 00:08:59.203 "base_bdevs_list": [ 00:08:59.203 { 00:08:59.203 "name": "pt1", 00:08:59.203 "uuid": "f6e351ed-5294-6d50-bcee-aec7fbdaba97", 00:08:59.203 "is_configured": true, 00:08:59.203 "data_offset": 2048, 00:08:59.203 "data_size": 63488 00:08:59.203 }, 00:08:59.203 { 00:08:59.203 "name": "pt2", 00:08:59.203 "uuid": "8c697c19-6e0d-b85e-9e8e-0188b9fc1879", 00:08:59.203 "is_configured": true, 00:08:59.203 "data_offset": 2048, 00:08:59.203 "data_size": 63488 00:08:59.203 }, 00:08:59.203 { 00:08:59.203 "name": "pt3", 00:08:59.203 "uuid": "2261656b-75a1-a556-9735-dd24cd986480", 00:08:59.203 "is_configured": true, 00:08:59.203 "data_offset": 2048, 00:08:59.203 "data_size": 63488 00:08:59.203 }, 00:08:59.203 { 00:08:59.203 "name": "pt4", 00:08:59.203 "uuid": "0f346e30-c4c5-1e58-b9bf-65f1206fc3d7", 00:08:59.203 "is_configured": true, 00:08:59.203 "data_offset": 2048, 00:08:59.203 "data_size": 63488 00:08:59.203 } 00:08:59.203 ] 00:08:59.203 }' 00:08:59.203 20:46:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:59.203 20:46:50 -- common/autotest_common.sh@10 -- # set +x 00:08:59.463 20:46:50 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:59.463 20:46:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:59.722 [2024-04-16 20:46:50.714447] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.722 20:46:50 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=72de0c81-fc32-11ee-80f8-ef3e42bb1492 00:08:59.722 20:46:50 -- bdev/bdev_raid.sh@380 -- # '[' -z 72de0c81-fc32-11ee-80f8-ef3e42bb1492 ']' 00:08:59.722 20:46:50 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:59.981 [2024-04-16 20:46:50.890440] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.981 [2024-04-16 20:46:50.890455] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.981 [2024-04-16 20:46:50.890471] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.981 [2024-04-16 20:46:50.890499] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.981 [2024-04-16 20:46:50.890502] bdev_raid.c: 
352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adfb900 name raid_bdev1, state offline 00:08:59.981 20:46:50 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.981 20:46:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:59.981 20:46:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:59.981 20:46:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:59.981 20:46:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.981 20:46:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:00.240 20:46:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.240 20:46:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:00.500 20:46:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.500 20:46:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:00.759 20:46:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.759 20:46:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:00.759 20:46:51 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.759 20:46:51 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:01.018 20:46:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:01.018 20:46:51 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:01.018 20:46:51 -- common/autotest_common.sh@640 -- # local es=0 00:09:01.018 20:46:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:01.018 20:46:51 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.018 20:46:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:01.018 20:46:51 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.018 20:46:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:01.018 20:46:51 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.018 20:46:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:01.018 20:46:51 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.018 20:46:51 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:01.018 20:46:51 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:01.018 [2024-04-16 20:46:52.138659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:01.019 [2024-04-16 20:46:52.139105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:01.019 [2024-04-16 
20:46:52.139123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:01.019 [2024-04-16 20:46:52.139129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:01.019 [2024-04-16 20:46:52.139140] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:01.019 [2024-04-16 20:46:52.139166] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:01.019 [2024-04-16 20:46:52.139177] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:01.019 [2024-04-16 20:46:52.139190] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:01.019 [2024-04-16 20:46:52.139196] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.019 [2024-04-16 20:46:52.139200] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adfb680 name raid_bdev1, state configuring 00:09:01.019 request: 00:09:01.019 { 00:09:01.019 "name": "raid_bdev1", 00:09:01.019 "raid_level": "raid0", 00:09:01.019 "base_bdevs": [ 00:09:01.019 "malloc1", 00:09:01.019 "malloc2", 00:09:01.019 "malloc3", 00:09:01.019 "malloc4" 00:09:01.019 ], 00:09:01.019 "superblock": false, 00:09:01.019 "strip_size_kb": 64, 00:09:01.019 "method": "bdev_raid_create", 00:09:01.019 "req_id": 1 00:09:01.019 } 00:09:01.019 Got JSON-RPC error response 00:09:01.019 response: 00:09:01.019 { 00:09:01.019 "code": -17, 00:09:01.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:01.019 } 00:09:01.278 20:46:52 -- common/autotest_common.sh@643 -- # es=1 00:09:01.278 20:46:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:01.278 20:46:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:01.278 20:46:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:01.278 20:46:52 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.278 20:46:52 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:01.278 20:46:52 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:01.278 20:46:52 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:01.278 20:46:52 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:01.537 [2024-04-16 20:46:52.478742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:01.537 [2024-04-16 20:46:52.478804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.537 [2024-04-16 20:46:52.478828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfb180 00:09:01.537 [2024-04-16 20:46:52.478835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.537 [2024-04-16 20:46:52.479321] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.537 [2024-04-16 20:46:52.479342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:01.537 [2024-04-16 20:46:52.479361] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:01.537 [2024-04-16 20:46:52.479370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:01.537 pt1 
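The sequence just traced is how the superblock test rebuilds its first base bdev: a malloc bdev is wrapped in a passthru bdev with a fixed UUID, the raid superblock written earlier in the run is found on it at examine time, and the bdev is claimed straight back into raid_bdev1. A minimal sketch of the same two RPC calls, assuming a bdev_svc instance listening on the test socket and the 32 MiB / 512-byte malloc geometry used throughout this run:
# create the backing malloc bdev (32 MiB total, 512-byte blocks), as the test does
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
# wrap it in a passthru bdev with a fixed UUID; examine then finds the raid superblock on it
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001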
00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.537 20:46:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.796 20:46:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:01.796 "name": "raid_bdev1", 00:09:01.796 "uuid": "72de0c81-fc32-11ee-80f8-ef3e42bb1492", 00:09:01.796 "strip_size_kb": 64, 00:09:01.796 "state": "configuring", 00:09:01.796 "raid_level": "raid0", 00:09:01.796 "superblock": true, 00:09:01.796 "num_base_bdevs": 4, 00:09:01.796 "num_base_bdevs_discovered": 1, 00:09:01.796 "num_base_bdevs_operational": 4, 00:09:01.796 "base_bdevs_list": [ 00:09:01.796 { 00:09:01.796 "name": "pt1", 00:09:01.796 "uuid": "f6e351ed-5294-6d50-bcee-aec7fbdaba97", 00:09:01.796 "is_configured": true, 00:09:01.796 "data_offset": 2048, 00:09:01.796 "data_size": 63488 00:09:01.796 }, 00:09:01.796 { 00:09:01.796 "name": null, 00:09:01.796 "uuid": "8c697c19-6e0d-b85e-9e8e-0188b9fc1879", 00:09:01.796 "is_configured": false, 00:09:01.796 "data_offset": 2048, 00:09:01.796 "data_size": 63488 00:09:01.796 }, 00:09:01.796 { 00:09:01.796 "name": null, 00:09:01.796 "uuid": "2261656b-75a1-a556-9735-dd24cd986480", 00:09:01.796 "is_configured": false, 00:09:01.796 "data_offset": 2048, 00:09:01.796 "data_size": 63488 00:09:01.796 }, 00:09:01.796 { 00:09:01.796 "name": null, 00:09:01.796 "uuid": "0f346e30-c4c5-1e58-b9bf-65f1206fc3d7", 00:09:01.796 "is_configured": false, 00:09:01.796 "data_offset": 2048, 00:09:01.796 "data_size": 63488 00:09:01.796 } 00:09:01.796 ] 00:09:01.796 }' 00:09:01.796 20:46:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:01.796 20:46:52 -- common/autotest_common.sh@10 -- # set +x 00:09:02.055 20:46:52 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:02.055 20:46:52 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.055 [2024-04-16 20:46:53.142855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.055 [2024-04-16 20:46:53.142901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.055 [2024-04-16 20:46:53.142923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfa780 00:09:02.055 [2024-04-16 20:46:53.142929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.055 [2024-04-16 20:46:53.143020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.055 [2024-04-16 20:46:53.143027] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:09:02.055 [2024-04-16 20:46:53.143045] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:02.055 [2024-04-16 20:46:53.143051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.055 pt2 00:09:02.055 20:46:53 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:02.314 [2024-04-16 20:46:53.326890] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.314 20:46:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.573 20:46:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:02.573 "name": "raid_bdev1", 00:09:02.573 "uuid": "72de0c81-fc32-11ee-80f8-ef3e42bb1492", 00:09:02.573 "strip_size_kb": 64, 00:09:02.573 "state": "configuring", 00:09:02.573 "raid_level": "raid0", 00:09:02.573 "superblock": true, 00:09:02.573 "num_base_bdevs": 4, 00:09:02.573 "num_base_bdevs_discovered": 1, 00:09:02.573 "num_base_bdevs_operational": 4, 00:09:02.573 "base_bdevs_list": [ 00:09:02.573 { 00:09:02.573 "name": "pt1", 00:09:02.573 "uuid": "f6e351ed-5294-6d50-bcee-aec7fbdaba97", 00:09:02.573 "is_configured": true, 00:09:02.573 "data_offset": 2048, 00:09:02.573 "data_size": 63488 00:09:02.573 }, 00:09:02.573 { 00:09:02.573 "name": null, 00:09:02.573 "uuid": "8c697c19-6e0d-b85e-9e8e-0188b9fc1879", 00:09:02.573 "is_configured": false, 00:09:02.573 "data_offset": 2048, 00:09:02.573 "data_size": 63488 00:09:02.573 }, 00:09:02.573 { 00:09:02.573 "name": null, 00:09:02.573 "uuid": "2261656b-75a1-a556-9735-dd24cd986480", 00:09:02.573 "is_configured": false, 00:09:02.573 "data_offset": 2048, 00:09:02.573 "data_size": 63488 00:09:02.573 }, 00:09:02.573 { 00:09:02.573 "name": null, 00:09:02.573 "uuid": "0f346e30-c4c5-1e58-b9bf-65f1206fc3d7", 00:09:02.573 "is_configured": false, 00:09:02.573 "data_offset": 2048, 00:09:02.573 "data_size": 63488 00:09:02.573 } 00:09:02.573 ] 00:09:02.573 }' 00:09:02.573 20:46:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:02.573 20:46:53 -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 20:46:53 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:02.832 20:46:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:02.832 20:46:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.832 [2024-04-16 20:46:53.954985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:09:02.832 [2024-04-16 20:46:53.955031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.832 [2024-04-16 20:46:53.955054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfa780 00:09:02.832 [2024-04-16 20:46:53.955059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.832 [2024-04-16 20:46:53.955155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.832 [2024-04-16 20:46:53.955164] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.832 [2024-04-16 20:46:53.955180] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:02.832 [2024-04-16 20:46:53.955187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.832 pt2 00:09:03.091 20:46:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:03.091 20:46:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:03.091 20:46:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.091 [2024-04-16 20:46:54.111004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.091 [2024-04-16 20:46:54.111036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.091 [2024-04-16 20:46:54.111055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfbb80 00:09:03.091 [2024-04-16 20:46:54.111060] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.091 [2024-04-16 20:46:54.111130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.092 [2024-04-16 20:46:54.111139] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.092 [2024-04-16 20:46:54.111154] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:03.092 [2024-04-16 20:46:54.111171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:03.092 pt3 00:09:03.092 20:46:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:03.092 20:46:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:03.092 20:46:54 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:03.356 [2024-04-16 20:46:54.287030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:03.356 [2024-04-16 20:46:54.287062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.356 [2024-04-16 20:46:54.287080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82adfb900 00:09:03.356 [2024-04-16 20:46:54.287101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.356 [2024-04-16 20:46:54.287165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.356 [2024-04-16 20:46:54.287176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:03.356 [2024-04-16 20:46:54.287190] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:03.356 [2024-04-16 20:46:54.287197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:03.356 
[2024-04-16 20:46:54.287219] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82adfac80 00:09:03.356 [2024-04-16 20:46:54.287225] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:03.356 [2024-04-16 20:46:54.287240] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae5de20 00:09:03.356 [2024-04-16 20:46:54.287276] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82adfac80 00:09:03.356 [2024-04-16 20:46:54.287283] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82adfac80 00:09:03.356 [2024-04-16 20:46:54.287298] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.356 pt4 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.356 20:46:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.625 20:46:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:03.625 "name": "raid_bdev1", 00:09:03.625 "uuid": "72de0c81-fc32-11ee-80f8-ef3e42bb1492", 00:09:03.625 "strip_size_kb": 64, 00:09:03.625 "state": "online", 00:09:03.625 "raid_level": "raid0", 00:09:03.625 "superblock": true, 00:09:03.625 "num_base_bdevs": 4, 00:09:03.625 "num_base_bdevs_discovered": 4, 00:09:03.625 "num_base_bdevs_operational": 4, 00:09:03.625 "base_bdevs_list": [ 00:09:03.625 { 00:09:03.625 "name": "pt1", 00:09:03.625 "uuid": "f6e351ed-5294-6d50-bcee-aec7fbdaba97", 00:09:03.625 "is_configured": true, 00:09:03.625 "data_offset": 2048, 00:09:03.625 "data_size": 63488 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "pt2", 00:09:03.625 "uuid": "8c697c19-6e0d-b85e-9e8e-0188b9fc1879", 00:09:03.625 "is_configured": true, 00:09:03.625 "data_offset": 2048, 00:09:03.625 "data_size": 63488 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "pt3", 00:09:03.625 "uuid": "2261656b-75a1-a556-9735-dd24cd986480", 00:09:03.625 "is_configured": true, 00:09:03.625 "data_offset": 2048, 00:09:03.625 "data_size": 63488 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "pt4", 00:09:03.625 "uuid": "0f346e30-c4c5-1e58-b9bf-65f1206fc3d7", 00:09:03.625 "is_configured": true, 00:09:03.625 "data_offset": 2048, 00:09:03.625 "data_size": 63488 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 }' 00:09:03.625 20:46:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:03.625 20:46:54 -- common/autotest_common.sh@10 -- # set +x 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:03.899 [2024-04-16 20:46:54.931162] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@430 -- # '[' 72de0c81-fc32-11ee-80f8-ef3e42bb1492 '!=' 72de0c81-fc32-11ee-80f8-ef3e42bb1492 ']' 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:03.899 20:46:54 -- bdev/bdev_raid.sh@511 -- # killprocess 51987 00:09:03.899 20:46:54 -- common/autotest_common.sh@926 -- # '[' -z 51987 ']' 00:09:03.899 20:46:54 -- common/autotest_common.sh@930 -- # kill -0 51987 00:09:03.899 20:46:54 -- common/autotest_common.sh@931 -- # uname 00:09:03.899 20:46:54 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:03.899 20:46:54 -- common/autotest_common.sh@934 -- # ps -c -o command 51987 00:09:03.899 20:46:54 -- common/autotest_common.sh@934 -- # tail -1 00:09:03.899 20:46:54 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:03.899 20:46:54 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:03.899 killing process with pid 51987 00:09:03.899 20:46:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51987' 00:09:03.899 20:46:54 -- common/autotest_common.sh@945 -- # kill 51987 00:09:03.899 [2024-04-16 20:46:54.962637] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.899 [2024-04-16 20:46:54.962665] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.899 [2024-04-16 20:46:54.962679] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.899 [2024-04-16 20:46:54.962682] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adfac80 name raid_bdev1, state offline 00:09:03.899 20:46:54 -- common/autotest_common.sh@950 -- # wait 51987 00:09:03.899 [2024-04-16 20:46:54.981029] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:04.159 00:09:04.159 real 0m7.508s 00:09:04.159 user 0m12.807s 00:09:04.159 sys 0m1.480s 00:09:04.159 20:46:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.159 20:46:55 -- common/autotest_common.sh@10 -- # set +x 00:09:04.159 ************************************ 00:09:04.159 END TEST raid_superblock_test 00:09:04.159 ************************************ 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:04.159 20:46:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:04.159 20:46:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.159 20:46:55 -- common/autotest_common.sh@10 -- # set +x 00:09:04.159 ************************************ 00:09:04.159 START TEST raid_state_function_test 00:09:04.159 ************************************ 00:09:04.159 20:46:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@204 -- # 
local superblock=false 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=52172 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52172' 00:09:04.159 Process raid pid: 52172 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:04.159 20:46:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52172 /var/tmp/spdk-raid.sock 00:09:04.159 20:46:55 -- common/autotest_common.sh@819 -- # '[' -z 52172 ']' 00:09:04.159 20:46:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:04.159 20:46:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:04.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:04.159 20:46:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:04.159 20:46:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:04.159 20:46:55 -- common/autotest_common.sh@10 -- # set +x 00:09:04.159 [2024-04-16 20:46:55.200119] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:09:04.159 [2024-04-16 20:46:55.200381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:04.728 EAL: TSC is not safe to use in SMP mode 00:09:04.728 EAL: TSC is not invariant 00:09:04.728 [2024-04-16 20:46:55.626873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.728 [2024-04-16 20:46:55.719330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.728 [2024-04-16 20:46:55.719742] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.728 [2024-04-16 20:46:55.719751] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.987 20:46:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:04.987 20:46:56 -- common/autotest_common.sh@852 -- # return 0 00:09:04.987 20:46:56 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:05.246 [2024-04-16 20:46:56.218814] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.246 [2024-04-16 20:46:56.218864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.246 [2024-04-16 20:46:56.218868] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.246 [2024-04-16 20:46:56.218890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.246 [2024-04-16 20:46:56.218893] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.246 [2024-04-16 20:46:56.218899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.246 [2024-04-16 20:46:56.218901] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.246 [2024-04-16 20:46:56.218907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.246 20:46:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.505 20:46:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:05.505 "name": "Existed_Raid", 00:09:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.505 "strip_size_kb": 64, 00:09:05.505 "state": "configuring", 00:09:05.505 "raid_level": "concat", 00:09:05.505 "superblock": false, 00:09:05.505 "num_base_bdevs": 4, 00:09:05.505 
"num_base_bdevs_discovered": 0, 00:09:05.505 "num_base_bdevs_operational": 4, 00:09:05.505 "base_bdevs_list": [ 00:09:05.505 { 00:09:05.505 "name": "BaseBdev1", 00:09:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.505 "is_configured": false, 00:09:05.505 "data_offset": 0, 00:09:05.505 "data_size": 0 00:09:05.505 }, 00:09:05.505 { 00:09:05.505 "name": "BaseBdev2", 00:09:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.505 "is_configured": false, 00:09:05.505 "data_offset": 0, 00:09:05.505 "data_size": 0 00:09:05.505 }, 00:09:05.505 { 00:09:05.505 "name": "BaseBdev3", 00:09:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.505 "is_configured": false, 00:09:05.505 "data_offset": 0, 00:09:05.505 "data_size": 0 00:09:05.505 }, 00:09:05.505 { 00:09:05.505 "name": "BaseBdev4", 00:09:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.505 "is_configured": false, 00:09:05.505 "data_offset": 0, 00:09:05.505 "data_size": 0 00:09:05.505 } 00:09:05.505 ] 00:09:05.505 }' 00:09:05.505 20:46:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:05.505 20:46:56 -- common/autotest_common.sh@10 -- # set +x 00:09:05.765 20:46:56 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:05.765 [2024-04-16 20:46:56.850887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.765 [2024-04-16 20:46:56.850920] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24f500 name Existed_Raid, state configuring 00:09:05.765 20:46:56 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:06.024 [2024-04-16 20:46:57.026935] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.025 [2024-04-16 20:46:57.026971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.025 [2024-04-16 20:46:57.026991] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.025 [2024-04-16 20:46:57.026997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.025 [2024-04-16 20:46:57.026999] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.025 [2024-04-16 20:46:57.027005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.025 [2024-04-16 20:46:57.027007] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:06.025 [2024-04-16 20:46:57.027012] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:06.025 20:46:57 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:06.284 [2024-04-16 20:46:57.183746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.284 BaseBdev1 00:09:06.284 20:46:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:06.284 20:46:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:06.284 20:46:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:06.284 20:46:57 -- common/autotest_common.sh@889 -- # local i 00:09:06.284 20:46:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:09:06.284 20:46:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:06.284 20:46:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:06.284 20:46:57 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.544 [ 00:09:06.544 { 00:09:06.544 "name": "BaseBdev1", 00:09:06.544 "aliases": [ 00:09:06.544 "771a21bc-fc32-11ee-80f8-ef3e42bb1492" 00:09:06.544 ], 00:09:06.544 "product_name": "Malloc disk", 00:09:06.544 "block_size": 512, 00:09:06.544 "num_blocks": 65536, 00:09:06.544 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:06.544 "assigned_rate_limits": { 00:09:06.544 "rw_ios_per_sec": 0, 00:09:06.544 "rw_mbytes_per_sec": 0, 00:09:06.544 "r_mbytes_per_sec": 0, 00:09:06.544 "w_mbytes_per_sec": 0 00:09:06.544 }, 00:09:06.544 "claimed": true, 00:09:06.544 "claim_type": "exclusive_write", 00:09:06.544 "zoned": false, 00:09:06.544 "supported_io_types": { 00:09:06.544 "read": true, 00:09:06.544 "write": true, 00:09:06.544 "unmap": true, 00:09:06.544 "write_zeroes": true, 00:09:06.544 "flush": true, 00:09:06.544 "reset": true, 00:09:06.544 "compare": false, 00:09:06.544 "compare_and_write": false, 00:09:06.544 "abort": true, 00:09:06.544 "nvme_admin": false, 00:09:06.544 "nvme_io": false 00:09:06.544 }, 00:09:06.544 "memory_domains": [ 00:09:06.544 { 00:09:06.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.544 "dma_device_type": 2 00:09:06.544 } 00:09:06.544 ], 00:09:06.544 "driver_specific": {} 00:09:06.544 } 00:09:06.544 ] 00:09:06.544 20:46:57 -- common/autotest_common.sh@895 -- # return 0 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.544 20:46:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.803 20:46:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:06.803 "name": "Existed_Raid", 00:09:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.804 "strip_size_kb": 64, 00:09:06.804 "state": "configuring", 00:09:06.804 "raid_level": "concat", 00:09:06.804 "superblock": false, 00:09:06.804 "num_base_bdevs": 4, 00:09:06.804 "num_base_bdevs_discovered": 1, 00:09:06.804 "num_base_bdevs_operational": 4, 00:09:06.804 "base_bdevs_list": [ 00:09:06.804 { 00:09:06.804 "name": "BaseBdev1", 00:09:06.804 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:06.804 "is_configured": true, 00:09:06.804 "data_offset": 0, 00:09:06.804 "data_size": 65536 00:09:06.804 }, 00:09:06.804 { 00:09:06.804 "name": "BaseBdev2", 00:09:06.804 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:06.804 "is_configured": false, 00:09:06.804 "data_offset": 0, 00:09:06.804 "data_size": 0 00:09:06.804 }, 00:09:06.804 { 00:09:06.804 "name": "BaseBdev3", 00:09:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.804 "is_configured": false, 00:09:06.804 "data_offset": 0, 00:09:06.804 "data_size": 0 00:09:06.804 }, 00:09:06.804 { 00:09:06.804 "name": "BaseBdev4", 00:09:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.804 "is_configured": false, 00:09:06.804 "data_offset": 0, 00:09:06.804 "data_size": 0 00:09:06.804 } 00:09:06.804 ] 00:09:06.804 }' 00:09:06.804 20:46:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:06.804 20:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:07.063 20:46:58 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:07.063 [2024-04-16 20:46:58.171122] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.063 [2024-04-16 20:46:58.171146] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24f500 name Existed_Raid, state configuring 00:09:07.063 20:46:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:07.063 20:46:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:07.323 [2024-04-16 20:46:58.355159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.323 [2024-04-16 20:46:58.355782] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.323 [2024-04-16 20:46:58.355817] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.323 [2024-04-16 20:46:58.355820] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.323 [2024-04-16 20:46:58.355827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.323 [2024-04-16 20:46:58.355829] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:07.323 [2024-04-16 20:46:58.355835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.323 20:46:58 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:09:07.583 20:46:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:07.583 "name": "Existed_Raid", 00:09:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.583 "strip_size_kb": 64, 00:09:07.583 "state": "configuring", 00:09:07.583 "raid_level": "concat", 00:09:07.583 "superblock": false, 00:09:07.583 "num_base_bdevs": 4, 00:09:07.583 "num_base_bdevs_discovered": 1, 00:09:07.583 "num_base_bdevs_operational": 4, 00:09:07.583 "base_bdevs_list": [ 00:09:07.583 { 00:09:07.583 "name": "BaseBdev1", 00:09:07.583 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:07.583 "is_configured": true, 00:09:07.583 "data_offset": 0, 00:09:07.583 "data_size": 65536 00:09:07.583 }, 00:09:07.583 { 00:09:07.583 "name": "BaseBdev2", 00:09:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.583 "is_configured": false, 00:09:07.583 "data_offset": 0, 00:09:07.583 "data_size": 0 00:09:07.583 }, 00:09:07.583 { 00:09:07.583 "name": "BaseBdev3", 00:09:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.583 "is_configured": false, 00:09:07.583 "data_offset": 0, 00:09:07.583 "data_size": 0 00:09:07.583 }, 00:09:07.583 { 00:09:07.583 "name": "BaseBdev4", 00:09:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.583 "is_configured": false, 00:09:07.583 "data_offset": 0, 00:09:07.583 "data_size": 0 00:09:07.583 } 00:09:07.583 ] 00:09:07.583 }' 00:09:07.583 20:46:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:07.583 20:46:58 -- common/autotest_common.sh@10 -- # set +x 00:09:07.843 20:46:58 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.102 [2024-04-16 20:46:58.983348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.102 BaseBdev2 00:09:08.102 20:46:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:08.102 20:46:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:08.102 20:46:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:08.102 20:46:58 -- common/autotest_common.sh@889 -- # local i 00:09:08.102 20:46:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:08.102 20:46:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:08.102 20:46:58 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:08.102 20:46:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.362 [ 00:09:08.362 { 00:09:08.362 "name": "BaseBdev2", 00:09:08.362 "aliases": [ 00:09:08.362 "782cd521-fc32-11ee-80f8-ef3e42bb1492" 00:09:08.362 ], 00:09:08.362 "product_name": "Malloc disk", 00:09:08.362 "block_size": 512, 00:09:08.362 "num_blocks": 65536, 00:09:08.362 "uuid": "782cd521-fc32-11ee-80f8-ef3e42bb1492", 00:09:08.362 "assigned_rate_limits": { 00:09:08.362 "rw_ios_per_sec": 0, 00:09:08.362 "rw_mbytes_per_sec": 0, 00:09:08.362 "r_mbytes_per_sec": 0, 00:09:08.362 "w_mbytes_per_sec": 0 00:09:08.362 }, 00:09:08.362 "claimed": true, 00:09:08.362 "claim_type": "exclusive_write", 00:09:08.362 "zoned": false, 00:09:08.362 "supported_io_types": { 00:09:08.362 "read": true, 00:09:08.362 "write": true, 00:09:08.362 "unmap": true, 00:09:08.362 "write_zeroes": true, 00:09:08.362 "flush": true, 00:09:08.362 "reset": true, 00:09:08.362 "compare": false, 00:09:08.362 
"compare_and_write": false, 00:09:08.362 "abort": true, 00:09:08.362 "nvme_admin": false, 00:09:08.362 "nvme_io": false 00:09:08.362 }, 00:09:08.362 "memory_domains": [ 00:09:08.362 { 00:09:08.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.362 "dma_device_type": 2 00:09:08.362 } 00:09:08.362 ], 00:09:08.362 "driver_specific": {} 00:09:08.362 } 00:09:08.362 ] 00:09:08.362 20:46:59 -- common/autotest_common.sh@895 -- # return 0 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.362 20:46:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.622 20:46:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:08.622 "name": "Existed_Raid", 00:09:08.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.622 "strip_size_kb": 64, 00:09:08.622 "state": "configuring", 00:09:08.622 "raid_level": "concat", 00:09:08.622 "superblock": false, 00:09:08.622 "num_base_bdevs": 4, 00:09:08.622 "num_base_bdevs_discovered": 2, 00:09:08.622 "num_base_bdevs_operational": 4, 00:09:08.622 "base_bdevs_list": [ 00:09:08.622 { 00:09:08.622 "name": "BaseBdev1", 00:09:08.622 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:08.622 "is_configured": true, 00:09:08.622 "data_offset": 0, 00:09:08.622 "data_size": 65536 00:09:08.622 }, 00:09:08.622 { 00:09:08.622 "name": "BaseBdev2", 00:09:08.622 "uuid": "782cd521-fc32-11ee-80f8-ef3e42bb1492", 00:09:08.622 "is_configured": true, 00:09:08.622 "data_offset": 0, 00:09:08.622 "data_size": 65536 00:09:08.622 }, 00:09:08.622 { 00:09:08.622 "name": "BaseBdev3", 00:09:08.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.622 "is_configured": false, 00:09:08.622 "data_offset": 0, 00:09:08.622 "data_size": 0 00:09:08.622 }, 00:09:08.622 { 00:09:08.622 "name": "BaseBdev4", 00:09:08.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.622 "is_configured": false, 00:09:08.622 "data_offset": 0, 00:09:08.622 "data_size": 0 00:09:08.622 } 00:09:08.622 ] 00:09:08.622 }' 00:09:08.622 20:46:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:08.622 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:09:08.882 20:46:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.882 [2024-04-16 20:46:59.959488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.882 BaseBdev3 00:09:08.882 20:46:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:09:08.882 20:46:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:08.882 20:46:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:08.882 20:46:59 -- common/autotest_common.sh@889 -- # local i 00:09:08.882 20:46:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:08.882 20:46:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:08.882 20:46:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:09.142 20:47:00 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.402 [ 00:09:09.402 { 00:09:09.402 "name": "BaseBdev3", 00:09:09.402 "aliases": [ 00:09:09.402 "78c1c801-fc32-11ee-80f8-ef3e42bb1492" 00:09:09.402 ], 00:09:09.402 "product_name": "Malloc disk", 00:09:09.402 "block_size": 512, 00:09:09.402 "num_blocks": 65536, 00:09:09.402 "uuid": "78c1c801-fc32-11ee-80f8-ef3e42bb1492", 00:09:09.402 "assigned_rate_limits": { 00:09:09.402 "rw_ios_per_sec": 0, 00:09:09.402 "rw_mbytes_per_sec": 0, 00:09:09.402 "r_mbytes_per_sec": 0, 00:09:09.403 "w_mbytes_per_sec": 0 00:09:09.403 }, 00:09:09.403 "claimed": true, 00:09:09.403 "claim_type": "exclusive_write", 00:09:09.403 "zoned": false, 00:09:09.403 "supported_io_types": { 00:09:09.403 "read": true, 00:09:09.403 "write": true, 00:09:09.403 "unmap": true, 00:09:09.403 "write_zeroes": true, 00:09:09.403 "flush": true, 00:09:09.403 "reset": true, 00:09:09.403 "compare": false, 00:09:09.403 "compare_and_write": false, 00:09:09.403 "abort": true, 00:09:09.403 "nvme_admin": false, 00:09:09.403 "nvme_io": false 00:09:09.403 }, 00:09:09.403 "memory_domains": [ 00:09:09.403 { 00:09:09.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.403 "dma_device_type": 2 00:09:09.403 } 00:09:09.403 ], 00:09:09.403 "driver_specific": {} 00:09:09.403 } 00:09:09.403 ] 00:09:09.403 20:47:00 -- common/autotest_common.sh@895 -- # return 0 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:09.403 "name": "Existed_Raid", 00:09:09.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.403 "strip_size_kb": 64, 00:09:09.403 "state": "configuring", 00:09:09.403 "raid_level": "concat", 00:09:09.403 "superblock": false, 
00:09:09.403 "num_base_bdevs": 4, 00:09:09.403 "num_base_bdevs_discovered": 3, 00:09:09.403 "num_base_bdevs_operational": 4, 00:09:09.403 "base_bdevs_list": [ 00:09:09.403 { 00:09:09.403 "name": "BaseBdev1", 00:09:09.403 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:09.403 "is_configured": true, 00:09:09.403 "data_offset": 0, 00:09:09.403 "data_size": 65536 00:09:09.403 }, 00:09:09.403 { 00:09:09.403 "name": "BaseBdev2", 00:09:09.403 "uuid": "782cd521-fc32-11ee-80f8-ef3e42bb1492", 00:09:09.403 "is_configured": true, 00:09:09.403 "data_offset": 0, 00:09:09.403 "data_size": 65536 00:09:09.403 }, 00:09:09.403 { 00:09:09.403 "name": "BaseBdev3", 00:09:09.403 "uuid": "78c1c801-fc32-11ee-80f8-ef3e42bb1492", 00:09:09.403 "is_configured": true, 00:09:09.403 "data_offset": 0, 00:09:09.403 "data_size": 65536 00:09:09.403 }, 00:09:09.403 { 00:09:09.403 "name": "BaseBdev4", 00:09:09.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.403 "is_configured": false, 00:09:09.403 "data_offset": 0, 00:09:09.403 "data_size": 0 00:09:09.403 } 00:09:09.403 ] 00:09:09.403 }' 00:09:09.403 20:47:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:09.403 20:47:00 -- common/autotest_common.sh@10 -- # set +x 00:09:09.663 20:47:00 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:09.922 [2024-04-16 20:47:00.939632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:09.922 [2024-04-16 20:47:00.939653] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c24fa00 00:09:09.922 [2024-04-16 20:47:00.939657] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:09.922 [2024-04-16 20:47:00.939680] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c2b2ec0 00:09:09.922 [2024-04-16 20:47:00.939753] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c24fa00 00:09:09.922 [2024-04-16 20:47:00.939756] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c24fa00 00:09:09.922 [2024-04-16 20:47:00.939780] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.922 BaseBdev4 00:09:09.922 20:47:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:09.922 20:47:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:09.922 20:47:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:09.922 20:47:00 -- common/autotest_common.sh@889 -- # local i 00:09:09.922 20:47:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:09.922 20:47:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:09.922 20:47:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:10.182 20:47:01 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:10.182 [ 00:09:10.182 { 00:09:10.182 "name": "BaseBdev4", 00:09:10.182 "aliases": [ 00:09:10.182 "79575711-fc32-11ee-80f8-ef3e42bb1492" 00:09:10.182 ], 00:09:10.182 "product_name": "Malloc disk", 00:09:10.182 "block_size": 512, 00:09:10.182 "num_blocks": 65536, 00:09:10.182 "uuid": "79575711-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.182 "assigned_rate_limits": { 00:09:10.182 "rw_ios_per_sec": 0, 00:09:10.182 "rw_mbytes_per_sec": 0, 00:09:10.182 
"r_mbytes_per_sec": 0, 00:09:10.182 "w_mbytes_per_sec": 0 00:09:10.182 }, 00:09:10.182 "claimed": true, 00:09:10.182 "claim_type": "exclusive_write", 00:09:10.182 "zoned": false, 00:09:10.182 "supported_io_types": { 00:09:10.182 "read": true, 00:09:10.182 "write": true, 00:09:10.182 "unmap": true, 00:09:10.182 "write_zeroes": true, 00:09:10.182 "flush": true, 00:09:10.182 "reset": true, 00:09:10.182 "compare": false, 00:09:10.182 "compare_and_write": false, 00:09:10.182 "abort": true, 00:09:10.182 "nvme_admin": false, 00:09:10.182 "nvme_io": false 00:09:10.182 }, 00:09:10.182 "memory_domains": [ 00:09:10.182 { 00:09:10.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.182 "dma_device_type": 2 00:09:10.182 } 00:09:10.182 ], 00:09:10.182 "driver_specific": {} 00:09:10.182 } 00:09:10.182 ] 00:09:10.182 20:47:01 -- common/autotest_common.sh@895 -- # return 0 00:09:10.182 20:47:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:10.182 20:47:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:10.182 20:47:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:10.442 "name": "Existed_Raid", 00:09:10.442 "uuid": "79575b88-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.442 "strip_size_kb": 64, 00:09:10.442 "state": "online", 00:09:10.442 "raid_level": "concat", 00:09:10.442 "superblock": false, 00:09:10.442 "num_base_bdevs": 4, 00:09:10.442 "num_base_bdevs_discovered": 4, 00:09:10.442 "num_base_bdevs_operational": 4, 00:09:10.442 "base_bdevs_list": [ 00:09:10.442 { 00:09:10.442 "name": "BaseBdev1", 00:09:10.442 "uuid": "771a21bc-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.442 "is_configured": true, 00:09:10.442 "data_offset": 0, 00:09:10.442 "data_size": 65536 00:09:10.442 }, 00:09:10.442 { 00:09:10.442 "name": "BaseBdev2", 00:09:10.442 "uuid": "782cd521-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.442 "is_configured": true, 00:09:10.442 "data_offset": 0, 00:09:10.442 "data_size": 65536 00:09:10.442 }, 00:09:10.442 { 00:09:10.442 "name": "BaseBdev3", 00:09:10.442 "uuid": "78c1c801-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.442 "is_configured": true, 00:09:10.442 "data_offset": 0, 00:09:10.442 "data_size": 65536 00:09:10.442 }, 00:09:10.442 { 00:09:10.442 "name": "BaseBdev4", 00:09:10.442 "uuid": "79575711-fc32-11ee-80f8-ef3e42bb1492", 00:09:10.442 "is_configured": true, 00:09:10.442 "data_offset": 0, 00:09:10.442 "data_size": 65536 00:09:10.442 } 00:09:10.442 ] 00:09:10.442 }' 00:09:10.442 20:47:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:10.442 20:47:01 -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.702 20:47:01 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:10.961 [2024-04-16 20:47:01.891678] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.961 [2024-04-16 20:47:01.891696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.961 [2024-04-16 20:47:01.891707] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.961 20:47:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.221 20:47:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:11.221 "name": "Existed_Raid", 00:09:11.221 "uuid": "79575b88-fc32-11ee-80f8-ef3e42bb1492", 00:09:11.221 "strip_size_kb": 64, 00:09:11.221 "state": "offline", 00:09:11.221 "raid_level": "concat", 00:09:11.221 "superblock": false, 00:09:11.221 "num_base_bdevs": 4, 00:09:11.221 "num_base_bdevs_discovered": 3, 00:09:11.221 "num_base_bdevs_operational": 3, 00:09:11.221 "base_bdevs_list": [ 00:09:11.221 { 00:09:11.221 "name": null, 00:09:11.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.221 "is_configured": false, 00:09:11.221 "data_offset": 0, 00:09:11.221 "data_size": 65536 00:09:11.221 }, 00:09:11.221 { 00:09:11.221 "name": "BaseBdev2", 00:09:11.221 "uuid": "782cd521-fc32-11ee-80f8-ef3e42bb1492", 00:09:11.221 "is_configured": true, 00:09:11.221 "data_offset": 0, 00:09:11.221 "data_size": 65536 00:09:11.221 }, 00:09:11.221 { 00:09:11.221 "name": "BaseBdev3", 00:09:11.221 "uuid": "78c1c801-fc32-11ee-80f8-ef3e42bb1492", 00:09:11.221 "is_configured": true, 00:09:11.221 "data_offset": 0, 00:09:11.221 "data_size": 65536 00:09:11.221 }, 00:09:11.221 { 00:09:11.221 "name": "BaseBdev4", 00:09:11.221 "uuid": "79575711-fc32-11ee-80f8-ef3e42bb1492", 00:09:11.221 "is_configured": true, 00:09:11.221 "data_offset": 0, 00:09:11.221 "data_size": 65536 00:09:11.221 } 00:09:11.221 ] 00:09:11.221 }' 00:09:11.221 20:47:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:11.221 20:47:02 -- common/autotest_common.sh@10 -- # set +x 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:11.480 
20:47:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.480 20:47:02 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:11.739 [2024-04-16 20:47:02.700459] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.739 20:47:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:11.739 20:47:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:11.739 20:47:02 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.739 20:47:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:11.999 20:47:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:11.999 20:47:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.999 20:47:02 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:11.999 [2024-04-16 20:47:03.065232] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.999 20:47:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:11.999 20:47:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:11.999 20:47:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:11.999 20:47:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.258 20:47:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:12.258 20:47:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.258 20:47:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:12.518 [2024-04-16 20:47:03.441947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:12.518 [2024-04-16 20:47:03.441970] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c24fa00 name Existed_Raid, state offline 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:12.518 20:47:03 -- bdev/bdev_raid.sh@287 -- # killprocess 52172 00:09:12.518 20:47:03 -- common/autotest_common.sh@926 -- # '[' -z 52172 ']' 00:09:12.518 20:47:03 -- common/autotest_common.sh@930 -- # kill -0 52172 00:09:12.518 20:47:03 -- common/autotest_common.sh@931 -- # uname 00:09:12.518 20:47:03 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:12.519 20:47:03 -- common/autotest_common.sh@934 -- # ps -c -o command 52172 00:09:12.519 20:47:03 -- common/autotest_common.sh@934 -- # tail -1 00:09:12.519 20:47:03 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:09:12.519 20:47:03 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:12.519 killing process with pid 52172 00:09:12.519 20:47:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52172' 00:09:12.519 20:47:03 -- common/autotest_common.sh@945 -- # kill 52172 00:09:12.519 [2024-04-16 20:47:03.632763] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.519 [2024-04-16 20:47:03.632793] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.519 20:47:03 -- common/autotest_common.sh@950 -- # wait 52172 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:12.778 00:09:12.778 real 0m8.590s 00:09:12.778 user 0m14.924s 00:09:12.778 sys 0m1.582s 00:09:12.778 20:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.778 20:47:03 -- common/autotest_common.sh@10 -- # set +x 00:09:12.778 ************************************ 00:09:12.778 END TEST raid_state_function_test 00:09:12.778 ************************************ 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:12.778 20:47:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:12.778 20:47:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.778 20:47:03 -- common/autotest_common.sh@10 -- # set +x 00:09:12.778 ************************************ 00:09:12.778 START TEST raid_state_function_test_sb 00:09:12.778 ************************************ 00:09:12.778 20:47:03 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 
00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=52442 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52442' 00:09:12.778 Process raid pid: 52442 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:12.778 20:47:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52442 /var/tmp/spdk-raid.sock 00:09:12.778 20:47:03 -- common/autotest_common.sh@819 -- # '[' -z 52442 ']' 00:09:12.778 20:47:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:12.778 20:47:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:12.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:12.778 20:47:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:12.778 20:47:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:12.778 20:47:03 -- common/autotest_common.sh@10 -- # set +x 00:09:12.778 [2024-04-16 20:47:03.843121] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:09:12.778 [2024-04-16 20:47:03.843499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:13.354 EAL: TSC is not safe to use in SMP mode 00:09:13.354 EAL: TSC is not invariant 00:09:13.354 [2024-04-16 20:47:04.269587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.354 [2024-04-16 20:47:04.360855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.354 [2024-04-16 20:47:04.361256] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.354 [2024-04-16 20:47:04.361264] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.613 20:47:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.613 20:47:04 -- common/autotest_common.sh@852 -- # return 0 00:09:13.613 20:47:04 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:13.872 [2024-04-16 20:47:04.896405] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.873 [2024-04-16 20:47:04.896442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.873 [2024-04-16 20:47:04.896446] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.873 [2024-04-16 20:47:04.896452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.873 [2024-04-16 20:47:04.896454] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.873 [2024-04-16 20:47:04.896459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.873 [2024-04-16 20:47:04.896461] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:13.873 [2024-04-16 20:47:04.896485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.873 20:47:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.133 20:47:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:14.133 "name": "Existed_Raid", 00:09:14.133 "uuid": "7bb31ba8-fc32-11ee-80f8-ef3e42bb1492", 00:09:14.133 "strip_size_kb": 64, 00:09:14.133 "state": "configuring", 00:09:14.133 "raid_level": "concat", 00:09:14.133 "superblock": true, 00:09:14.133 "num_base_bdevs": 4, 00:09:14.133 "num_base_bdevs_discovered": 0, 00:09:14.133 "num_base_bdevs_operational": 4, 00:09:14.133 "base_bdevs_list": [ 00:09:14.133 { 00:09:14.133 "name": "BaseBdev1", 00:09:14.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.133 "is_configured": false, 00:09:14.133 "data_offset": 0, 00:09:14.133 "data_size": 0 00:09:14.133 }, 00:09:14.133 { 00:09:14.133 "name": "BaseBdev2", 00:09:14.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.133 "is_configured": false, 00:09:14.133 "data_offset": 0, 00:09:14.133 "data_size": 0 00:09:14.133 }, 00:09:14.133 { 00:09:14.133 "name": "BaseBdev3", 00:09:14.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.133 "is_configured": false, 00:09:14.133 "data_offset": 0, 00:09:14.133 "data_size": 0 00:09:14.133 }, 00:09:14.133 { 00:09:14.133 "name": "BaseBdev4", 00:09:14.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.133 "is_configured": false, 00:09:14.133 "data_offset": 0, 00:09:14.133 "data_size": 0 00:09:14.133 } 00:09:14.133 ] 00:09:14.133 }' 00:09:14.133 20:47:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:14.133 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:09:14.393 20:47:05 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:14.652 [2024-04-16 20:47:05.532454] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.652 [2024-04-16 20:47:05.532470] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d4a5500 name Existed_Raid, state configuring 00:09:14.652 20:47:05 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:14.652 [2024-04-16 20:47:05.712484] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.652 [2024-04-16 20:47:05.712513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.652 [2024-04-16 20:47:05.712516] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.652 [2024-04-16 20:47:05.712538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.652 [2024-04-16 20:47:05.712541] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.652 [2024-04-16 20:47:05.712546] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.652 [2024-04-16 20:47:05.712548] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.652 [2024-04-16 20:47:05.712554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.652 20:47:05 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.911 [2024-04-16 20:47:05.893251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.911 BaseBdev1 00:09:14.911 20:47:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:14.911 20:47:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:14.911 20:47:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:14.911 20:47:05 -- common/autotest_common.sh@889 -- # local i 00:09:14.911 20:47:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:14.911 20:47:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:14.911 20:47:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:15.170 20:47:06 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.170 [ 00:09:15.170 { 00:09:15.170 "name": "BaseBdev1", 00:09:15.170 "aliases": [ 00:09:15.170 "7c4b1a11-fc32-11ee-80f8-ef3e42bb1492" 00:09:15.170 ], 00:09:15.170 "product_name": "Malloc disk", 00:09:15.170 "block_size": 512, 00:09:15.170 "num_blocks": 65536, 00:09:15.170 "uuid": "7c4b1a11-fc32-11ee-80f8-ef3e42bb1492", 00:09:15.170 "assigned_rate_limits": { 00:09:15.170 "rw_ios_per_sec": 0, 00:09:15.170 "rw_mbytes_per_sec": 0, 00:09:15.170 "r_mbytes_per_sec": 0, 00:09:15.170 "w_mbytes_per_sec": 0 00:09:15.170 }, 00:09:15.170 "claimed": true, 00:09:15.170 "claim_type": "exclusive_write", 00:09:15.170 "zoned": false, 00:09:15.170 "supported_io_types": { 00:09:15.170 "read": true, 00:09:15.170 "write": true, 00:09:15.170 "unmap": true, 00:09:15.170 "write_zeroes": true, 00:09:15.170 "flush": true, 00:09:15.170 "reset": true, 00:09:15.170 "compare": false, 00:09:15.170 "compare_and_write": false, 00:09:15.170 "abort": true, 00:09:15.170 "nvme_admin": false, 00:09:15.170 "nvme_io": false 00:09:15.170 }, 00:09:15.170 "memory_domains": [ 00:09:15.170 { 00:09:15.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.170 "dma_device_type": 2 00:09:15.170 } 00:09:15.170 ], 00:09:15.170 "driver_specific": {} 00:09:15.170 } 00:09:15.170 ] 00:09:15.170 20:47:06 -- common/autotest_common.sh@895 -- # return 0 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:15.170 20:47:06 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.170 20:47:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.429 20:47:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:15.429 "name": "Existed_Raid", 00:09:15.429 "uuid": "7c2fa1de-fc32-11ee-80f8-ef3e42bb1492", 00:09:15.429 "strip_size_kb": 64, 00:09:15.429 "state": "configuring", 00:09:15.429 "raid_level": "concat", 00:09:15.429 "superblock": true, 00:09:15.429 "num_base_bdevs": 4, 00:09:15.429 "num_base_bdevs_discovered": 1, 00:09:15.429 "num_base_bdevs_operational": 4, 00:09:15.429 "base_bdevs_list": [ 00:09:15.429 { 00:09:15.429 "name": "BaseBdev1", 00:09:15.429 "uuid": "7c4b1a11-fc32-11ee-80f8-ef3e42bb1492", 00:09:15.429 "is_configured": true, 00:09:15.429 "data_offset": 2048, 00:09:15.429 "data_size": 63488 00:09:15.429 }, 00:09:15.429 { 00:09:15.429 "name": "BaseBdev2", 00:09:15.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.429 "is_configured": false, 00:09:15.429 "data_offset": 0, 00:09:15.429 "data_size": 0 00:09:15.429 }, 00:09:15.429 { 00:09:15.429 "name": "BaseBdev3", 00:09:15.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.429 "is_configured": false, 00:09:15.429 "data_offset": 0, 00:09:15.429 "data_size": 0 00:09:15.429 }, 00:09:15.429 { 00:09:15.429 "name": "BaseBdev4", 00:09:15.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.429 "is_configured": false, 00:09:15.430 "data_offset": 0, 00:09:15.430 "data_size": 0 00:09:15.430 } 00:09:15.430 ] 00:09:15.430 }' 00:09:15.430 20:47:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:15.430 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:15.688 20:47:06 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.947 [2024-04-16 20:47:06.880695] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.947 [2024-04-16 20:47:06.880710] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d4a5500 name Existed_Raid, state configuring 00:09:15.947 20:47:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:15.947 20:47:06 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:16.207 20:47:07 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.207 BaseBdev1 00:09:16.207 20:47:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:16.207 20:47:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:16.207 20:47:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:16.207 20:47:07 -- common/autotest_common.sh@889 -- # local i 00:09:16.207 20:47:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:16.207 20:47:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:16.207 20:47:07 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:16.466 20:47:07 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.726 [ 00:09:16.726 { 00:09:16.726 "name": "BaseBdev1", 00:09:16.726 "aliases": [ 00:09:16.726 "7d184d9d-fc32-11ee-80f8-ef3e42bb1492" 00:09:16.726 ], 00:09:16.726 "product_name": "Malloc disk", 00:09:16.726 "block_size": 512, 00:09:16.726 "num_blocks": 65536, 00:09:16.726 "uuid": "7d184d9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:16.726 "assigned_rate_limits": { 00:09:16.726 "rw_ios_per_sec": 0, 00:09:16.726 "rw_mbytes_per_sec": 0, 00:09:16.726 "r_mbytes_per_sec": 0, 00:09:16.726 "w_mbytes_per_sec": 0 00:09:16.726 }, 00:09:16.726 "claimed": false, 00:09:16.726 "zoned": false, 00:09:16.726 "supported_io_types": { 00:09:16.726 "read": true, 00:09:16.726 "write": true, 00:09:16.726 "unmap": true, 00:09:16.726 "write_zeroes": true, 00:09:16.726 "flush": true, 00:09:16.726 "reset": true, 00:09:16.726 "compare": false, 00:09:16.726 "compare_and_write": false, 00:09:16.726 "abort": true, 00:09:16.726 "nvme_admin": false, 00:09:16.726 "nvme_io": false 00:09:16.726 }, 00:09:16.726 "memory_domains": [ 00:09:16.726 { 00:09:16.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.726 "dma_device_type": 2 00:09:16.726 } 00:09:16.726 ], 00:09:16.726 "driver_specific": {} 00:09:16.726 } 00:09:16.726 ] 00:09:16.726 20:47:07 -- common/autotest_common.sh@895 -- # return 0 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:16.726 [2024-04-16 20:47:07.777388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.726 [2024-04-16 20:47:07.777803] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.726 [2024-04-16 20:47:07.777838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.726 [2024-04-16 20:47:07.777842] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.726 [2024-04-16 20:47:07.777848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.726 [2024-04-16 20:47:07.777861] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:16.726 [2024-04-16 20:47:07.777867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:16.726 20:47:07 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.726 20:47:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.986 20:47:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:16.986 "name": "Existed_Raid", 00:09:16.986 "uuid": "7d6ab619-fc32-11ee-80f8-ef3e42bb1492", 00:09:16.986 "strip_size_kb": 64, 00:09:16.986 "state": "configuring", 00:09:16.986 "raid_level": "concat", 00:09:16.986 "superblock": true, 00:09:16.986 "num_base_bdevs": 4, 00:09:16.986 "num_base_bdevs_discovered": 1, 00:09:16.986 "num_base_bdevs_operational": 4, 00:09:16.986 "base_bdevs_list": [ 00:09:16.986 { 00:09:16.986 "name": "BaseBdev1", 00:09:16.986 "uuid": "7d184d9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:16.986 "is_configured": true, 00:09:16.986 "data_offset": 2048, 00:09:16.986 "data_size": 63488 00:09:16.986 }, 00:09:16.986 { 00:09:16.986 "name": "BaseBdev2", 00:09:16.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.986 "is_configured": false, 00:09:16.986 "data_offset": 0, 00:09:16.986 "data_size": 0 00:09:16.986 }, 00:09:16.986 { 00:09:16.986 "name": "BaseBdev3", 00:09:16.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.986 "is_configured": false, 00:09:16.986 "data_offset": 0, 00:09:16.986 "data_size": 0 00:09:16.986 }, 00:09:16.986 { 00:09:16.986 "name": "BaseBdev4", 00:09:16.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.986 "is_configured": false, 00:09:16.986 "data_offset": 0, 00:09:16.986 "data_size": 0 00:09:16.986 } 00:09:16.986 ] 00:09:16.986 }' 00:09:16.986 20:47:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:16.986 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:09:17.246 20:47:08 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.505 [2024-04-16 20:47:08.409545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.505 BaseBdev2 00:09:17.505 20:47:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:17.505 20:47:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:17.505 20:47:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:17.505 20:47:08 -- common/autotest_common.sh@889 -- # local i 00:09:17.505 20:47:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:17.505 20:47:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:17.505 20:47:08 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.505 20:47:08 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.765 [ 00:09:17.765 { 00:09:17.765 "name": "BaseBdev2", 00:09:17.765 "aliases": [ 00:09:17.765 "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492" 00:09:17.765 ], 00:09:17.765 "product_name": "Malloc disk", 00:09:17.765 "block_size": 512, 00:09:17.765 "num_blocks": 65536, 00:09:17.765 "uuid": "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492", 00:09:17.765 "assigned_rate_limits": { 00:09:17.765 "rw_ios_per_sec": 0, 00:09:17.765 "rw_mbytes_per_sec": 0, 00:09:17.765 "r_mbytes_per_sec": 0, 00:09:17.765 "w_mbytes_per_sec": 0 00:09:17.765 }, 00:09:17.765 "claimed": true, 
00:09:17.765 "claim_type": "exclusive_write", 00:09:17.765 "zoned": false, 00:09:17.765 "supported_io_types": { 00:09:17.765 "read": true, 00:09:17.765 "write": true, 00:09:17.765 "unmap": true, 00:09:17.765 "write_zeroes": true, 00:09:17.765 "flush": true, 00:09:17.765 "reset": true, 00:09:17.765 "compare": false, 00:09:17.765 "compare_and_write": false, 00:09:17.765 "abort": true, 00:09:17.765 "nvme_admin": false, 00:09:17.765 "nvme_io": false 00:09:17.765 }, 00:09:17.765 "memory_domains": [ 00:09:17.765 { 00:09:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.765 "dma_device_type": 2 00:09:17.765 } 00:09:17.765 ], 00:09:17.765 "driver_specific": {} 00:09:17.765 } 00:09:17.765 ] 00:09:17.765 20:47:08 -- common/autotest_common.sh@895 -- # return 0 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.765 20:47:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.024 20:47:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:18.024 "name": "Existed_Raid", 00:09:18.024 "uuid": "7d6ab619-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.024 "strip_size_kb": 64, 00:09:18.024 "state": "configuring", 00:09:18.024 "raid_level": "concat", 00:09:18.024 "superblock": true, 00:09:18.024 "num_base_bdevs": 4, 00:09:18.024 "num_base_bdevs_discovered": 2, 00:09:18.024 "num_base_bdevs_operational": 4, 00:09:18.024 "base_bdevs_list": [ 00:09:18.024 { 00:09:18.024 "name": "BaseBdev1", 00:09:18.024 "uuid": "7d184d9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.024 "is_configured": true, 00:09:18.024 "data_offset": 2048, 00:09:18.024 "data_size": 63488 00:09:18.024 }, 00:09:18.024 { 00:09:18.024 "name": "BaseBdev2", 00:09:18.024 "uuid": "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.024 "is_configured": true, 00:09:18.024 "data_offset": 2048, 00:09:18.024 "data_size": 63488 00:09:18.024 }, 00:09:18.024 { 00:09:18.024 "name": "BaseBdev3", 00:09:18.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.024 "is_configured": false, 00:09:18.024 "data_offset": 0, 00:09:18.024 "data_size": 0 00:09:18.024 }, 00:09:18.024 { 00:09:18.024 "name": "BaseBdev4", 00:09:18.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.024 "is_configured": false, 00:09:18.024 "data_offset": 0, 00:09:18.024 "data_size": 0 00:09:18.024 } 00:09:18.024 ] 00:09:18.024 }' 00:09:18.024 20:47:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:18.025 20:47:08 -- common/autotest_common.sh@10 -- # set +x 00:09:18.284 20:47:09 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.284 [2024-04-16 20:47:09.361643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.284 BaseBdev3 00:09:18.284 20:47:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:18.284 20:47:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:18.284 20:47:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:18.284 20:47:09 -- common/autotest_common.sh@889 -- # local i 00:09:18.284 20:47:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:18.284 20:47:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:18.284 20:47:09 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.544 20:47:09 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.803 [ 00:09:18.803 { 00:09:18.804 "name": "BaseBdev3", 00:09:18.804 "aliases": [ 00:09:18.804 "7e5c70d8-fc32-11ee-80f8-ef3e42bb1492" 00:09:18.804 ], 00:09:18.804 "product_name": "Malloc disk", 00:09:18.804 "block_size": 512, 00:09:18.804 "num_blocks": 65536, 00:09:18.804 "uuid": "7e5c70d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.804 "assigned_rate_limits": { 00:09:18.804 "rw_ios_per_sec": 0, 00:09:18.804 "rw_mbytes_per_sec": 0, 00:09:18.804 "r_mbytes_per_sec": 0, 00:09:18.804 "w_mbytes_per_sec": 0 00:09:18.804 }, 00:09:18.804 "claimed": true, 00:09:18.804 "claim_type": "exclusive_write", 00:09:18.804 "zoned": false, 00:09:18.804 "supported_io_types": { 00:09:18.804 "read": true, 00:09:18.804 "write": true, 00:09:18.804 "unmap": true, 00:09:18.804 "write_zeroes": true, 00:09:18.804 "flush": true, 00:09:18.804 "reset": true, 00:09:18.804 "compare": false, 00:09:18.804 "compare_and_write": false, 00:09:18.804 "abort": true, 00:09:18.804 "nvme_admin": false, 00:09:18.804 "nvme_io": false 00:09:18.804 }, 00:09:18.804 "memory_domains": [ 00:09:18.804 { 00:09:18.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.804 "dma_device_type": 2 00:09:18.804 } 00:09:18.804 ], 00:09:18.804 "driver_specific": {} 00:09:18.804 } 00:09:18.804 ] 00:09:18.804 20:47:09 -- common/autotest_common.sh@895 -- # return 0 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:18.804 "name": "Existed_Raid", 00:09:18.804 "uuid": "7d6ab619-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.804 "strip_size_kb": 64, 00:09:18.804 "state": "configuring", 00:09:18.804 "raid_level": "concat", 00:09:18.804 "superblock": true, 00:09:18.804 "num_base_bdevs": 4, 00:09:18.804 "num_base_bdevs_discovered": 3, 00:09:18.804 "num_base_bdevs_operational": 4, 00:09:18.804 "base_bdevs_list": [ 00:09:18.804 { 00:09:18.804 "name": "BaseBdev1", 00:09:18.804 "uuid": "7d184d9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.804 "is_configured": true, 00:09:18.804 "data_offset": 2048, 00:09:18.804 "data_size": 63488 00:09:18.804 }, 00:09:18.804 { 00:09:18.804 "name": "BaseBdev2", 00:09:18.804 "uuid": "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.804 "is_configured": true, 00:09:18.804 "data_offset": 2048, 00:09:18.804 "data_size": 63488 00:09:18.804 }, 00:09:18.804 { 00:09:18.804 "name": "BaseBdev3", 00:09:18.804 "uuid": "7e5c70d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:18.804 "is_configured": true, 00:09:18.804 "data_offset": 2048, 00:09:18.804 "data_size": 63488 00:09:18.804 }, 00:09:18.804 { 00:09:18.804 "name": "BaseBdev4", 00:09:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.804 "is_configured": false, 00:09:18.804 "data_offset": 0, 00:09:18.804 "data_size": 0 00:09:18.804 } 00:09:18.804 ] 00:09:18.804 }' 00:09:18.804 20:47:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:18.804 20:47:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.063 20:47:10 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:19.323 [2024-04-16 20:47:10.345750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:19.323 [2024-04-16 20:47:10.345809] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d4a5a00 00:09:19.323 [2024-04-16 20:47:10.345813] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:19.323 [2024-04-16 20:47:10.345828] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d508ec0 00:09:19.323 [2024-04-16 20:47:10.345860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d4a5a00 00:09:19.323 [2024-04-16 20:47:10.345863] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d4a5a00 00:09:19.323 [2024-04-16 20:47:10.345877] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.323 BaseBdev4 00:09:19.323 20:47:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:19.323 20:47:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:19.323 20:47:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:19.323 20:47:10 -- common/autotest_common.sh@889 -- # local i 00:09:19.323 20:47:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:19.323 20:47:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:19.323 20:47:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:19.583 20:47:10 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:19.583 [ 00:09:19.583 { 00:09:19.583 "name": "BaseBdev4", 00:09:19.583 "aliases": [ 00:09:19.583 
"7ef29aee-fc32-11ee-80f8-ef3e42bb1492" 00:09:19.583 ], 00:09:19.583 "product_name": "Malloc disk", 00:09:19.583 "block_size": 512, 00:09:19.583 "num_blocks": 65536, 00:09:19.583 "uuid": "7ef29aee-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.583 "assigned_rate_limits": { 00:09:19.583 "rw_ios_per_sec": 0, 00:09:19.583 "rw_mbytes_per_sec": 0, 00:09:19.583 "r_mbytes_per_sec": 0, 00:09:19.583 "w_mbytes_per_sec": 0 00:09:19.583 }, 00:09:19.583 "claimed": true, 00:09:19.583 "claim_type": "exclusive_write", 00:09:19.583 "zoned": false, 00:09:19.583 "supported_io_types": { 00:09:19.583 "read": true, 00:09:19.583 "write": true, 00:09:19.583 "unmap": true, 00:09:19.583 "write_zeroes": true, 00:09:19.583 "flush": true, 00:09:19.583 "reset": true, 00:09:19.583 "compare": false, 00:09:19.583 "compare_and_write": false, 00:09:19.583 "abort": true, 00:09:19.583 "nvme_admin": false, 00:09:19.583 "nvme_io": false 00:09:19.583 }, 00:09:19.583 "memory_domains": [ 00:09:19.583 { 00:09:19.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.583 "dma_device_type": 2 00:09:19.583 } 00:09:19.583 ], 00:09:19.583 "driver_specific": {} 00:09:19.583 } 00:09:19.583 ] 00:09:19.583 20:47:10 -- common/autotest_common.sh@895 -- # return 0 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.584 20:47:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.843 20:47:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:19.843 "name": "Existed_Raid", 00:09:19.843 "uuid": "7d6ab619-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.843 "strip_size_kb": 64, 00:09:19.843 "state": "online", 00:09:19.843 "raid_level": "concat", 00:09:19.843 "superblock": true, 00:09:19.843 "num_base_bdevs": 4, 00:09:19.843 "num_base_bdevs_discovered": 4, 00:09:19.843 "num_base_bdevs_operational": 4, 00:09:19.843 "base_bdevs_list": [ 00:09:19.843 { 00:09:19.843 "name": "BaseBdev1", 00:09:19.843 "uuid": "7d184d9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.843 "is_configured": true, 00:09:19.843 "data_offset": 2048, 00:09:19.843 "data_size": 63488 00:09:19.843 }, 00:09:19.843 { 00:09:19.843 "name": "BaseBdev2", 00:09:19.843 "uuid": "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.843 "is_configured": true, 00:09:19.843 "data_offset": 2048, 00:09:19.843 "data_size": 63488 00:09:19.843 }, 00:09:19.843 { 00:09:19.843 "name": "BaseBdev3", 00:09:19.843 "uuid": "7e5c70d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.843 "is_configured": true, 00:09:19.843 "data_offset": 2048, 00:09:19.843 "data_size": 63488 00:09:19.843 
}, 00:09:19.843 { 00:09:19.843 "name": "BaseBdev4", 00:09:19.843 "uuid": "7ef29aee-fc32-11ee-80f8-ef3e42bb1492", 00:09:19.843 "is_configured": true, 00:09:19.843 "data_offset": 2048, 00:09:19.843 "data_size": 63488 00:09:19.843 } 00:09:19.843 ] 00:09:19.843 }' 00:09:19.843 20:47:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:19.843 20:47:10 -- common/autotest_common.sh@10 -- # set +x 00:09:20.103 20:47:11 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:20.362 [2024-04-16 20:47:11.329836] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.362 [2024-04-16 20:47:11.329852] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.362 [2024-04-16 20:47:11.329862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:20.362 20:47:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.363 20:47:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.622 20:47:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:20.622 "name": "Existed_Raid", 00:09:20.622 "uuid": "7d6ab619-fc32-11ee-80f8-ef3e42bb1492", 00:09:20.622 "strip_size_kb": 64, 00:09:20.622 "state": "offline", 00:09:20.622 "raid_level": "concat", 00:09:20.622 "superblock": true, 00:09:20.622 "num_base_bdevs": 4, 00:09:20.622 "num_base_bdevs_discovered": 3, 00:09:20.622 "num_base_bdevs_operational": 3, 00:09:20.622 "base_bdevs_list": [ 00:09:20.622 { 00:09:20.622 "name": null, 00:09:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.622 "is_configured": false, 00:09:20.622 "data_offset": 2048, 00:09:20.622 "data_size": 63488 00:09:20.622 }, 00:09:20.622 { 00:09:20.622 "name": "BaseBdev2", 00:09:20.622 "uuid": "7dcb28aa-fc32-11ee-80f8-ef3e42bb1492", 00:09:20.622 "is_configured": true, 00:09:20.622 "data_offset": 2048, 00:09:20.622 "data_size": 63488 00:09:20.622 }, 00:09:20.622 { 00:09:20.622 "name": "BaseBdev3", 00:09:20.622 "uuid": "7e5c70d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:20.622 "is_configured": true, 00:09:20.622 "data_offset": 2048, 00:09:20.622 "data_size": 63488 00:09:20.622 }, 00:09:20.622 { 00:09:20.622 "name": "BaseBdev4", 00:09:20.622 "uuid": "7ef29aee-fc32-11ee-80f8-ef3e42bb1492", 
00:09:20.622 "is_configured": true, 00:09:20.622 "data_offset": 2048, 00:09:20.622 "data_size": 63488 00:09:20.622 } 00:09:20.622 ] 00:09:20.622 }' 00:09:20.622 20:47:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:20.622 20:47:11 -- common/autotest_common.sh@10 -- # set +x 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.881 20:47:11 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:21.141 [2024-04-16 20:47:12.138571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.141 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.141 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.141 20:47:12 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.141 20:47:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:21.401 [2024-04-16 20:47:12.479245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.401 20:47:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:21.660 20:47:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:21.660 20:47:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.660 20:47:12 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:21.920 [2024-04-16 20:47:12.835933] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:21.920 [2024-04-16 20:47:12.835949] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d4a5a00 name Existed_Raid, state offline 00:09:21.920 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.920 20:47:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.920 20:47:12 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.920 20:47:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.920 20:47:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:21.920 20:47:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:21.920 20:47:13 -- bdev/bdev_raid.sh@287 -- # killprocess 52442 00:09:21.920 20:47:13 -- common/autotest_common.sh@926 -- # '[' -z 52442 ']' 00:09:21.920 20:47:13 -- common/autotest_common.sh@930 -- # kill -0 52442 00:09:21.920 
20:47:13 -- common/autotest_common.sh@931 -- # uname 00:09:21.920 20:47:13 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:21.920 20:47:13 -- common/autotest_common.sh@934 -- # ps -c -o command 52442 00:09:21.920 20:47:13 -- common/autotest_common.sh@934 -- # tail -1 00:09:22.179 20:47:13 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:22.179 20:47:13 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:22.179 killing process with pid 52442 00:09:22.179 20:47:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52442' 00:09:22.179 20:47:13 -- common/autotest_common.sh@945 -- # kill 52442 00:09:22.179 [2024-04-16 20:47:13.049977] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.179 [2024-04-16 20:47:13.050009] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.179 20:47:13 -- common/autotest_common.sh@950 -- # wait 52442 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:22.179 00:09:22.179 real 0m9.366s 00:09:22.179 user 0m16.244s 00:09:22.179 sys 0m1.767s 00:09:22.179 20:47:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.179 20:47:13 -- common/autotest_common.sh@10 -- # set +x 00:09:22.179 ************************************ 00:09:22.179 END TEST raid_state_function_test_sb 00:09:22.179 ************************************ 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:22.179 20:47:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:22.179 20:47:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.179 20:47:13 -- common/autotest_common.sh@10 -- # set +x 00:09:22.179 ************************************ 00:09:22.179 START TEST raid_superblock_test 00:09:22.179 ************************************ 00:09:22.179 20:47:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:22.179 20:47:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=52715 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:22.180 20:47:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 52715 /var/tmp/spdk-raid.sock 00:09:22.180 20:47:13 -- common/autotest_common.sh@819 -- # '[' -z 52715 ']' 
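[Editor's note: raid_superblock_test, started here as pid 52715, builds its base devices as malloc bdevs wrapped in passthru bdevs (pt1..pt4) with fixed UUIDs, so the superblock written through them can be located again on re-examine. A sketch of the per-device setup that the entries below replay, with names and UUIDs as in this run:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001
  # repeated for malloc2..malloc4 / pt2..pt4 with UUIDs ...0002 through ...0004

The four pt bdevs are then combined with bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s, and the test verifies that raid_bdev1 comes up "online" with all four members discovered.]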
00:09:22.180 20:47:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:22.180 20:47:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:22.180 20:47:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:22.180 20:47:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.180 20:47:13 -- common/autotest_common.sh@10 -- # set +x 00:09:22.180 [2024-04-16 20:47:13.263498] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:09:22.180 [2024-04-16 20:47:13.263788] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:22.752 EAL: TSC is not safe to use in SMP mode 00:09:22.752 EAL: TSC is not invariant 00:09:22.752 [2024-04-16 20:47:13.693461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.752 [2024-04-16 20:47:13.782560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.752 [2024-04-16 20:47:13.782965] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.752 [2024-04-16 20:47:13.782990] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.334 20:47:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.335 20:47:14 -- common/autotest_common.sh@852 -- # return 0 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:23.335 malloc1 00:09:23.335 20:47:14 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.596 [2024-04-16 20:47:14.490118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.596 [2024-04-16 20:47:14.490174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.596 [2024-04-16 20:47:14.490660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9d780 00:09:23.596 [2024-04-16 20:47:14.490685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.596 [2024-04-16 20:47:14.491315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.596 [2024-04-16 20:47:14.491341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.596 pt1 00:09:23.596 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:23.596 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc2 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:23.597 malloc2 00:09:23.597 20:47:14 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.856 [2024-04-16 20:47:14.850161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.856 [2024-04-16 20:47:14.850197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.856 [2024-04-16 20:47:14.850219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9dc80 00:09:23.856 [2024-04-16 20:47:14.850225] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.856 [2024-04-16 20:47:14.850651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.856 [2024-04-16 20:47:14.850676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.856 pt2 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.856 20:47:14 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:24.116 malloc3 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.116 [2024-04-16 20:47:15.194198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.116 [2024-04-16 20:47:15.194235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.116 [2024-04-16 20:47:15.194256] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9e180 00:09:24.116 [2024-04-16 20:47:15.194261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.116 [2024-04-16 20:47:15.194678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.116 [2024-04-16 20:47:15.194703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.116 pt3 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc4 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.116 20:47:15 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:24.376 malloc4 00:09:24.376 20:47:15 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:24.636 [2024-04-16 20:47:15.530245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:24.636 [2024-04-16 20:47:15.530281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.636 [2024-04-16 20:47:15.530300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9e680 00:09:24.636 [2024-04-16 20:47:15.530306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.636 [2024-04-16 20:47:15.530727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.636 [2024-04-16 20:47:15.530757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:24.636 pt4 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:24.636 [2024-04-16 20:47:15.710278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.636 [2024-04-16 20:47:15.710658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.636 [2024-04-16 20:47:15.710677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.636 [2024-04-16 20:47:15.710685] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:24.636 [2024-04-16 20:47:15.710731] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa9e900 00:09:24.636 [2024-04-16 20:47:15.710736] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:24.636 [2024-04-16 20:47:15.710766] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab00e20 00:09:24.636 [2024-04-16 20:47:15.710815] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa9e900 00:09:24.636 [2024-04-16 20:47:15.710818] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa9e900 00:09:24.636 [2024-04-16 20:47:15.710835] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.636 20:47:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.896 20:47:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:24.896 "name": "raid_bdev1", 00:09:24.896 "uuid": "82252c9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:24.896 "strip_size_kb": 64, 00:09:24.896 "state": "online", 00:09:24.896 "raid_level": "concat", 00:09:24.896 "superblock": true, 00:09:24.896 "num_base_bdevs": 4, 00:09:24.896 "num_base_bdevs_discovered": 4, 00:09:24.896 "num_base_bdevs_operational": 4, 00:09:24.896 "base_bdevs_list": [ 00:09:24.896 { 00:09:24.896 "name": "pt1", 00:09:24.896 "uuid": "225568a8-2f40-b752-9b0d-40449a4ee698", 00:09:24.896 "is_configured": true, 00:09:24.896 "data_offset": 2048, 00:09:24.896 "data_size": 63488 00:09:24.896 }, 00:09:24.896 { 00:09:24.896 "name": "pt2", 00:09:24.896 "uuid": "2297f392-e790-ff55-b68b-6bb2d5538e9a", 00:09:24.896 "is_configured": true, 00:09:24.896 "data_offset": 2048, 00:09:24.896 "data_size": 63488 00:09:24.896 }, 00:09:24.896 { 00:09:24.896 "name": "pt3", 00:09:24.896 "uuid": "b56bf794-4830-405e-bfcd-eb4bf150edd8", 00:09:24.896 "is_configured": true, 00:09:24.896 "data_offset": 2048, 00:09:24.896 "data_size": 63488 00:09:24.896 }, 00:09:24.896 { 00:09:24.896 "name": "pt4", 00:09:24.896 "uuid": "7823cd20-f6af-5051-8e2f-ca3683a8971e", 00:09:24.896 "is_configured": true, 00:09:24.896 "data_offset": 2048, 00:09:24.896 "data_size": 63488 00:09:24.896 } 00:09:24.896 ] 00:09:24.896 }' 00:09:24.896 20:47:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:24.896 20:47:15 -- common/autotest_common.sh@10 -- # set +x 00:09:25.155 20:47:16 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:25.155 20:47:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:25.415 [2024-04-16 20:47:16.338356] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.415 20:47:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=82252c9d-fc32-11ee-80f8-ef3e42bb1492 00:09:25.415 20:47:16 -- bdev/bdev_raid.sh@380 -- # '[' -z 82252c9d-fc32-11ee-80f8-ef3e42bb1492 ']' 00:09:25.415 20:47:16 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:25.415 [2024-04-16 20:47:16.502346] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.415 [2024-04-16 20:47:16.502359] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.415 [2024-04-16 20:47:16.502370] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.415 [2024-04-16 20:47:16.502396] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.415 [2024-04-16 20:47:16.502399] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa9e900 name raid_bdev1, state offline 00:09:25.415 
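[Editor's note: the delete above is followed by a check that no raid bdev survives, then by teardown of the passthru layer. A sketch of that sequence, matching the RPCs replayed below:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  # expect empty output: the deleted array must no longer be reported
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]'
  for i in 1 2 3 4; do
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt$i
  done

The DEBUG lines above confirm the path taken: with zero base bdevs left, raid_bdev_cleanup frees the bdev directly from the offline state.]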
20:47:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:25.415 20:47:16 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.674 20:47:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:25.674 20:47:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:25.674 20:47:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.674 20:47:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:25.933 20:47:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.933 20:47:16 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:25.933 20:47:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.933 20:47:17 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:26.193 20:47:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:26.193 20:47:17 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:26.453 20:47:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:26.453 20:47:17 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:26.453 20:47:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:26.453 20:47:17 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:26.453 20:47:17 -- common/autotest_common.sh@640 -- # local es=0 00:09:26.453 20:47:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:26.453 20:47:17 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.453 20:47:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:26.453 20:47:17 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.713 20:47:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:26.713 20:47:17 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.713 20:47:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:26.713 20:47:17 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.713 20:47:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:26.713 20:47:17 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:26.713 [2024-04-16 20:47:17.750497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:26.713 [2024-04-16 20:47:17.750956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:26.713 [2024-04-16 20:47:17.750974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
00:09:26.713 [2024-04-16 20:47:17.750980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:26.713 [2024-04-16 20:47:17.750990] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:26.713 [2024-04-16 20:47:17.751018] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:26.713 [2024-04-16 20:47:17.751025] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:26.713 [2024-04-16 20:47:17.751031] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:26.713 [2024-04-16 20:47:17.751037] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.713 [2024-04-16 20:47:17.751041] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa9e680 name raid_bdev1, state configuring 00:09:26.713 request: 00:09:26.713 { 00:09:26.713 "name": "raid_bdev1", 00:09:26.713 "raid_level": "concat", 00:09:26.713 "base_bdevs": [ 00:09:26.713 "malloc1", 00:09:26.713 "malloc2", 00:09:26.713 "malloc3", 00:09:26.713 "malloc4" 00:09:26.713 ], 00:09:26.713 "superblock": false, 00:09:26.713 "strip_size_kb": 64, 00:09:26.713 "method": "bdev_raid_create", 00:09:26.713 "req_id": 1 00:09:26.713 } 00:09:26.713 Got JSON-RPC error response 00:09:26.713 response: 00:09:26.713 { 00:09:26.713 "code": -17, 00:09:26.713 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:26.713 } 00:09:26.713 20:47:17 -- common/autotest_common.sh@643 -- # es=1 00:09:26.713 20:47:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:26.713 20:47:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:26.713 20:47:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:26.713 20:47:17 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.713 20:47:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:26.973 20:47:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:26.973 20:47:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:26.973 20:47:17 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.973 [2024-04-16 20:47:18.090535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.973 [2024-04-16 20:47:18.090567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.973 [2024-04-16 20:47:18.090590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9e180 00:09:26.973 [2024-04-16 20:47:18.090595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.973 [2024-04-16 20:47:18.091091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.973 [2024-04-16 20:47:18.091119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.973 [2024-04-16 20:47:18.091136] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:26.973 [2024-04-16 20:47:18.091145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.973 pt1 00:09:27.232 20:47:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 
4 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:27.233 "name": "raid_bdev1", 00:09:27.233 "uuid": "82252c9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:27.233 "strip_size_kb": 64, 00:09:27.233 "state": "configuring", 00:09:27.233 "raid_level": "concat", 00:09:27.233 "superblock": true, 00:09:27.233 "num_base_bdevs": 4, 00:09:27.233 "num_base_bdevs_discovered": 1, 00:09:27.233 "num_base_bdevs_operational": 4, 00:09:27.233 "base_bdevs_list": [ 00:09:27.233 { 00:09:27.233 "name": "pt1", 00:09:27.233 "uuid": "225568a8-2f40-b752-9b0d-40449a4ee698", 00:09:27.233 "is_configured": true, 00:09:27.233 "data_offset": 2048, 00:09:27.233 "data_size": 63488 00:09:27.233 }, 00:09:27.233 { 00:09:27.233 "name": null, 00:09:27.233 "uuid": "2297f392-e790-ff55-b68b-6bb2d5538e9a", 00:09:27.233 "is_configured": false, 00:09:27.233 "data_offset": 2048, 00:09:27.233 "data_size": 63488 00:09:27.233 }, 00:09:27.233 { 00:09:27.233 "name": null, 00:09:27.233 "uuid": "b56bf794-4830-405e-bfcd-eb4bf150edd8", 00:09:27.233 "is_configured": false, 00:09:27.233 "data_offset": 2048, 00:09:27.233 "data_size": 63488 00:09:27.233 }, 00:09:27.233 { 00:09:27.233 "name": null, 00:09:27.233 "uuid": "7823cd20-f6af-5051-8e2f-ca3683a8971e", 00:09:27.233 "is_configured": false, 00:09:27.233 "data_offset": 2048, 00:09:27.233 "data_size": 63488 00:09:27.233 } 00:09:27.233 ] 00:09:27.233 }' 00:09:27.233 20:47:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:27.233 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:09:27.492 20:47:18 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:27.492 20:47:18 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.751 [2024-04-16 20:47:18.718606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.751 [2024-04-16 20:47:18.718638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.751 [2024-04-16 20:47:18.718677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9d780 00:09:27.751 [2024-04-16 20:47:18.718682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.751 [2024-04-16 20:47:18.718749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.751 [2024-04-16 20:47:18.718767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.751 [2024-04-16 20:47:18.718780] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:09:27.751 [2024-04-16 20:47:18.718785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.751 pt2 00:09:27.751 20:47:18 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:28.011 [2024-04-16 20:47:18.898627] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.011 20:47:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.011 20:47:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:28.011 "name": "raid_bdev1", 00:09:28.011 "uuid": "82252c9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:28.011 "strip_size_kb": 64, 00:09:28.011 "state": "configuring", 00:09:28.011 "raid_level": "concat", 00:09:28.011 "superblock": true, 00:09:28.011 "num_base_bdevs": 4, 00:09:28.011 "num_base_bdevs_discovered": 1, 00:09:28.011 "num_base_bdevs_operational": 4, 00:09:28.011 "base_bdevs_list": [ 00:09:28.011 { 00:09:28.011 "name": "pt1", 00:09:28.011 "uuid": "225568a8-2f40-b752-9b0d-40449a4ee698", 00:09:28.011 "is_configured": true, 00:09:28.011 "data_offset": 2048, 00:09:28.011 "data_size": 63488 00:09:28.011 }, 00:09:28.011 { 00:09:28.011 "name": null, 00:09:28.011 "uuid": "2297f392-e790-ff55-b68b-6bb2d5538e9a", 00:09:28.011 "is_configured": false, 00:09:28.011 "data_offset": 2048, 00:09:28.011 "data_size": 63488 00:09:28.011 }, 00:09:28.011 { 00:09:28.011 "name": null, 00:09:28.011 "uuid": "b56bf794-4830-405e-bfcd-eb4bf150edd8", 00:09:28.011 "is_configured": false, 00:09:28.011 "data_offset": 2048, 00:09:28.011 "data_size": 63488 00:09:28.011 }, 00:09:28.011 { 00:09:28.011 "name": null, 00:09:28.011 "uuid": "7823cd20-f6af-5051-8e2f-ca3683a8971e", 00:09:28.012 "is_configured": false, 00:09:28.012 "data_offset": 2048, 00:09:28.012 "data_size": 63488 00:09:28.012 } 00:09:28.012 ] 00:09:28.012 }' 00:09:28.012 20:47:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:28.012 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:09:28.271 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:28.271 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:28.271 20:47:19 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.531 [2024-04-16 20:47:19.522695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.531 [2024-04-16 20:47:19.522727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:09:28.531 [2024-04-16 20:47:19.522746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9d780 00:09:28.531 [2024-04-16 20:47:19.522751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.531 [2024-04-16 20:47:19.522831] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.531 [2024-04-16 20:47:19.522837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.531 [2024-04-16 20:47:19.522849] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:28.531 [2024-04-16 20:47:19.522854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.531 pt2 00:09:28.531 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:28.531 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:28.531 20:47:19 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:28.791 [2024-04-16 20:47:19.702716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.791 [2024-04-16 20:47:19.702741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.791 [2024-04-16 20:47:19.702770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9eb80 00:09:28.791 [2024-04-16 20:47:19.702775] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.791 [2024-04-16 20:47:19.702824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.791 [2024-04-16 20:47:19.702829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.791 [2024-04-16 20:47:19.702840] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:28.791 [2024-04-16 20:47:19.702844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.791 pt3 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:28.791 [2024-04-16 20:47:19.882736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:28.791 [2024-04-16 20:47:19.882761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.791 [2024-04-16 20:47:19.882773] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa9e900 00:09:28.791 [2024-04-16 20:47:19.882778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.791 [2024-04-16 20:47:19.882842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.791 [2024-04-16 20:47:19.882848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:28.791 [2024-04-16 20:47:19.882859] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:28.791 [2024-04-16 20:47:19.882866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:28.791 [2024-04-16 20:47:19.882883] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa9dc80 
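[Annotation] The pt2/pt3/pt4 traces around this point all come from the same bdev_raid.sh@422-423 loop, which wraps each remaining malloc bdev in a passthru bdev with a fixed zero-padded UUID (pt1 was created separately at @409). A condensed sketch of the pattern — loop bounds and indexing are assumptions, the RPC invocation is as shown in the trace:

  num_base_bdevs=4
  for (( i = 2; i <= num_base_bdevs; i++ )); do
    # Wrap malloc$i in a passthru bdev so the raid test can claim and release it:
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done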
00:09:28.791 [2024-04-16 20:47:19.882898] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:28.791 [2024-04-16 20:47:19.882912] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab00e20 00:09:28.791 [2024-04-16 20:47:19.882945] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa9dc80 00:09:28.791 [2024-04-16 20:47:19.882947] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa9dc80 00:09:28.791 [2024-04-16 20:47:19.882961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.791 pt4 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.791 20:47:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.051 20:47:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:29.051 "name": "raid_bdev1", 00:09:29.051 "uuid": "82252c9d-fc32-11ee-80f8-ef3e42bb1492", 00:09:29.051 "strip_size_kb": 64, 00:09:29.051 "state": "online", 00:09:29.051 "raid_level": "concat", 00:09:29.051 "superblock": true, 00:09:29.051 "num_base_bdevs": 4, 00:09:29.051 "num_base_bdevs_discovered": 4, 00:09:29.051 "num_base_bdevs_operational": 4, 00:09:29.051 "base_bdevs_list": [ 00:09:29.051 { 00:09:29.051 "name": "pt1", 00:09:29.051 "uuid": "225568a8-2f40-b752-9b0d-40449a4ee698", 00:09:29.051 "is_configured": true, 00:09:29.051 "data_offset": 2048, 00:09:29.051 "data_size": 63488 00:09:29.051 }, 00:09:29.051 { 00:09:29.051 "name": "pt2", 00:09:29.051 "uuid": "2297f392-e790-ff55-b68b-6bb2d5538e9a", 00:09:29.051 "is_configured": true, 00:09:29.051 "data_offset": 2048, 00:09:29.051 "data_size": 63488 00:09:29.051 }, 00:09:29.051 { 00:09:29.051 "name": "pt3", 00:09:29.051 "uuid": "b56bf794-4830-405e-bfcd-eb4bf150edd8", 00:09:29.051 "is_configured": true, 00:09:29.051 "data_offset": 2048, 00:09:29.051 "data_size": 63488 00:09:29.051 }, 00:09:29.051 { 00:09:29.051 "name": "pt4", 00:09:29.051 "uuid": "7823cd20-f6af-5051-8e2f-ca3683a8971e", 00:09:29.051 "is_configured": true, 00:09:29.051 "data_offset": 2048, 00:09:29.051 "data_size": 63488 00:09:29.051 } 00:09:29.051 ] 00:09:29.051 }' 00:09:29.051 20:47:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:29.051 20:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 20:47:20 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:29.311 20:47:20 -- bdev/bdev_raid.sh@430 
-- # jq -r '.[] | .uuid' 00:09:29.596 [2024-04-16 20:47:20.510832] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@430 -- # '[' 82252c9d-fc32-11ee-80f8-ef3e42bb1492 '!=' 82252c9d-fc32-11ee-80f8-ef3e42bb1492 ']' 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@511 -- # killprocess 52715 00:09:29.596 20:47:20 -- common/autotest_common.sh@926 -- # '[' -z 52715 ']' 00:09:29.596 20:47:20 -- common/autotest_common.sh@930 -- # kill -0 52715 00:09:29.596 20:47:20 -- common/autotest_common.sh@931 -- # uname 00:09:29.596 20:47:20 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:29.596 20:47:20 -- common/autotest_common.sh@934 -- # ps -c -o command 52715 00:09:29.596 20:47:20 -- common/autotest_common.sh@934 -- # tail -1 00:09:29.596 20:47:20 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:29.596 20:47:20 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:29.596 killing process with pid 52715 00:09:29.596 20:47:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52715' 00:09:29.596 20:47:20 -- common/autotest_common.sh@945 -- # kill 52715 00:09:29.596 [2024-04-16 20:47:20.543083] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.596 [2024-04-16 20:47:20.543097] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.596 [2024-04-16 20:47:20.543118] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.596 [2024-04-16 20:47:20.543122] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa9dc80 name raid_bdev1, state offline 00:09:29.596 20:47:20 -- common/autotest_common.sh@950 -- # wait 52715 00:09:29.596 [2024-04-16 20:47:20.561570] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.596 20:47:20 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:29.596 00:09:29.596 real 0m7.449s 00:09:29.596 user 0m12.735s 00:09:29.596 sys 0m1.412s 00:09:29.596 20:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.596 20:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:29.596 ************************************ 00:09:29.596 END TEST raid_superblock_test 00:09:29.596 ************************************ 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:29.856 20:47:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:29.856 20:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.856 20:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:29.856 ************************************ 00:09:29.856 START TEST raid_state_function_test 00:09:29.856 ************************************ 00:09:29.856 20:47:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 
)) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=52900 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52900' 00:09:29.856 Process raid pid: 52900 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:29.856 20:47:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52900 /var/tmp/spdk-raid.sock 00:09:29.857 20:47:20 -- common/autotest_common.sh@819 -- # '[' -z 52900 ']' 00:09:29.857 20:47:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:29.857 20:47:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:29.857 20:47:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:29.857 20:47:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.857 20:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:29.857 [2024-04-16 20:47:20.779041] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
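[Annotation] raid_state_function_test drives a fresh SPDK app instance per test; the @225-228 trace above corresponds to a launch-and-wait pattern like the following sketch (SPDK checkout path abbreviated to $SPDK_DIR; waitforlisten is the helper from common/autotest_common.sh seen in the trace):

  # Host the raid bdevs in a minimal bdev service on a private RPC socket,
  # with bdev_raid debug logging enabled (-L bdev_raid):
  "$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Block until the app is up and the UNIX-domain RPC socket is listening:
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

The EAL banner that follows is that app coming up single-core (-c 0x1) inside the FreeBSD VM.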
00:09:29.857 [2024-04-16 20:47:20.779387] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:30.116 EAL: TSC is not safe to use in SMP mode 00:09:30.116 EAL: TSC is not invariant 00:09:30.116 [2024-04-16 20:47:21.202210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.375 [2024-04-16 20:47:21.279566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.375 [2024-04-16 20:47:21.279974] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.375 [2024-04-16 20:47:21.279983] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.634 20:47:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.634 20:47:21 -- common/autotest_common.sh@852 -- # return 0 00:09:30.635 20:47:21 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:30.894 [2024-04-16 20:47:21.843111] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.894 [2024-04-16 20:47:21.843164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.894 [2024-04-16 20:47:21.843167] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.894 [2024-04-16 20:47:21.843173] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.894 [2024-04-16 20:47:21.843175] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.894 [2024-04-16 20:47:21.843180] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.894 [2024-04-16 20:47:21.843182] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:30.894 [2024-04-16 20:47:21.843187] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.894 20:47:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.162 20:47:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:31.162 "name": "Existed_Raid", 00:09:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.162 "strip_size_kb": 0, 00:09:31.162 "state": "configuring", 00:09:31.162 "raid_level": "raid1", 00:09:31.162 "superblock": false, 00:09:31.162 "num_base_bdevs": 4, 00:09:31.162 "num_base_bdevs_discovered": 0, 
00:09:31.162 "num_base_bdevs_operational": 4, 00:09:31.162 "base_bdevs_list": [ 00:09:31.162 { 00:09:31.162 "name": "BaseBdev1", 00:09:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.162 "is_configured": false, 00:09:31.162 "data_offset": 0, 00:09:31.162 "data_size": 0 00:09:31.162 }, 00:09:31.162 { 00:09:31.162 "name": "BaseBdev2", 00:09:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.162 "is_configured": false, 00:09:31.162 "data_offset": 0, 00:09:31.162 "data_size": 0 00:09:31.162 }, 00:09:31.162 { 00:09:31.162 "name": "BaseBdev3", 00:09:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.162 "is_configured": false, 00:09:31.162 "data_offset": 0, 00:09:31.162 "data_size": 0 00:09:31.162 }, 00:09:31.162 { 00:09:31.162 "name": "BaseBdev4", 00:09:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.162 "is_configured": false, 00:09:31.162 "data_offset": 0, 00:09:31.162 "data_size": 0 00:09:31.162 } 00:09:31.162 ] 00:09:31.162 }' 00:09:31.162 20:47:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:31.162 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:09:31.162 20:47:22 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:31.437 [2024-04-16 20:47:22.439193] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.437 [2024-04-16 20:47:22.439206] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d500 name Existed_Raid, state configuring 00:09:31.437 20:47:22 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:31.697 [2024-04-16 20:47:22.623219] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.697 [2024-04-16 20:47:22.623250] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.697 [2024-04-16 20:47:22.623253] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.697 [2024-04-16 20:47:22.623275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.697 [2024-04-16 20:47:22.623278] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.697 [2024-04-16 20:47:22.623283] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.697 [2024-04-16 20:47:22.623286] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:31.697 [2024-04-16 20:47:22.623291] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:31.697 20:47:22 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.697 [2024-04-16 20:47:22.803997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.697 BaseBdev1 00:09:31.697 20:47:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:31.697 20:47:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:31.697 20:47:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:31.697 20:47:22 -- common/autotest_common.sh@889 -- # local i 00:09:31.697 20:47:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:31.697 20:47:22 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:31.697 20:47:22 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:31.956 20:47:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.215 [ 00:09:32.215 { 00:09:32.215 "name": "BaseBdev1", 00:09:32.215 "aliases": [ 00:09:32.215 "865f79d8-fc32-11ee-80f8-ef3e42bb1492" 00:09:32.215 ], 00:09:32.215 "product_name": "Malloc disk", 00:09:32.215 "block_size": 512, 00:09:32.215 "num_blocks": 65536, 00:09:32.215 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:32.215 "assigned_rate_limits": { 00:09:32.215 "rw_ios_per_sec": 0, 00:09:32.215 "rw_mbytes_per_sec": 0, 00:09:32.215 "r_mbytes_per_sec": 0, 00:09:32.215 "w_mbytes_per_sec": 0 00:09:32.215 }, 00:09:32.215 "claimed": true, 00:09:32.215 "claim_type": "exclusive_write", 00:09:32.215 "zoned": false, 00:09:32.215 "supported_io_types": { 00:09:32.215 "read": true, 00:09:32.215 "write": true, 00:09:32.215 "unmap": true, 00:09:32.215 "write_zeroes": true, 00:09:32.215 "flush": true, 00:09:32.215 "reset": true, 00:09:32.215 "compare": false, 00:09:32.215 "compare_and_write": false, 00:09:32.215 "abort": true, 00:09:32.215 "nvme_admin": false, 00:09:32.215 "nvme_io": false 00:09:32.215 }, 00:09:32.215 "memory_domains": [ 00:09:32.215 { 00:09:32.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.215 "dma_device_type": 2 00:09:32.215 } 00:09:32.215 ], 00:09:32.215 "driver_specific": {} 00:09:32.215 } 00:09:32.215 ] 00:09:32.215 20:47:23 -- common/autotest_common.sh@895 -- # return 0 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:32.215 20:47:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.216 20:47:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.475 20:47:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:32.475 "name": "Existed_Raid", 00:09:32.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.475 "strip_size_kb": 0, 00:09:32.475 "state": "configuring", 00:09:32.475 "raid_level": "raid1", 00:09:32.475 "superblock": false, 00:09:32.475 "num_base_bdevs": 4, 00:09:32.475 "num_base_bdevs_discovered": 1, 00:09:32.475 "num_base_bdevs_operational": 4, 00:09:32.475 "base_bdevs_list": [ 00:09:32.475 { 00:09:32.475 "name": "BaseBdev1", 00:09:32.476 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:32.476 "is_configured": true, 00:09:32.476 "data_offset": 0, 00:09:32.476 "data_size": 65536 00:09:32.476 }, 00:09:32.476 { 00:09:32.476 "name": "BaseBdev2", 00:09:32.476 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:32.476 "is_configured": false, 00:09:32.476 "data_offset": 0, 00:09:32.476 "data_size": 0 00:09:32.476 }, 00:09:32.476 { 00:09:32.476 "name": "BaseBdev3", 00:09:32.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.476 "is_configured": false, 00:09:32.476 "data_offset": 0, 00:09:32.476 "data_size": 0 00:09:32.476 }, 00:09:32.476 { 00:09:32.476 "name": "BaseBdev4", 00:09:32.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.476 "is_configured": false, 00:09:32.476 "data_offset": 0, 00:09:32.476 "data_size": 0 00:09:32.476 } 00:09:32.476 ] 00:09:32.476 }' 00:09:32.476 20:47:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:32.476 20:47:23 -- common/autotest_common.sh@10 -- # set +x 00:09:32.735 20:47:23 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:32.736 [2024-04-16 20:47:23.775349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.736 [2024-04-16 20:47:23.775365] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d500 name Existed_Raid, state configuring 00:09:32.736 20:47:23 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:32.736 20:47:23 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:32.995 [2024-04-16 20:47:23.959381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.995 [2024-04-16 20:47:23.959986] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.995 [2024-04-16 20:47:23.960018] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.995 [2024-04-16 20:47:23.960021] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.995 [2024-04-16 20:47:23.960027] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.995 [2024-04-16 20:47:23.960030] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.995 [2024-04-16 20:47:23.960045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.995 20:47:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:33.255 20:47:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:33.255 "name": "Existed_Raid", 00:09:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.255 "strip_size_kb": 0, 00:09:33.255 "state": "configuring", 00:09:33.255 "raid_level": "raid1", 00:09:33.255 "superblock": false, 00:09:33.255 "num_base_bdevs": 4, 00:09:33.255 "num_base_bdevs_discovered": 1, 00:09:33.255 "num_base_bdevs_operational": 4, 00:09:33.255 "base_bdevs_list": [ 00:09:33.255 { 00:09:33.255 "name": "BaseBdev1", 00:09:33.255 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:33.255 "is_configured": true, 00:09:33.255 "data_offset": 0, 00:09:33.255 "data_size": 65536 00:09:33.255 }, 00:09:33.255 { 00:09:33.255 "name": "BaseBdev2", 00:09:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.255 "is_configured": false, 00:09:33.255 "data_offset": 0, 00:09:33.255 "data_size": 0 00:09:33.255 }, 00:09:33.255 { 00:09:33.255 "name": "BaseBdev3", 00:09:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.255 "is_configured": false, 00:09:33.255 "data_offset": 0, 00:09:33.255 "data_size": 0 00:09:33.255 }, 00:09:33.255 { 00:09:33.255 "name": "BaseBdev4", 00:09:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.255 "is_configured": false, 00:09:33.256 "data_offset": 0, 00:09:33.256 "data_size": 0 00:09:33.256 } 00:09:33.256 ] 00:09:33.256 }' 00:09:33.256 20:47:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:33.256 20:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:33.516 20:47:24 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.516 [2024-04-16 20:47:24.575540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.516 BaseBdev2 00:09:33.516 20:47:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:33.516 20:47:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:33.516 20:47:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:33.516 20:47:24 -- common/autotest_common.sh@889 -- # local i 00:09:33.516 20:47:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:33.516 20:47:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:33.516 20:47:24 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:33.775 20:47:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.036 [ 00:09:34.036 { 00:09:34.036 "name": "BaseBdev2", 00:09:34.036 "aliases": [ 00:09:34.036 "876de4a6-fc32-11ee-80f8-ef3e42bb1492" 00:09:34.036 ], 00:09:34.036 "product_name": "Malloc disk", 00:09:34.036 "block_size": 512, 00:09:34.036 "num_blocks": 65536, 00:09:34.036 "uuid": "876de4a6-fc32-11ee-80f8-ef3e42bb1492", 00:09:34.036 "assigned_rate_limits": { 00:09:34.036 "rw_ios_per_sec": 0, 00:09:34.036 "rw_mbytes_per_sec": 0, 00:09:34.036 "r_mbytes_per_sec": 0, 00:09:34.036 "w_mbytes_per_sec": 0 00:09:34.036 }, 00:09:34.036 "claimed": true, 00:09:34.036 "claim_type": "exclusive_write", 00:09:34.036 "zoned": false, 00:09:34.036 "supported_io_types": { 00:09:34.036 "read": true, 00:09:34.036 "write": true, 00:09:34.036 "unmap": true, 00:09:34.036 "write_zeroes": true, 00:09:34.036 "flush": true, 00:09:34.036 "reset": true, 00:09:34.036 "compare": false, 00:09:34.036 
"compare_and_write": false, 00:09:34.036 "abort": true, 00:09:34.036 "nvme_admin": false, 00:09:34.036 "nvme_io": false 00:09:34.036 }, 00:09:34.036 "memory_domains": [ 00:09:34.036 { 00:09:34.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.036 "dma_device_type": 2 00:09:34.036 } 00:09:34.036 ], 00:09:34.036 "driver_specific": {} 00:09:34.036 } 00:09:34.036 ] 00:09:34.036 20:47:24 -- common/autotest_common.sh@895 -- # return 0 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.036 20:47:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.036 20:47:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:34.036 "name": "Existed_Raid", 00:09:34.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.036 "strip_size_kb": 0, 00:09:34.036 "state": "configuring", 00:09:34.036 "raid_level": "raid1", 00:09:34.036 "superblock": false, 00:09:34.036 "num_base_bdevs": 4, 00:09:34.036 "num_base_bdevs_discovered": 2, 00:09:34.036 "num_base_bdevs_operational": 4, 00:09:34.036 "base_bdevs_list": [ 00:09:34.036 { 00:09:34.036 "name": "BaseBdev1", 00:09:34.036 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev2", 00:09:34.036 "uuid": "876de4a6-fc32-11ee-80f8-ef3e42bb1492", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev3", 00:09:34.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.036 "is_configured": false, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 0 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev4", 00:09:34.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.036 "is_configured": false, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 0 00:09:34.036 } 00:09:34.036 ] 00:09:34.036 }' 00:09:34.036 20:47:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:34.036 20:47:25 -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 20:47:25 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.556 [2024-04-16 20:47:25.523631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.556 BaseBdev3 00:09:34.556 20:47:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:09:34.556 20:47:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:34.556 20:47:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:34.556 20:47:25 -- common/autotest_common.sh@889 -- # local i 00:09:34.556 20:47:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:34.556 20:47:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:34.556 20:47:25 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:34.815 20:47:25 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.815 [ 00:09:34.815 { 00:09:34.815 "name": "BaseBdev3", 00:09:34.815 "aliases": [ 00:09:34.815 "87fe9090-fc32-11ee-80f8-ef3e42bb1492" 00:09:34.815 ], 00:09:34.815 "product_name": "Malloc disk", 00:09:34.815 "block_size": 512, 00:09:34.815 "num_blocks": 65536, 00:09:34.815 "uuid": "87fe9090-fc32-11ee-80f8-ef3e42bb1492", 00:09:34.815 "assigned_rate_limits": { 00:09:34.815 "rw_ios_per_sec": 0, 00:09:34.815 "rw_mbytes_per_sec": 0, 00:09:34.815 "r_mbytes_per_sec": 0, 00:09:34.815 "w_mbytes_per_sec": 0 00:09:34.815 }, 00:09:34.815 "claimed": true, 00:09:34.815 "claim_type": "exclusive_write", 00:09:34.815 "zoned": false, 00:09:34.815 "supported_io_types": { 00:09:34.815 "read": true, 00:09:34.815 "write": true, 00:09:34.815 "unmap": true, 00:09:34.815 "write_zeroes": true, 00:09:34.815 "flush": true, 00:09:34.815 "reset": true, 00:09:34.815 "compare": false, 00:09:34.816 "compare_and_write": false, 00:09:34.816 "abort": true, 00:09:34.816 "nvme_admin": false, 00:09:34.816 "nvme_io": false 00:09:34.816 }, 00:09:34.816 "memory_domains": [ 00:09:34.816 { 00:09:34.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.816 "dma_device_type": 2 00:09:34.816 } 00:09:34.816 ], 00:09:34.816 "driver_specific": {} 00:09:34.816 } 00:09:34.816 ] 00:09:34.816 20:47:25 -- common/autotest_common.sh@895 -- # return 0 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.816 20:47:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.075 20:47:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:35.075 "name": "Existed_Raid", 00:09:35.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.075 "strip_size_kb": 0, 00:09:35.075 "state": "configuring", 00:09:35.075 "raid_level": "raid1", 00:09:35.075 "superblock": false, 00:09:35.075 
"num_base_bdevs": 4, 00:09:35.075 "num_base_bdevs_discovered": 3, 00:09:35.075 "num_base_bdevs_operational": 4, 00:09:35.075 "base_bdevs_list": [ 00:09:35.075 { 00:09:35.075 "name": "BaseBdev1", 00:09:35.075 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:35.075 "is_configured": true, 00:09:35.075 "data_offset": 0, 00:09:35.075 "data_size": 65536 00:09:35.075 }, 00:09:35.075 { 00:09:35.075 "name": "BaseBdev2", 00:09:35.075 "uuid": "876de4a6-fc32-11ee-80f8-ef3e42bb1492", 00:09:35.075 "is_configured": true, 00:09:35.075 "data_offset": 0, 00:09:35.075 "data_size": 65536 00:09:35.075 }, 00:09:35.075 { 00:09:35.075 "name": "BaseBdev3", 00:09:35.075 "uuid": "87fe9090-fc32-11ee-80f8-ef3e42bb1492", 00:09:35.075 "is_configured": true, 00:09:35.075 "data_offset": 0, 00:09:35.075 "data_size": 65536 00:09:35.075 }, 00:09:35.075 { 00:09:35.075 "name": "BaseBdev4", 00:09:35.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.075 "is_configured": false, 00:09:35.075 "data_offset": 0, 00:09:35.075 "data_size": 0 00:09:35.075 } 00:09:35.075 ] 00:09:35.075 }' 00:09:35.075 20:47:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:35.075 20:47:26 -- common/autotest_common.sh@10 -- # set +x 00:09:35.334 20:47:26 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.594 [2024-04-16 20:47:26.523732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.594 [2024-04-16 20:47:26.523747] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b74da00 00:09:35.594 [2024-04-16 20:47:26.523750] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.594 [2024-04-16 20:47:26.523771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7b0ec0 00:09:35.594 [2024-04-16 20:47:26.523863] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b74da00 00:09:35.594 [2024-04-16 20:47:26.523867] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b74da00 00:09:35.594 [2024-04-16 20:47:26.523888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.594 BaseBdev4 00:09:35.594 20:47:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:35.594 20:47:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:35.594 20:47:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:35.594 20:47:26 -- common/autotest_common.sh@889 -- # local i 00:09:35.594 20:47:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:35.594 20:47:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:35.594 20:47:26 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.594 20:47:26 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.854 [ 00:09:35.854 { 00:09:35.854 "name": "BaseBdev4", 00:09:35.854 "aliases": [ 00:09:35.854 "88972b25-fc32-11ee-80f8-ef3e42bb1492" 00:09:35.854 ], 00:09:35.854 "product_name": "Malloc disk", 00:09:35.854 "block_size": 512, 00:09:35.854 "num_blocks": 65536, 00:09:35.854 "uuid": "88972b25-fc32-11ee-80f8-ef3e42bb1492", 00:09:35.854 "assigned_rate_limits": { 00:09:35.854 "rw_ios_per_sec": 0, 00:09:35.854 "rw_mbytes_per_sec": 0, 00:09:35.854 "r_mbytes_per_sec": 0, 
00:09:35.854 "w_mbytes_per_sec": 0 00:09:35.854 }, 00:09:35.854 "claimed": true, 00:09:35.854 "claim_type": "exclusive_write", 00:09:35.854 "zoned": false, 00:09:35.854 "supported_io_types": { 00:09:35.854 "read": true, 00:09:35.854 "write": true, 00:09:35.854 "unmap": true, 00:09:35.854 "write_zeroes": true, 00:09:35.854 "flush": true, 00:09:35.854 "reset": true, 00:09:35.854 "compare": false, 00:09:35.854 "compare_and_write": false, 00:09:35.854 "abort": true, 00:09:35.854 "nvme_admin": false, 00:09:35.854 "nvme_io": false 00:09:35.854 }, 00:09:35.854 "memory_domains": [ 00:09:35.854 { 00:09:35.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.854 "dma_device_type": 2 00:09:35.854 } 00:09:35.854 ], 00:09:35.854 "driver_specific": {} 00:09:35.854 } 00:09:35.854 ] 00:09:35.854 20:47:26 -- common/autotest_common.sh@895 -- # return 0 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.854 20:47:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.114 20:47:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:36.114 "name": "Existed_Raid", 00:09:36.114 "uuid": "88972e29-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.114 "strip_size_kb": 0, 00:09:36.114 "state": "online", 00:09:36.114 "raid_level": "raid1", 00:09:36.114 "superblock": false, 00:09:36.114 "num_base_bdevs": 4, 00:09:36.114 "num_base_bdevs_discovered": 4, 00:09:36.114 "num_base_bdevs_operational": 4, 00:09:36.114 "base_bdevs_list": [ 00:09:36.114 { 00:09:36.114 "name": "BaseBdev1", 00:09:36.114 "uuid": "865f79d8-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.114 "is_configured": true, 00:09:36.114 "data_offset": 0, 00:09:36.114 "data_size": 65536 00:09:36.114 }, 00:09:36.114 { 00:09:36.114 "name": "BaseBdev2", 00:09:36.114 "uuid": "876de4a6-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.114 "is_configured": true, 00:09:36.114 "data_offset": 0, 00:09:36.114 "data_size": 65536 00:09:36.114 }, 00:09:36.114 { 00:09:36.114 "name": "BaseBdev3", 00:09:36.114 "uuid": "87fe9090-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.114 "is_configured": true, 00:09:36.114 "data_offset": 0, 00:09:36.114 "data_size": 65536 00:09:36.114 }, 00:09:36.114 { 00:09:36.114 "name": "BaseBdev4", 00:09:36.114 "uuid": "88972b25-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.114 "is_configured": true, 00:09:36.114 "data_offset": 0, 00:09:36.114 "data_size": 65536 00:09:36.114 } 00:09:36.114 ] 00:09:36.114 }' 00:09:36.114 20:47:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:36.114 20:47:27 -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.373 20:47:27 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:36.633 [2024-04-16 20:47:27.515784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:36.633 "name": "Existed_Raid", 00:09:36.633 "uuid": "88972e29-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.633 "strip_size_kb": 0, 00:09:36.633 "state": "online", 00:09:36.633 "raid_level": "raid1", 00:09:36.633 "superblock": false, 00:09:36.633 "num_base_bdevs": 4, 00:09:36.633 "num_base_bdevs_discovered": 3, 00:09:36.633 "num_base_bdevs_operational": 3, 00:09:36.633 "base_bdevs_list": [ 00:09:36.633 { 00:09:36.633 "name": null, 00:09:36.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.633 "is_configured": false, 00:09:36.633 "data_offset": 0, 00:09:36.633 "data_size": 65536 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "name": "BaseBdev2", 00:09:36.633 "uuid": "876de4a6-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.633 "is_configured": true, 00:09:36.633 "data_offset": 0, 00:09:36.633 "data_size": 65536 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "name": "BaseBdev3", 00:09:36.633 "uuid": "87fe9090-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.633 "is_configured": true, 00:09:36.633 "data_offset": 0, 00:09:36.633 "data_size": 65536 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "name": "BaseBdev4", 00:09:36.633 "uuid": "88972b25-fc32-11ee-80f8-ef3e42bb1492", 00:09:36.633 "is_configured": true, 00:09:36.633 "data_offset": 0, 00:09:36.633 "data_size": 65536 00:09:36.633 } 00:09:36.633 ] 00:09:36.633 }' 00:09:36.633 20:47:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:36.633 20:47:27 -- common/autotest_common.sh@10 -- # set +x 00:09:36.893 20:47:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:36.893 20:47:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:36.893 20:47:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.893 20:47:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:37.152 
20:47:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:37.152 20:47:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.152 20:47:28 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:37.412 [2024-04-16 20:47:28.312490] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.412 20:47:28 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:37.671 [2024-04-16 20:47:28.661152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.671 20:47:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:37.671 20:47:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:37.671 20:47:28 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.671 20:47:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:37.930 20:47:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:37.930 20:47:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.930 20:47:28 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:37.930 [2024-04-16 20:47:29.025892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:37.930 [2024-04-16 20:47:29.025906] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.930 [2024-04-16 20:47:29.025913] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.930 [2024-04-16 20:47:29.030641] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.930 [2024-04-16 20:47:29.030651] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74da00 name Existed_Raid, state offline 00:09:37.930 20:47:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:37.930 20:47:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:37.930 20:47:29 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.930 20:47:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:38.190 20:47:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:38.190 20:47:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:38.190 20:47:29 -- bdev/bdev_raid.sh@287 -- # killprocess 52900 00:09:38.190 20:47:29 -- common/autotest_common.sh@926 -- # '[' -z 52900 ']' 00:09:38.190 20:47:29 -- common/autotest_common.sh@930 -- # kill -0 52900 00:09:38.190 20:47:29 -- common/autotest_common.sh@931 -- # uname 00:09:38.190 20:47:29 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:38.190 20:47:29 -- common/autotest_common.sh@934 -- # ps -c -o command 52900 00:09:38.190 20:47:29 -- 
common/autotest_common.sh@934 -- # tail -1 00:09:38.190 20:47:29 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:38.190 20:47:29 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:38.190 killing process with pid 52900 00:09:38.190 20:47:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52900' 00:09:38.190 20:47:29 -- common/autotest_common.sh@945 -- # kill 52900 00:09:38.190 [2024-04-16 20:47:29.241964] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.190 [2024-04-16 20:47:29.241995] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.190 20:47:29 -- common/autotest_common.sh@950 -- # wait 52900 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:38.451 00:09:38.451 real 0m8.621s 00:09:38.451 user 0m14.975s 00:09:38.451 sys 0m1.588s 00:09:38.451 20:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.451 20:47:29 -- common/autotest_common.sh@10 -- # set +x 00:09:38.451 ************************************ 00:09:38.451 END TEST raid_state_function_test 00:09:38.451 ************************************ 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:09:38.451 20:47:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:38.451 20:47:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.451 20:47:29 -- common/autotest_common.sh@10 -- # set +x 00:09:38.451 ************************************ 00:09:38.451 START TEST raid_state_function_test_sb 00:09:38.451 ************************************ 00:09:38.451 20:47:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@212 -- # 
'[' raid1 '!=' raid1 ']' 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=53170 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53170' 00:09:38.451 Process raid pid: 53170 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:38.451 20:47:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53170 /var/tmp/spdk-raid.sock 00:09:38.451 20:47:29 -- common/autotest_common.sh@819 -- # '[' -z 53170 ']' 00:09:38.451 20:47:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:38.451 20:47:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:38.451 20:47:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:38.451 20:47:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.451 20:47:29 -- common/autotest_common.sh@10 -- # set +x 00:09:38.451 [2024-04-16 20:47:29.453852] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:09:38.451 [2024-04-16 20:47:29.454180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:39.021 EAL: TSC is not safe to use in SMP mode 00:09:39.021 EAL: TSC is not invariant 00:09:39.021 [2024-04-16 20:47:29.877729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.021 [2024-04-16 20:47:29.968030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.021 [2024-04-16 20:47:29.968418] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.021 [2024-04-16 20:47:29.968426] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.282 20:47:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.282 20:47:30 -- common/autotest_common.sh@852 -- # return 0 00:09:39.282 20:47:30 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:39.542 [2024-04-16 20:47:30.507456] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.542 [2024-04-16 20:47:30.507495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.542 [2024-04-16 20:47:30.507498] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.542 [2024-04-16 20:47:30.507504] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.542 [2024-04-16 20:47:30.507507] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.542 [2024-04-16 20:47:30.507512] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.542 [2024-04-16 20:47:30.507514] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:39.542 [2024-04-16 20:47:30.507535] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.542 20:47:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.803 20:47:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:39.803 "name": "Existed_Raid", 00:09:39.803 "uuid": "8af70b59-fc32-11ee-80f8-ef3e42bb1492", 00:09:39.803 "strip_size_kb": 0, 00:09:39.803 "state": "configuring", 00:09:39.803 "raid_level": "raid1", 00:09:39.803 "superblock": true, 00:09:39.803 "num_base_bdevs": 4, 00:09:39.803 "num_base_bdevs_discovered": 0, 00:09:39.803 "num_base_bdevs_operational": 4, 00:09:39.803 "base_bdevs_list": [ 00:09:39.803 { 00:09:39.803 "name": "BaseBdev1", 00:09:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.803 "is_configured": false, 00:09:39.803 "data_offset": 0, 00:09:39.803 "data_size": 0 00:09:39.803 }, 00:09:39.803 { 00:09:39.803 "name": "BaseBdev2", 00:09:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.803 "is_configured": false, 00:09:39.803 "data_offset": 0, 00:09:39.803 "data_size": 0 00:09:39.803 }, 00:09:39.803 { 00:09:39.803 "name": "BaseBdev3", 00:09:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.803 "is_configured": false, 00:09:39.803 "data_offset": 0, 00:09:39.803 "data_size": 0 00:09:39.803 }, 00:09:39.803 { 00:09:39.803 "name": "BaseBdev4", 00:09:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.803 "is_configured": false, 00:09:39.803 "data_offset": 0, 00:09:39.803 "data_size": 0 00:09:39.803 } 00:09:39.803 ] 00:09:39.803 }' 00:09:39.803 20:47:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:39.803 20:47:30 -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 20:47:30 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:40.090 [2024-04-16 20:47:31.139521] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.090 [2024-04-16 20:47:31.139536] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82f45a500 name Existed_Raid, state configuring 00:09:40.090 20:47:31 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:40.357 [2024-04-16 20:47:31.323563] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.357 [2024-04-16 20:47:31.323595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.357 [2024-04-16 20:47:31.323598] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.357 [2024-04-16 20:47:31.323603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.357 [2024-04-16 20:47:31.323605] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.357 [2024-04-16 20:47:31.323610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.357 [2024-04-16 20:47:31.323612] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:40.357 [2024-04-16 20:47:31.323616] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:40.357 20:47:31 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.616 [2024-04-16 20:47:31.504351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.616 BaseBdev1 00:09:40.616 20:47:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:40.616 20:47:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:40.616 20:47:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.616 20:47:31 -- common/autotest_common.sh@889 -- # local i 00:09:40.616 20:47:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:40.616 20:47:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.616 20:47:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.616 20:47:31 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.875 [ 00:09:40.875 { 00:09:40.875 "name": "BaseBdev1", 00:09:40.875 "aliases": [ 00:09:40.875 "8b8f0aa5-fc32-11ee-80f8-ef3e42bb1492" 00:09:40.875 ], 00:09:40.875 "product_name": "Malloc disk", 00:09:40.875 "block_size": 512, 00:09:40.875 "num_blocks": 65536, 00:09:40.875 "uuid": "8b8f0aa5-fc32-11ee-80f8-ef3e42bb1492", 00:09:40.875 "assigned_rate_limits": { 00:09:40.875 "rw_ios_per_sec": 0, 00:09:40.875 "rw_mbytes_per_sec": 0, 00:09:40.875 "r_mbytes_per_sec": 0, 00:09:40.875 "w_mbytes_per_sec": 0 00:09:40.875 }, 00:09:40.875 "claimed": true, 00:09:40.875 "claim_type": "exclusive_write", 00:09:40.875 "zoned": false, 00:09:40.875 "supported_io_types": { 00:09:40.875 "read": true, 00:09:40.875 "write": true, 00:09:40.875 "unmap": true, 00:09:40.875 "write_zeroes": true, 00:09:40.875 "flush": true, 00:09:40.875 "reset": true, 00:09:40.875 "compare": false, 00:09:40.875 "compare_and_write": false, 00:09:40.875 "abort": true, 00:09:40.875 "nvme_admin": false, 00:09:40.875 "nvme_io": false 00:09:40.875 }, 00:09:40.875 "memory_domains": [ 00:09:40.875 { 00:09:40.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.875 "dma_device_type": 2 00:09:40.875 } 00:09:40.875 ], 00:09:40.875 "driver_specific": {} 00:09:40.875 } 00:09:40.875 ] 00:09:40.875 20:47:31 -- common/autotest_common.sh@895 -- # return 0 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:40.875 20:47:31 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.875 20:47:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.133 20:47:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:41.133 "name": "Existed_Raid", 00:09:41.133 "uuid": "8b73929b-fc32-11ee-80f8-ef3e42bb1492", 00:09:41.133 "strip_size_kb": 0, 00:09:41.133 "state": "configuring", 00:09:41.133 "raid_level": "raid1", 00:09:41.134 "superblock": true, 00:09:41.134 "num_base_bdevs": 4, 00:09:41.134 "num_base_bdevs_discovered": 1, 00:09:41.134 "num_base_bdevs_operational": 4, 00:09:41.134 "base_bdevs_list": [ 00:09:41.134 { 00:09:41.134 "name": "BaseBdev1", 00:09:41.134 "uuid": "8b8f0aa5-fc32-11ee-80f8-ef3e42bb1492", 00:09:41.134 "is_configured": true, 00:09:41.134 "data_offset": 2048, 00:09:41.134 "data_size": 63488 00:09:41.134 }, 00:09:41.134 { 00:09:41.134 "name": "BaseBdev2", 00:09:41.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.134 "is_configured": false, 00:09:41.134 "data_offset": 0, 00:09:41.134 "data_size": 0 00:09:41.134 }, 00:09:41.134 { 00:09:41.134 "name": "BaseBdev3", 00:09:41.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.134 "is_configured": false, 00:09:41.134 "data_offset": 0, 00:09:41.134 "data_size": 0 00:09:41.134 }, 00:09:41.134 { 00:09:41.134 "name": "BaseBdev4", 00:09:41.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.134 "is_configured": false, 00:09:41.134 "data_offset": 0, 00:09:41.134 "data_size": 0 00:09:41.134 } 00:09:41.134 ] 00:09:41.134 }' 00:09:41.134 20:47:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:41.134 20:47:32 -- common/autotest_common.sh@10 -- # set +x 00:09:41.392 20:47:32 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:41.392 [2024-04-16 20:47:32.475679] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.392 [2024-04-16 20:47:32.475695] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82f45a500 name Existed_Raid, state configuring 00:09:41.392 20:47:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:41.392 20:47:32 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:41.651 20:47:32 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.910 BaseBdev1 00:09:41.910 20:47:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:41.910 20:47:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:41.910 20:47:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:41.910 20:47:32 -- common/autotest_common.sh@889 -- # local i 00:09:41.910 20:47:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:41.910 20:47:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:41.910 20:47:32 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:41.910 20:47:33 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.169 [ 00:09:42.169 { 00:09:42.169 "name": "BaseBdev1", 00:09:42.169 "aliases": [ 00:09:42.169 "8c575995-fc32-11ee-80f8-ef3e42bb1492" 00:09:42.169 ], 00:09:42.169 "product_name": "Malloc disk", 00:09:42.169 "block_size": 512, 00:09:42.169 "num_blocks": 65536, 00:09:42.169 "uuid": "8c575995-fc32-11ee-80f8-ef3e42bb1492", 00:09:42.169 "assigned_rate_limits": { 00:09:42.169 "rw_ios_per_sec": 0, 00:09:42.169 "rw_mbytes_per_sec": 0, 00:09:42.169 "r_mbytes_per_sec": 0, 00:09:42.169 "w_mbytes_per_sec": 0 00:09:42.169 }, 00:09:42.169 "claimed": false, 00:09:42.169 "zoned": false, 00:09:42.169 "supported_io_types": { 00:09:42.169 "read": true, 00:09:42.169 "write": true, 00:09:42.169 "unmap": true, 00:09:42.169 "write_zeroes": true, 00:09:42.169 "flush": true, 00:09:42.169 "reset": true, 00:09:42.169 "compare": false, 00:09:42.169 "compare_and_write": false, 00:09:42.169 "abort": true, 00:09:42.169 "nvme_admin": false, 00:09:42.169 "nvme_io": false 00:09:42.169 }, 00:09:42.169 "memory_domains": [ 00:09:42.169 { 00:09:42.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.169 "dma_device_type": 2 00:09:42.169 } 00:09:42.169 ], 00:09:42.169 "driver_specific": {} 00:09:42.169 } 00:09:42.169 ] 00:09:42.169 20:47:33 -- common/autotest_common.sh@895 -- # return 0 00:09:42.169 20:47:33 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:42.428 [2024-04-16 20:47:33.360390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.428 [2024-04-16 20:47:33.360795] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.428 [2024-04-16 20:47:33.360830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.428 [2024-04-16 20:47:33.360834] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.428 [2024-04-16 20:47:33.360839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.428 [2024-04-16 20:47:33.360842] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:42.428 [2024-04-16 20:47:33.360847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:42.428 20:47:33 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.428 20:47:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:42.428 "name": "Existed_Raid", 00:09:42.428 "uuid": "8caa5e1d-fc32-11ee-80f8-ef3e42bb1492", 00:09:42.428 "strip_size_kb": 0, 00:09:42.428 "state": "configuring", 00:09:42.428 "raid_level": "raid1", 00:09:42.428 "superblock": true, 00:09:42.428 "num_base_bdevs": 4, 00:09:42.428 "num_base_bdevs_discovered": 1, 00:09:42.428 "num_base_bdevs_operational": 4, 00:09:42.428 "base_bdevs_list": [ 00:09:42.428 { 00:09:42.428 "name": "BaseBdev1", 00:09:42.428 "uuid": "8c575995-fc32-11ee-80f8-ef3e42bb1492", 00:09:42.428 "is_configured": true, 00:09:42.428 "data_offset": 2048, 00:09:42.428 "data_size": 63488 00:09:42.428 }, 00:09:42.428 { 00:09:42.428 "name": "BaseBdev2", 00:09:42.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.428 "is_configured": false, 00:09:42.429 "data_offset": 0, 00:09:42.429 "data_size": 0 00:09:42.429 }, 00:09:42.429 { 00:09:42.429 "name": "BaseBdev3", 00:09:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.429 "is_configured": false, 00:09:42.429 "data_offset": 0, 00:09:42.429 "data_size": 0 00:09:42.429 }, 00:09:42.429 { 00:09:42.429 "name": "BaseBdev4", 00:09:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.429 "is_configured": false, 00:09:42.429 "data_offset": 0, 00:09:42.429 "data_size": 0 00:09:42.429 } 00:09:42.429 ] 00:09:42.429 }' 00:09:42.429 20:47:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:42.429 20:47:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 20:47:33 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.946 [2024-04-16 20:47:33.976530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.946 BaseBdev2 00:09:42.946 20:47:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:42.946 20:47:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:42.946 20:47:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:42.946 20:47:33 -- common/autotest_common.sh@889 -- # local i 00:09:42.946 20:47:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:42.946 20:47:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:42.946 20:47:33 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:43.205 20:47:34 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.205 [ 00:09:43.205 { 00:09:43.205 "name": "BaseBdev2", 00:09:43.205 "aliases": [ 00:09:43.205 "8d085ef6-fc32-11ee-80f8-ef3e42bb1492" 00:09:43.205 ], 00:09:43.205 "product_name": "Malloc disk", 00:09:43.205 "block_size": 512, 00:09:43.205 "num_blocks": 65536, 00:09:43.205 "uuid": "8d085ef6-fc32-11ee-80f8-ef3e42bb1492", 00:09:43.205 "assigned_rate_limits": { 00:09:43.205 "rw_ios_per_sec": 0, 00:09:43.205 "rw_mbytes_per_sec": 0, 00:09:43.205 "r_mbytes_per_sec": 0, 00:09:43.205 "w_mbytes_per_sec": 0 00:09:43.205 }, 00:09:43.205 "claimed": true, 
00:09:43.205 "claim_type": "exclusive_write", 00:09:43.205 "zoned": false, 00:09:43.205 "supported_io_types": { 00:09:43.205 "read": true, 00:09:43.205 "write": true, 00:09:43.205 "unmap": true, 00:09:43.205 "write_zeroes": true, 00:09:43.205 "flush": true, 00:09:43.205 "reset": true, 00:09:43.205 "compare": false, 00:09:43.205 "compare_and_write": false, 00:09:43.205 "abort": true, 00:09:43.205 "nvme_admin": false, 00:09:43.205 "nvme_io": false 00:09:43.205 }, 00:09:43.205 "memory_domains": [ 00:09:43.205 { 00:09:43.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.205 "dma_device_type": 2 00:09:43.205 } 00:09:43.205 ], 00:09:43.205 "driver_specific": {} 00:09:43.205 } 00:09:43.205 ] 00:09:43.463 20:47:34 -- common/autotest_common.sh@895 -- # return 0 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:43.463 "name": "Existed_Raid", 00:09:43.463 "uuid": "8caa5e1d-fc32-11ee-80f8-ef3e42bb1492", 00:09:43.463 "strip_size_kb": 0, 00:09:43.463 "state": "configuring", 00:09:43.463 "raid_level": "raid1", 00:09:43.463 "superblock": true, 00:09:43.463 "num_base_bdevs": 4, 00:09:43.463 "num_base_bdevs_discovered": 2, 00:09:43.463 "num_base_bdevs_operational": 4, 00:09:43.463 "base_bdevs_list": [ 00:09:43.463 { 00:09:43.463 "name": "BaseBdev1", 00:09:43.463 "uuid": "8c575995-fc32-11ee-80f8-ef3e42bb1492", 00:09:43.463 "is_configured": true, 00:09:43.463 "data_offset": 2048, 00:09:43.463 "data_size": 63488 00:09:43.463 }, 00:09:43.463 { 00:09:43.463 "name": "BaseBdev2", 00:09:43.463 "uuid": "8d085ef6-fc32-11ee-80f8-ef3e42bb1492", 00:09:43.463 "is_configured": true, 00:09:43.463 "data_offset": 2048, 00:09:43.463 "data_size": 63488 00:09:43.463 }, 00:09:43.463 { 00:09:43.463 "name": "BaseBdev3", 00:09:43.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.463 "is_configured": false, 00:09:43.463 "data_offset": 0, 00:09:43.463 "data_size": 0 00:09:43.463 }, 00:09:43.463 { 00:09:43.463 "name": "BaseBdev4", 00:09:43.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.463 "is_configured": false, 00:09:43.463 "data_offset": 0, 00:09:43.463 "data_size": 0 00:09:43.463 } 00:09:43.463 ] 00:09:43.463 }' 00:09:43.463 20:47:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:43.463 20:47:34 -- common/autotest_common.sh@10 -- # set +x 00:09:43.722 20:47:34 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.981 [2024-04-16 20:47:34.952602] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.981 BaseBdev3 00:09:43.981 20:47:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:43.981 20:47:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:43.981 20:47:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:43.981 20:47:34 -- common/autotest_common.sh@889 -- # local i 00:09:43.981 20:47:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:43.981 20:47:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:43.981 20:47:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:44.239 20:47:35 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:44.239 [ 00:09:44.239 { 00:09:44.239 "name": "BaseBdev3", 00:09:44.239 "aliases": [ 00:09:44.239 "8d9d4fef-fc32-11ee-80f8-ef3e42bb1492" 00:09:44.239 ], 00:09:44.239 "product_name": "Malloc disk", 00:09:44.239 "block_size": 512, 00:09:44.239 "num_blocks": 65536, 00:09:44.239 "uuid": "8d9d4fef-fc32-11ee-80f8-ef3e42bb1492", 00:09:44.239 "assigned_rate_limits": { 00:09:44.240 "rw_ios_per_sec": 0, 00:09:44.240 "rw_mbytes_per_sec": 0, 00:09:44.240 "r_mbytes_per_sec": 0, 00:09:44.240 "w_mbytes_per_sec": 0 00:09:44.240 }, 00:09:44.240 "claimed": true, 00:09:44.240 "claim_type": "exclusive_write", 00:09:44.240 "zoned": false, 00:09:44.240 "supported_io_types": { 00:09:44.240 "read": true, 00:09:44.240 "write": true, 00:09:44.240 "unmap": true, 00:09:44.240 "write_zeroes": true, 00:09:44.240 "flush": true, 00:09:44.240 "reset": true, 00:09:44.240 "compare": false, 00:09:44.240 "compare_and_write": false, 00:09:44.240 "abort": true, 00:09:44.240 "nvme_admin": false, 00:09:44.240 "nvme_io": false 00:09:44.240 }, 00:09:44.240 "memory_domains": [ 00:09:44.240 { 00:09:44.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.240 "dma_device_type": 2 00:09:44.240 } 00:09:44.240 ], 00:09:44.240 "driver_specific": {} 00:09:44.240 } 00:09:44.240 ] 00:09:44.240 20:47:35 -- common/autotest_common.sh@895 -- # return 0 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.240 20:47:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:44.499 20:47:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:44.499 "name": "Existed_Raid", 00:09:44.499 "uuid": "8caa5e1d-fc32-11ee-80f8-ef3e42bb1492", 00:09:44.499 "strip_size_kb": 0, 00:09:44.499 "state": "configuring", 00:09:44.499 "raid_level": "raid1", 00:09:44.499 "superblock": true, 00:09:44.499 "num_base_bdevs": 4, 00:09:44.499 "num_base_bdevs_discovered": 3, 00:09:44.499 "num_base_bdevs_operational": 4, 00:09:44.499 "base_bdevs_list": [ 00:09:44.499 { 00:09:44.499 "name": "BaseBdev1", 00:09:44.499 "uuid": "8c575995-fc32-11ee-80f8-ef3e42bb1492", 00:09:44.499 "is_configured": true, 00:09:44.499 "data_offset": 2048, 00:09:44.499 "data_size": 63488 00:09:44.499 }, 00:09:44.499 { 00:09:44.499 "name": "BaseBdev2", 00:09:44.499 "uuid": "8d085ef6-fc32-11ee-80f8-ef3e42bb1492", 00:09:44.499 "is_configured": true, 00:09:44.499 "data_offset": 2048, 00:09:44.499 "data_size": 63488 00:09:44.499 }, 00:09:44.499 { 00:09:44.499 "name": "BaseBdev3", 00:09:44.499 "uuid": "8d9d4fef-fc32-11ee-80f8-ef3e42bb1492", 00:09:44.499 "is_configured": true, 00:09:44.499 "data_offset": 2048, 00:09:44.499 "data_size": 63488 00:09:44.499 }, 00:09:44.499 { 00:09:44.499 "name": "BaseBdev4", 00:09:44.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.499 "is_configured": false, 00:09:44.499 "data_offset": 0, 00:09:44.499 "data_size": 0 00:09:44.499 } 00:09:44.499 ] 00:09:44.499 }' 00:09:44.499 20:47:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:44.499 20:47:35 -- common/autotest_common.sh@10 -- # set +x 00:09:44.758 20:47:35 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:45.019 [2024-04-16 20:47:35.920697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:45.019 [2024-04-16 20:47:35.920759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82f45aa00 00:09:45.019 [2024-04-16 20:47:35.920764] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.019 [2024-04-16 20:47:35.920779] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82f4bdec0 00:09:45.019 [2024-04-16 20:47:35.920814] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82f45aa00 00:09:45.019 [2024-04-16 20:47:35.920816] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82f45aa00 00:09:45.019 [2024-04-16 20:47:35.920830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.019 BaseBdev4 00:09:45.019 20:47:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:45.019 20:47:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:45.019 20:47:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:45.019 20:47:35 -- common/autotest_common.sh@889 -- # local i 00:09:45.019 20:47:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:45.019 20:47:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:45.019 20:47:35 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.019 20:47:36 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:45.279 [ 00:09:45.279 { 00:09:45.279 "name": "BaseBdev4", 00:09:45.279 "aliases": [ 00:09:45.279 
"8e310824-fc32-11ee-80f8-ef3e42bb1492" 00:09:45.279 ], 00:09:45.279 "product_name": "Malloc disk", 00:09:45.279 "block_size": 512, 00:09:45.279 "num_blocks": 65536, 00:09:45.279 "uuid": "8e310824-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.279 "assigned_rate_limits": { 00:09:45.279 "rw_ios_per_sec": 0, 00:09:45.279 "rw_mbytes_per_sec": 0, 00:09:45.279 "r_mbytes_per_sec": 0, 00:09:45.279 "w_mbytes_per_sec": 0 00:09:45.279 }, 00:09:45.279 "claimed": true, 00:09:45.279 "claim_type": "exclusive_write", 00:09:45.279 "zoned": false, 00:09:45.279 "supported_io_types": { 00:09:45.279 "read": true, 00:09:45.279 "write": true, 00:09:45.279 "unmap": true, 00:09:45.279 "write_zeroes": true, 00:09:45.279 "flush": true, 00:09:45.279 "reset": true, 00:09:45.279 "compare": false, 00:09:45.279 "compare_and_write": false, 00:09:45.279 "abort": true, 00:09:45.279 "nvme_admin": false, 00:09:45.279 "nvme_io": false 00:09:45.279 }, 00:09:45.279 "memory_domains": [ 00:09:45.279 { 00:09:45.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.279 "dma_device_type": 2 00:09:45.279 } 00:09:45.279 ], 00:09:45.279 "driver_specific": {} 00:09:45.279 } 00:09:45.279 ] 00:09:45.279 20:47:36 -- common/autotest_common.sh@895 -- # return 0 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.279 20:47:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.539 20:47:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:45.539 "name": "Existed_Raid", 00:09:45.539 "uuid": "8caa5e1d-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.539 "strip_size_kb": 0, 00:09:45.539 "state": "online", 00:09:45.539 "raid_level": "raid1", 00:09:45.539 "superblock": true, 00:09:45.539 "num_base_bdevs": 4, 00:09:45.539 "num_base_bdevs_discovered": 4, 00:09:45.539 "num_base_bdevs_operational": 4, 00:09:45.539 "base_bdevs_list": [ 00:09:45.539 { 00:09:45.539 "name": "BaseBdev1", 00:09:45.539 "uuid": "8c575995-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "name": "BaseBdev2", 00:09:45.539 "uuid": "8d085ef6-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "name": "BaseBdev3", 00:09:45.539 "uuid": "8d9d4fef-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 }, 
00:09:45.539 { 00:09:45.539 "name": "BaseBdev4", 00:09:45.539 "uuid": "8e310824-fc32-11ee-80f8-ef3e42bb1492", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 } 00:09:45.539 ] 00:09:45.539 }' 00:09:45.539 20:47:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:45.539 20:47:36 -- common/autotest_common.sh@10 -- # set +x 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:45.799 [2024-04-16 20:47:36.892754] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.799 20:47:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.059 20:47:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:46.059 "name": "Existed_Raid", 00:09:46.059 "uuid": "8caa5e1d-fc32-11ee-80f8-ef3e42bb1492", 00:09:46.059 "strip_size_kb": 0, 00:09:46.059 "state": "online", 00:09:46.059 "raid_level": "raid1", 00:09:46.059 "superblock": true, 00:09:46.059 "num_base_bdevs": 4, 00:09:46.059 "num_base_bdevs_discovered": 3, 00:09:46.059 "num_base_bdevs_operational": 3, 00:09:46.059 "base_bdevs_list": [ 00:09:46.059 { 00:09:46.059 "name": null, 00:09:46.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.059 "is_configured": false, 00:09:46.059 "data_offset": 2048, 00:09:46.059 "data_size": 63488 00:09:46.059 }, 00:09:46.059 { 00:09:46.059 "name": "BaseBdev2", 00:09:46.059 "uuid": "8d085ef6-fc32-11ee-80f8-ef3e42bb1492", 00:09:46.059 "is_configured": true, 00:09:46.059 "data_offset": 2048, 00:09:46.059 "data_size": 63488 00:09:46.059 }, 00:09:46.059 { 00:09:46.059 "name": "BaseBdev3", 00:09:46.059 "uuid": "8d9d4fef-fc32-11ee-80f8-ef3e42bb1492", 00:09:46.059 "is_configured": true, 00:09:46.059 "data_offset": 2048, 00:09:46.059 "data_size": 63488 00:09:46.059 }, 00:09:46.059 { 00:09:46.059 "name": "BaseBdev4", 00:09:46.059 "uuid": "8e310824-fc32-11ee-80f8-ef3e42bb1492", 00:09:46.059 "is_configured": true, 00:09:46.059 "data_offset": 2048, 00:09:46.059 "data_size": 63488 00:09:46.059 } 00:09:46.059 ] 00:09:46.059 }' 00:09:46.059 20:47:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:46.059 20:47:37 -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.318 20:47:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:46.318 20:47:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:46.318 20:47:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.318 20:47:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:46.577 20:47:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:46.577 20:47:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.577 20:47:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:46.577 [2024-04-16 20:47:37.705471] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.837 20:47:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:47.096 [2024-04-16 20:47:38.066139] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.096 20:47:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:47.096 20:47:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:47.096 20:47:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:47.096 20:47:38 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:47.355 [2024-04-16 20:47:38.430808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:47.355 [2024-04-16 20:47:38.430824] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.355 [2024-04-16 20:47:38.430836] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.355 [2024-04-16 20:47:38.435522] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.355 [2024-04-16 20:47:38.435535] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82f45aa00 name Existed_Raid, state offline 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.355 20:47:38 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.615 20:47:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:47.615 20:47:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:47.615 20:47:38 -- bdev/bdev_raid.sh@287 -- # killprocess 53170 
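
NOTE: the bdev_raid.sh@273-@279 loop traced above is the degradation half of the test: it deletes one base bdev per iteration and re-queries the array, expecting raid1's redundancy to keep Existed_Raid online until the final removal drives it offline and into destruct. A minimal standalone sketch of that RPC sequence (socket path and bdev names as in this log; the jq filter and loop shape are illustrative, not the script's exact code):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        "$rpc" -s "$sock" bdev_malloc_delete "$bdev"
        # re-read the array state after each removal
        "$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .state'
    done

This prints the surviving state after each step and nothing once the last delete frees the array entirely, which matches the "raid bdev base bdevs is 0, going to free all in destruct" lines in the trace.
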
00:09:47.615 20:47:38 -- common/autotest_common.sh@926 -- # '[' -z 53170 ']' 00:09:47.615 20:47:38 -- common/autotest_common.sh@930 -- # kill -0 53170 00:09:47.615 20:47:38 -- common/autotest_common.sh@931 -- # uname 00:09:47.615 20:47:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:47.615 20:47:38 -- common/autotest_common.sh@934 -- # ps -c -o command 53170 00:09:47.615 20:47:38 -- common/autotest_common.sh@934 -- # tail -1 00:09:47.615 20:47:38 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:47.615 20:47:38 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:47.615 killing process with pid 53170 00:09:47.615 20:47:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53170' 00:09:47.615 20:47:38 -- common/autotest_common.sh@945 -- # kill 53170 00:09:47.615 [2024-04-16 20:47:38.638836] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.615 [2024-04-16 20:47:38.638865] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.615 20:47:38 -- common/autotest_common.sh@950 -- # wait 53170 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:47.875 00:09:47.875 real 0m9.343s 00:09:47.875 user 0m16.369s 00:09:47.875 sys 0m1.589s 00:09:47.875 20:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.875 20:47:38 -- common/autotest_common.sh@10 -- # set +x 00:09:47.875 ************************************ 00:09:47.875 END TEST raid_state_function_test_sb 00:09:47.875 ************************************ 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:09:47.875 20:47:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:47.875 20:47:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.875 20:47:38 -- common/autotest_common.sh@10 -- # set +x 00:09:47.875 ************************************ 00:09:47.875 START TEST raid_superblock_test 00:09:47.875 ************************************ 00:09:47.875 20:47:38 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:09:47.875 20:47:38 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:09:47.876 20:47:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=53443 00:09:47.876 20:47:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53443 /var/tmp/spdk-raid.sock 00:09:47.876 20:47:38 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
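
NOTE: each test case here boots a fresh bdev_svc app on the private socket /var/tmp/spdk-raid.sock and then blocks in waitforlisten until that socket answers RPCs. A rough equivalent of that startup handshake, assuming the same paths as the trace (the polling loop is a simplification of what autotest_common.sh actually does, which also caps retries):

    svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$svc" -r "$sock" -L bdev_raid &   # -L bdev_raid enables the *DEBUG* lines seen throughout this log
    raid_pid=$!
    # poll until the app is up and the RPC socket accepts requests
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
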
00:09:47.876 20:47:38 -- common/autotest_common.sh@819 -- # '[' -z 53443 ']' 00:09:47.876 20:47:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:47.876 20:47:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:47.876 20:47:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:47.876 20:47:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.876 20:47:38 -- common/autotest_common.sh@10 -- # set +x 00:09:47.876 [2024-04-16 20:47:38.847963] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:09:47.876 [2024-04-16 20:47:38.848245] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:48.444 EAL: TSC is not safe to use in SMP mode 00:09:48.444 EAL: TSC is not invariant 00:09:48.444 [2024-04-16 20:47:39.279615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.444 [2024-04-16 20:47:39.368035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.444 [2024-04-16 20:47:39.368431] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.444 [2024-04-16 20:47:39.368439] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.703 20:47:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.703 20:47:39 -- common/autotest_common.sh@852 -- # return 0 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.703 20:47:39 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:48.997 malloc1 00:09:48.997 20:47:39 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.997 [2024-04-16 20:47:40.075601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.997 [2024-04-16 20:47:40.075642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.998 [2024-04-16 20:47:40.076173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16f780 00:09:48.998 [2024-04-16 20:47:40.076204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.998 [2024-04-16 20:47:40.076875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.998 [2024-04-16 20:47:40.076905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.998 pt1 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.998 20:47:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:49.257 malloc2 00:09:49.258 20:47:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.517 [2024-04-16 20:47:40.435634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.517 [2024-04-16 20:47:40.435669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.517 [2024-04-16 20:47:40.435691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16fc80 00:09:49.517 [2024-04-16 20:47:40.435697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.517 [2024-04-16 20:47:40.436112] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.517 [2024-04-16 20:47:40.436143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.517 pt2 00:09:49.517 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:49.517 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:49.517 20:47:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:49.517 20:47:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:49.518 malloc3 00:09:49.518 20:47:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.777 [2024-04-16 20:47:40.795672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.777 [2024-04-16 20:47:40.795712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.777 [2024-04-16 20:47:40.795734] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170180 00:09:49.777 [2024-04-16 20:47:40.795739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.777 [2024-04-16 20:47:40.796195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.777 [2024-04-16 20:47:40.796221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.777 pt3 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.777 20:47:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:50.037 malloc4 00:09:50.037 20:47:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:50.037 [2024-04-16 20:47:41.143703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:50.037 [2024-04-16 20:47:41.143743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.037 [2024-04-16 20:47:41.143781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170680 00:09:50.037 [2024-04-16 20:47:41.143787] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.037 [2024-04-16 20:47:41.144215] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.037 [2024-04-16 20:47:41.144261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:50.037 pt4 00:09:50.037 20:47:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:50.037 20:47:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:50.037 20:47:41 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:50.297 [2024-04-16 20:47:41.323724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.297 [2024-04-16 20:47:41.324105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.297 [2024-04-16 20:47:41.324124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.297 [2024-04-16 20:47:41.324132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:50.297 [2024-04-16 20:47:41.324179] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b170900 00:09:50.297 [2024-04-16 20:47:41.324189] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.297 [2024-04-16 20:47:41.324214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b1d2e20 00:09:50.297 [2024-04-16 20:47:41.324266] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b170900 00:09:50.297 [2024-04-16 20:47:41.324273] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b170900 00:09:50.297 [2024-04-16 20:47:41.324290] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.297 20:47:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.556 20:47:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:50.556 "name": "raid_bdev1", 00:09:50.556 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:50.556 "strip_size_kb": 0, 00:09:50.556 "state": "online", 00:09:50.556 "raid_level": "raid1", 00:09:50.556 "superblock": true, 00:09:50.556 "num_base_bdevs": 4, 00:09:50.556 "num_base_bdevs_discovered": 4, 00:09:50.556 "num_base_bdevs_operational": 4, 00:09:50.556 "base_bdevs_list": [ 00:09:50.556 { 00:09:50.556 "name": "pt1", 00:09:50.556 "uuid": "d199ea4d-f972-dc55-a8a5-bdcf872be790", 00:09:50.556 "is_configured": true, 00:09:50.556 "data_offset": 2048, 00:09:50.556 "data_size": 63488 00:09:50.556 }, 00:09:50.556 { 00:09:50.556 "name": "pt2", 00:09:50.556 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:50.556 "is_configured": true, 00:09:50.556 "data_offset": 2048, 00:09:50.556 "data_size": 63488 00:09:50.556 }, 00:09:50.556 { 00:09:50.556 "name": "pt3", 00:09:50.556 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:50.556 "is_configured": true, 00:09:50.556 "data_offset": 2048, 00:09:50.556 "data_size": 63488 00:09:50.556 }, 00:09:50.556 { 00:09:50.556 "name": "pt4", 00:09:50.556 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:50.556 "is_configured": true, 00:09:50.556 "data_offset": 2048, 00:09:50.556 "data_size": 63488 00:09:50.556 } 00:09:50.556 ] 00:09:50.556 }' 00:09:50.556 20:47:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:50.556 20:47:41 -- common/autotest_common.sh@10 -- # set +x 00:09:50.816 20:47:41 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:50.816 20:47:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:51.075 [2024-04-16 20:47:41.951792] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.075 20:47:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=916979dd-fc32-11ee-80f8-ef3e42bb1492 00:09:51.075 20:47:41 -- bdev/bdev_raid.sh@380 -- # '[' -z 916979dd-fc32-11ee-80f8-ef3e42bb1492 ']' 00:09:51.075 20:47:41 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:51.075 [2024-04-16 20:47:42.135791] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.075 [2024-04-16 20:47:42.135809] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.075 [2024-04-16 20:47:42.135821] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.075 [2024-04-16 20:47:42.135834] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.075 [2024-04-16 20:47:42.135837] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82b170900 name raid_bdev1, state offline 00:09:51.075 20:47:42 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.075 20:47:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:51.335 20:47:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:51.335 20:47:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:51.335 20:47:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.335 20:47:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:51.595 20:47:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.595 20:47:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:51.595 20:47:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.595 20:47:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:51.854 20:47:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.854 20:47:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:52.113 20:47:43 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:52.113 20:47:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:52.114 20:47:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:52.114 20:47:43 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:52.114 20:47:43 -- common/autotest_common.sh@640 -- # local es=0 00:09:52.114 20:47:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:52.114 20:47:43 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.114 20:47:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.114 20:47:43 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.114 20:47:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.114 20:47:43 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.114 20:47:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.114 20:47:43 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.114 20:47:43 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:52.114 20:47:43 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:52.373 [2024-04-16 20:47:43.359917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:52.373 [2024-04-16 20:47:43.360384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:52.373 [2024-04-16 20:47:43.360404] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:52.373 [2024-04-16 20:47:43.360410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:52.373 [2024-04-16 20:47:43.360421] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:52.373 [2024-04-16 20:47:43.360450] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:52.373 [2024-04-16 20:47:43.360458] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:52.373 [2024-04-16 20:47:43.360464] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:52.373 [2024-04-16 20:47:43.360487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.373 [2024-04-16 20:47:43.360490] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b170680 name raid_bdev1, state configuring 00:09:52.373 request: 00:09:52.373 { 00:09:52.373 "name": "raid_bdev1", 00:09:52.373 "raid_level": "raid1", 00:09:52.373 "base_bdevs": [ 00:09:52.373 "malloc1", 00:09:52.373 "malloc2", 00:09:52.373 "malloc3", 00:09:52.373 "malloc4" 00:09:52.373 ], 00:09:52.373 "superblock": false, 00:09:52.373 "method": "bdev_raid_create", 00:09:52.373 "req_id": 1 00:09:52.373 } 00:09:52.373 Got JSON-RPC error response 00:09:52.373 response: 00:09:52.373 { 00:09:52.373 "code": -17, 00:09:52.373 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:52.373 } 00:09:52.373 20:47:43 -- common/autotest_common.sh@643 -- # es=1 00:09:52.373 20:47:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:52.373 20:47:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:52.373 20:47:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:52.373 20:47:43 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.373 20:47:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.632 [2024-04-16 20:47:43.723946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.632 [2024-04-16 20:47:43.723998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.632 [2024-04-16 20:47:43.724021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170180 00:09:52.632 [2024-04-16 20:47:43.724027] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.632 [2024-04-16 20:47:43.724477] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.632 [2024-04-16 20:47:43.724505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.632 [2024-04-16 20:47:43.724522] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:52.632 [2024-04-16 20:47:43.724543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:52.632 pt1 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@412 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.632 20:47:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.892 20:47:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:52.892 "name": "raid_bdev1", 00:09:52.892 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:52.892 "strip_size_kb": 0, 00:09:52.892 "state": "configuring", 00:09:52.892 "raid_level": "raid1", 00:09:52.892 "superblock": true, 00:09:52.892 "num_base_bdevs": 4, 00:09:52.892 "num_base_bdevs_discovered": 1, 00:09:52.892 "num_base_bdevs_operational": 4, 00:09:52.892 "base_bdevs_list": [ 00:09:52.892 { 00:09:52.892 "name": "pt1", 00:09:52.892 "uuid": "d199ea4d-f972-dc55-a8a5-bdcf872be790", 00:09:52.892 "is_configured": true, 00:09:52.892 "data_offset": 2048, 00:09:52.892 "data_size": 63488 00:09:52.892 }, 00:09:52.892 { 00:09:52.892 "name": null, 00:09:52.892 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:52.892 "is_configured": false, 00:09:52.892 "data_offset": 2048, 00:09:52.892 "data_size": 63488 00:09:52.892 }, 00:09:52.892 { 00:09:52.892 "name": null, 00:09:52.892 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:52.892 "is_configured": false, 00:09:52.892 "data_offset": 2048, 00:09:52.892 "data_size": 63488 00:09:52.892 }, 00:09:52.892 { 00:09:52.892 "name": null, 00:09:52.892 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:52.892 "is_configured": false, 00:09:52.892 "data_offset": 2048, 00:09:52.892 "data_size": 63488 00:09:52.892 } 00:09:52.892 ] 00:09:52.892 }' 00:09:52.892 20:47:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:52.892 20:47:43 -- common/autotest_common.sh@10 -- # set +x 00:09:53.151 20:47:44 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:53.151 20:47:44 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.409 [2024-04-16 20:47:44.336063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.409 [2024-04-16 20:47:44.336097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.409 [2024-04-16 20:47:44.336136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16f780 00:09:53.410 [2024-04-16 20:47:44.336143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.410 [2024-04-16 20:47:44.336242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.410 [2024-04-16 20:47:44.336253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.410 [2024-04-16 20:47:44.336267] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:53.410 [2024-04-16 20:47:44.336274] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.410 pt2 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:53.410 [2024-04-16 20:47:44.516080] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.410 20:47:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.669 20:47:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:53.669 "name": "raid_bdev1", 00:09:53.669 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:53.669 "strip_size_kb": 0, 00:09:53.669 "state": "configuring", 00:09:53.669 "raid_level": "raid1", 00:09:53.669 "superblock": true, 00:09:53.669 "num_base_bdevs": 4, 00:09:53.669 "num_base_bdevs_discovered": 1, 00:09:53.669 "num_base_bdevs_operational": 4, 00:09:53.669 "base_bdevs_list": [ 00:09:53.669 { 00:09:53.669 "name": "pt1", 00:09:53.669 "uuid": "d199ea4d-f972-dc55-a8a5-bdcf872be790", 00:09:53.669 "is_configured": true, 00:09:53.669 "data_offset": 2048, 00:09:53.669 "data_size": 63488 00:09:53.669 }, 00:09:53.669 { 00:09:53.669 "name": null, 00:09:53.669 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:53.669 "is_configured": false, 00:09:53.669 "data_offset": 2048, 00:09:53.669 "data_size": 63488 00:09:53.669 }, 00:09:53.669 { 00:09:53.669 "name": null, 00:09:53.669 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:53.669 "is_configured": false, 00:09:53.669 "data_offset": 2048, 00:09:53.669 "data_size": 63488 00:09:53.669 }, 00:09:53.669 { 00:09:53.669 "name": null, 00:09:53.669 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:53.669 "is_configured": false, 00:09:53.669 "data_offset": 2048, 00:09:53.669 "data_size": 63488 00:09:53.669 } 00:09:53.669 ] 00:09:53.669 }' 00:09:53.669 20:47:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:53.669 20:47:44 -- common/autotest_common.sh@10 -- # set +x 00:09:53.927 20:47:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:53.927 20:47:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:53.927 20:47:44 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.185 [2024-04-16 20:47:45.148141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.185 [2024-04-16 20:47:45.148174] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.185 [2024-04-16 20:47:45.148195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16f780 00:09:54.185 [2024-04-16 20:47:45.148201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.185 [2024-04-16 20:47:45.148290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.185 [2024-04-16 20:47:45.148297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.185 [2024-04-16 20:47:45.148310] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:54.185 [2024-04-16 20:47:45.148316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.185 pt2 00:09:54.185 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:54.185 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:54.185 20:47:45 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.445 [2024-04-16 20:47:45.328161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.445 [2024-04-16 20:47:45.328193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.445 [2024-04-16 20:47:45.328210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170b80 00:09:54.445 [2024-04-16 20:47:45.328216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.445 [2024-04-16 20:47:45.328286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.445 [2024-04-16 20:47:45.328293] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.445 [2024-04-16 20:47:45.328305] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:54.445 [2024-04-16 20:47:45.328310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.445 pt3 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:54.445 [2024-04-16 20:47:45.508177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:54.445 [2024-04-16 20:47:45.508201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.445 [2024-04-16 20:47:45.508215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170900 00:09:54.445 [2024-04-16 20:47:45.508221] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.445 [2024-04-16 20:47:45.508285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.445 [2024-04-16 20:47:45.508291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:54.445 [2024-04-16 20:47:45.508302] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:54.445 [2024-04-16 20:47:45.508309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:54.445 [2024-04-16 20:47:45.508328] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b16fc80 00:09:54.445 [2024-04-16 20:47:45.508331] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.445 [2024-04-16 20:47:45.508345] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b1d2e20 00:09:54.445 [2024-04-16 20:47:45.508379] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b16fc80 00:09:54.445 [2024-04-16 20:47:45.508382] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b16fc80 00:09:54.445 [2024-04-16 20:47:45.508396] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.445 pt4 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.445 20:47:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.705 20:47:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:54.705 "name": "raid_bdev1", 00:09:54.705 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:54.705 "strip_size_kb": 0, 00:09:54.705 "state": "online", 00:09:54.705 "raid_level": "raid1", 00:09:54.705 "superblock": true, 00:09:54.705 "num_base_bdevs": 4, 00:09:54.705 "num_base_bdevs_discovered": 4, 00:09:54.705 "num_base_bdevs_operational": 4, 00:09:54.705 "base_bdevs_list": [ 00:09:54.705 { 00:09:54.705 "name": "pt1", 00:09:54.705 "uuid": "d199ea4d-f972-dc55-a8a5-bdcf872be790", 00:09:54.705 "is_configured": true, 00:09:54.705 "data_offset": 2048, 00:09:54.705 "data_size": 63488 00:09:54.705 }, 00:09:54.705 { 00:09:54.705 "name": "pt2", 00:09:54.705 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:54.705 "is_configured": true, 00:09:54.705 "data_offset": 2048, 00:09:54.705 "data_size": 63488 00:09:54.705 }, 00:09:54.705 { 00:09:54.705 "name": "pt3", 00:09:54.705 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:54.705 "is_configured": true, 00:09:54.705 "data_offset": 2048, 00:09:54.705 "data_size": 63488 00:09:54.705 }, 00:09:54.705 { 00:09:54.705 "name": "pt4", 00:09:54.705 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:54.705 "is_configured": true, 00:09:54.705 "data_offset": 2048, 00:09:54.705 "data_size": 63488 00:09:54.705 } 00:09:54.705 ] 00:09:54.705 }' 00:09:54.705 20:47:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:54.705 20:47:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.965 20:47:45 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:09:54.965 20:47:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:55.224 [2024-04-16 20:47:46.144264] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@430 -- # '[' 916979dd-fc32-11ee-80f8-ef3e42bb1492 '!=' 916979dd-fc32-11ee-80f8-ef3e42bb1492 ']' 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:55.224 [2024-04-16 20:47:46.328262] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.224 20:47:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.484 20:47:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:55.484 "name": "raid_bdev1", 00:09:55.484 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:55.484 "strip_size_kb": 0, 00:09:55.484 "state": "online", 00:09:55.484 "raid_level": "raid1", 00:09:55.484 "superblock": true, 00:09:55.484 "num_base_bdevs": 4, 00:09:55.484 "num_base_bdevs_discovered": 3, 00:09:55.484 "num_base_bdevs_operational": 3, 00:09:55.484 "base_bdevs_list": [ 00:09:55.484 { 00:09:55.484 "name": null, 00:09:55.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.484 "is_configured": false, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 }, 00:09:55.484 { 00:09:55.484 "name": "pt2", 00:09:55.484 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:55.484 "is_configured": true, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 }, 00:09:55.484 { 00:09:55.484 "name": "pt3", 00:09:55.484 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:55.484 "is_configured": true, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 }, 00:09:55.484 { 00:09:55.484 "name": "pt4", 00:09:55.484 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:55.484 "is_configured": true, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 } 00:09:55.484 ] 00:09:55.484 }' 00:09:55.484 20:47:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:55.484 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.743 20:47:46 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:56.003 [2024-04-16 
20:47:46.944314] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.003 [2024-04-16 20:47:46.944328] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.003 [2024-04-16 20:47:46.944337] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.003 [2024-04-16 20:47:46.944349] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.003 [2024-04-16 20:47:46.944353] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b16fc80 name raid_bdev1, state offline 00:09:56.003 20:47:46 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.003 20:47:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:56.263 20:47:47 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:56.523 20:47:47 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.783 [2024-04-16 20:47:47.792401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.783 [2024-04-16 20:47:47.792435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.783 [2024-04-16 20:47:47.792477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170900 00:09:56.783 [2024-04-16 20:47:47.792483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.783 [2024-04-16 20:47:47.792969] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.783 [2024-04-16 20:47:47.792991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.783 [2024-04-16 20:47:47.793008] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:56.783 [2024-04-16 20:47:47.793017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.783 pt2 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:56.783 
20:47:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.783 20:47:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.042 20:47:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:57.042 "name": "raid_bdev1", 00:09:57.042 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:57.042 "strip_size_kb": 0, 00:09:57.042 "state": "configuring", 00:09:57.042 "raid_level": "raid1", 00:09:57.042 "superblock": true, 00:09:57.042 "num_base_bdevs": 4, 00:09:57.042 "num_base_bdevs_discovered": 1, 00:09:57.042 "num_base_bdevs_operational": 3, 00:09:57.042 "base_bdevs_list": [ 00:09:57.042 { 00:09:57.042 "name": null, 00:09:57.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.042 "is_configured": false, 00:09:57.042 "data_offset": 2048, 00:09:57.042 "data_size": 63488 00:09:57.042 }, 00:09:57.042 { 00:09:57.042 "name": "pt2", 00:09:57.042 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:57.042 "is_configured": true, 00:09:57.043 "data_offset": 2048, 00:09:57.043 "data_size": 63488 00:09:57.043 }, 00:09:57.043 { 00:09:57.043 "name": null, 00:09:57.043 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:57.043 "is_configured": false, 00:09:57.043 "data_offset": 2048, 00:09:57.043 "data_size": 63488 00:09:57.043 }, 00:09:57.043 { 00:09:57.043 "name": null, 00:09:57.043 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:57.043 "is_configured": false, 00:09:57.043 "data_offset": 2048, 00:09:57.043 "data_size": 63488 00:09:57.043 } 00:09:57.043 ] 00:09:57.043 }' 00:09:57.043 20:47:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:57.043 20:47:47 -- common/autotest_common.sh@10 -- # set +x 00:09:57.304 20:47:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:09:57.304 20:47:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:57.304 20:47:48 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.304 [2024-04-16 20:47:48.420471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.304 [2024-04-16 20:47:48.420501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.304 [2024-04-16 20:47:48.420524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170680 00:09:57.304 [2024-04-16 20:47:48.420545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.304 [2024-04-16 20:47:48.420616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.304 [2024-04-16 20:47:48.420622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.304 [2024-04-16 20:47:48.420636] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:09:57.304 [2024-04-16 20:47:48.420641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.304 pt3 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.579 20:47:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:57.579 "name": "raid_bdev1", 00:09:57.579 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:57.579 "strip_size_kb": 0, 00:09:57.580 "state": "configuring", 00:09:57.580 "raid_level": "raid1", 00:09:57.580 "superblock": true, 00:09:57.580 "num_base_bdevs": 4, 00:09:57.580 "num_base_bdevs_discovered": 2, 00:09:57.580 "num_base_bdevs_operational": 3, 00:09:57.580 "base_bdevs_list": [ 00:09:57.580 { 00:09:57.580 "name": null, 00:09:57.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.580 "is_configured": false, 00:09:57.580 "data_offset": 2048, 00:09:57.580 "data_size": 63488 00:09:57.580 }, 00:09:57.580 { 00:09:57.580 "name": "pt2", 00:09:57.580 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:57.580 "is_configured": true, 00:09:57.580 "data_offset": 2048, 00:09:57.580 "data_size": 63488 00:09:57.580 }, 00:09:57.580 { 00:09:57.580 "name": "pt3", 00:09:57.580 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:57.580 "is_configured": true, 00:09:57.580 "data_offset": 2048, 00:09:57.580 "data_size": 63488 00:09:57.580 }, 00:09:57.580 { 00:09:57.580 "name": null, 00:09:57.580 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:57.580 "is_configured": false, 00:09:57.580 "data_offset": 2048, 00:09:57.580 "data_size": 63488 00:09:57.580 } 00:09:57.580 ] 00:09:57.580 }' 00:09:57.580 20:47:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:57.580 20:47:48 -- common/autotest_common.sh@10 -- # set +x 00:09:57.839 20:47:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:09:57.839 20:47:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:57.839 20:47:48 -- bdev/bdev_raid.sh@462 -- # i=3 00:09:57.839 20:47:48 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:58.099 [2024-04-16 20:47:49.040536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:58.099 [2024-04-16 20:47:49.040567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.099 [2024-04-16 20:47:49.040586] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16fc80 00:09:58.099 [2024-04-16 20:47:49.040592] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.099 [2024-04-16 20:47:49.040656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.099 [2024-04-16 20:47:49.040662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:58.099 [2024-04-16 20:47:49.040675] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:58.099 [2024-04-16 20:47:49.040680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:58.099 [2024-04-16 20:47:49.040701] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b16f780 00:09:58.099 [2024-04-16 20:47:49.040704] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.099 [2024-04-16 20:47:49.040718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b1d2e20 00:09:58.099 [2024-04-16 20:47:49.040752] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b16f780 00:09:58.099 [2024-04-16 20:47:49.040798] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b16f780 00:09:58.099 [2024-04-16 20:47:49.040814] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.099 pt4 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.099 20:47:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.358 20:47:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:58.358 "name": "raid_bdev1", 00:09:58.358 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:58.358 "strip_size_kb": 0, 00:09:58.358 "state": "online", 00:09:58.358 "raid_level": "raid1", 00:09:58.358 "superblock": true, 00:09:58.358 "num_base_bdevs": 4, 00:09:58.358 "num_base_bdevs_discovered": 3, 00:09:58.358 "num_base_bdevs_operational": 3, 00:09:58.358 "base_bdevs_list": [ 00:09:58.358 { 00:09:58.358 "name": null, 00:09:58.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.358 "is_configured": false, 00:09:58.358 "data_offset": 2048, 00:09:58.358 "data_size": 63488 00:09:58.358 }, 00:09:58.358 { 00:09:58.358 "name": "pt2", 00:09:58.358 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:58.358 "is_configured": true, 00:09:58.358 "data_offset": 2048, 00:09:58.358 "data_size": 63488 00:09:58.358 }, 00:09:58.358 { 00:09:58.358 "name": "pt3", 00:09:58.358 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:58.358 "is_configured": true, 00:09:58.358 "data_offset": 2048, 00:09:58.358 "data_size": 63488 00:09:58.358 }, 00:09:58.358 { 00:09:58.358 "name": "pt4", 00:09:58.358 "uuid": 
"c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:58.358 "is_configured": true, 00:09:58.358 "data_offset": 2048, 00:09:58.358 "data_size": 63488 00:09:58.358 } 00:09:58.358 ] 00:09:58.358 }' 00:09:58.358 20:47:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:58.358 20:47:49 -- common/autotest_common.sh@10 -- # set +x 00:09:58.617 20:47:49 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:09:58.617 20:47:49 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:58.617 [2024-04-16 20:47:49.664591] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.617 [2024-04-16 20:47:49.664605] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.617 [2024-04-16 20:47:49.664618] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.617 [2024-04-16 20:47:49.664630] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.617 [2024-04-16 20:47:49.664632] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b16f780 name raid_bdev1, state offline 00:09:58.617 20:47:49 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.617 20:47:49 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:09:58.877 20:47:49 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:09:58.877 20:47:49 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:09:58.877 20:47:49 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.136 [2024-04-16 20:47:50.032638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.136 [2024-04-16 20:47:50.032682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.136 [2024-04-16 20:47:50.032706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170b80 00:09:59.136 [2024-04-16 20:47:50.032712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.136 [2024-04-16 20:47:50.033178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.136 [2024-04-16 20:47:50.033206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.136 [2024-04-16 20:47:50.033223] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:59.136 [2024-04-16 20:47:50.033241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.136 pt1 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.136 20:47:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:59.136 "name": "raid_bdev1", 00:09:59.136 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:09:59.136 "strip_size_kb": 0, 00:09:59.136 "state": "configuring", 00:09:59.136 "raid_level": "raid1", 00:09:59.136 "superblock": true, 00:09:59.136 "num_base_bdevs": 4, 00:09:59.136 "num_base_bdevs_discovered": 1, 00:09:59.136 "num_base_bdevs_operational": 4, 00:09:59.136 "base_bdevs_list": [ 00:09:59.136 { 00:09:59.136 "name": "pt1", 00:09:59.136 "uuid": "d199ea4d-f972-dc55-a8a5-bdcf872be790", 00:09:59.137 "is_configured": true, 00:09:59.137 "data_offset": 2048, 00:09:59.137 "data_size": 63488 00:09:59.137 }, 00:09:59.137 { 00:09:59.137 "name": null, 00:09:59.137 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:09:59.137 "is_configured": false, 00:09:59.137 "data_offset": 2048, 00:09:59.137 "data_size": 63488 00:09:59.137 }, 00:09:59.137 { 00:09:59.137 "name": null, 00:09:59.137 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:09:59.137 "is_configured": false, 00:09:59.137 "data_offset": 2048, 00:09:59.137 "data_size": 63488 00:09:59.137 }, 00:09:59.137 { 00:09:59.137 "name": null, 00:09:59.137 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:09:59.137 "is_configured": false, 00:09:59.137 "data_offset": 2048, 00:09:59.137 "data_size": 63488 00:09:59.137 } 00:09:59.137 ] 00:09:59.137 }' 00:09:59.137 20:47:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:59.137 20:47:50 -- common/autotest_common.sh@10 -- # set +x 00:09:59.396 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:09:59.396 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:59.396 20:47:50 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:59.656 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:59.656 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:59.656 20:47:50 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:59.915 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:59.915 20:47:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:59.915 20:47:50 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:59.915 20:47:51 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:59.915 20:47:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:59.915 20:47:51 -- bdev/bdev_raid.sh@489 -- # i=3 00:09:59.915 20:47:51 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:00.175 [2024-04-16 20:47:51.172749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:00.175 [2024-04-16 20:47:51.172781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.175 [2024-04-16 20:47:51.172821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b16fc80 00:10:00.175 [2024-04-16 20:47:51.172828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.175 [2024-04-16 
20:47:51.172901] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.175 [2024-04-16 20:47:51.172908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:00.175 [2024-04-16 20:47:51.172921] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:00.175 [2024-04-16 20:47:51.172926] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:00.175 [2024-04-16 20:47:51.172929] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.175 [2024-04-16 20:47:51.172933] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b170180 name raid_bdev1, state configuring 00:10:00.175 [2024-04-16 20:47:51.172943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:00.175 pt4 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.175 20:47:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.176 20:47:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.176 20:47:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.176 20:47:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.435 20:47:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.435 "name": "raid_bdev1", 00:10:00.435 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:10:00.435 "strip_size_kb": 0, 00:10:00.435 "state": "configuring", 00:10:00.435 "raid_level": "raid1", 00:10:00.435 "superblock": true, 00:10:00.435 "num_base_bdevs": 4, 00:10:00.435 "num_base_bdevs_discovered": 1, 00:10:00.435 "num_base_bdevs_operational": 3, 00:10:00.435 "base_bdevs_list": [ 00:10:00.435 { 00:10:00.435 "name": null, 00:10:00.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 2048, 00:10:00.436 "data_size": 63488 00:10:00.436 }, 00:10:00.436 { 00:10:00.436 "name": null, 00:10:00.436 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 2048, 00:10:00.436 "data_size": 63488 00:10:00.436 }, 00:10:00.436 { 00:10:00.436 "name": null, 00:10:00.436 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 2048, 00:10:00.436 "data_size": 63488 00:10:00.436 }, 00:10:00.436 { 00:10:00.436 "name": "pt4", 00:10:00.436 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:10:00.436 "is_configured": true, 00:10:00.436 "data_offset": 2048, 00:10:00.436 "data_size": 63488 00:10:00.436 } 00:10:00.436 ] 00:10:00.436 }' 00:10:00.436 20:47:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:00.436 20:47:51 -- common/autotest_common.sh@10 -- # set +x 00:10:00.695 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i 
= 1 )) 00:10:00.695 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:00.695 20:47:51 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.695 [2024-04-16 20:47:51.784824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.695 [2024-04-16 20:47:51.784856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.695 [2024-04-16 20:47:51.784891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170680 00:10:00.695 [2024-04-16 20:47:51.784897] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.695 [2024-04-16 20:47:51.784956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.695 [2024-04-16 20:47:51.784966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.695 [2024-04-16 20:47:51.784978] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:00.695 [2024-04-16 20:47:51.784984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.695 pt2 00:10:00.695 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:10:00.696 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:00.696 20:47:51 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.955 [2024-04-16 20:47:51.964841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.955 [2024-04-16 20:47:51.964868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.955 [2024-04-16 20:47:51.964882] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b170900 00:10:00.955 [2024-04-16 20:47:51.964887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.955 [2024-04-16 20:47:51.964935] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.955 [2024-04-16 20:47:51.964941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.955 [2024-04-16 20:47:51.964952] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:00.955 [2024-04-16 20:47:51.964956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.955 [2024-04-16 20:47:51.964974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b170180 00:10:00.955 [2024-04-16 20:47:51.964977] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.955 [2024-04-16 20:47:51.965003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b1d2e20 00:10:00.955 [2024-04-16 20:47:51.965032] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b170180 00:10:00.955 [2024-04-16 20:47:51.965034] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b170180 00:10:00.955 [2024-04-16 20:47:51.965049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.955 pt3 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:00.955 20:47:51 -- 
bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.955 20:47:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.215 20:47:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:01.215 "name": "raid_bdev1", 00:10:01.215 "uuid": "916979dd-fc32-11ee-80f8-ef3e42bb1492", 00:10:01.215 "strip_size_kb": 0, 00:10:01.215 "state": "online", 00:10:01.215 "raid_level": "raid1", 00:10:01.215 "superblock": true, 00:10:01.215 "num_base_bdevs": 4, 00:10:01.215 "num_base_bdevs_discovered": 3, 00:10:01.215 "num_base_bdevs_operational": 3, 00:10:01.215 "base_bdevs_list": [ 00:10:01.215 { 00:10:01.215 "name": null, 00:10:01.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.215 "is_configured": false, 00:10:01.215 "data_offset": 2048, 00:10:01.215 "data_size": 63488 00:10:01.215 }, 00:10:01.215 { 00:10:01.215 "name": "pt2", 00:10:01.215 "uuid": "42fc614d-ca83-6c50-bd7e-42d6dd7aff0b", 00:10:01.215 "is_configured": true, 00:10:01.215 "data_offset": 2048, 00:10:01.215 "data_size": 63488 00:10:01.215 }, 00:10:01.215 { 00:10:01.215 "name": "pt3", 00:10:01.215 "uuid": "3239d7c5-b5be-2051-a8ec-947e1f411877", 00:10:01.215 "is_configured": true, 00:10:01.215 "data_offset": 2048, 00:10:01.215 "data_size": 63488 00:10:01.215 }, 00:10:01.215 { 00:10:01.215 "name": "pt4", 00:10:01.215 "uuid": "c20d3f21-0caf-395a-9242-fb6dc69fbddd", 00:10:01.215 "is_configured": true, 00:10:01.215 "data_offset": 2048, 00:10:01.215 "data_size": 63488 00:10:01.215 } 00:10:01.215 ] 00:10:01.215 }' 00:10:01.215 20:47:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:01.215 20:47:52 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 20:47:52 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:01.475 20:47:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:10:01.475 [2024-04-16 20:47:52.584930] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.475 20:47:52 -- bdev/bdev_raid.sh@506 -- # '[' 916979dd-fc32-11ee-80f8-ef3e42bb1492 '!=' 916979dd-fc32-11ee-80f8-ef3e42bb1492 ']' 00:10:01.475 20:47:52 -- bdev/bdev_raid.sh@511 -- # killprocess 53443 00:10:01.475 20:47:52 -- common/autotest_common.sh@926 -- # '[' -z 53443 ']' 00:10:01.475 20:47:52 -- common/autotest_common.sh@930 -- # kill -0 53443 00:10:01.475 20:47:52 -- common/autotest_common.sh@931 -- # uname 00:10:01.735 20:47:52 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:01.735 20:47:52 -- common/autotest_common.sh@934 -- # ps -c -o command 53443 00:10:01.735 20:47:52 -- common/autotest_common.sh@934 -- # tail -1 00:10:01.735 
20:47:52 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:10:01.735 20:47:52 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:10:01.735 killing process with pid 53443 00:10:01.735 20:47:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53443' 00:10:01.735 20:47:52 -- common/autotest_common.sh@945 -- # kill 53443 00:10:01.735 [2024-04-16 20:47:52.614823] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.735 [2024-04-16 20:47:52.614837] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.735 [2024-04-16 20:47:52.614858] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.735 [2024-04-16 20:47:52.614861] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b170180 name raid_bdev1, state offline 00:10:01.735 20:47:52 -- common/autotest_common.sh@950 -- # wait 53443 00:10:01.735 [2024-04-16 20:47:52.633339] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.735 20:47:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:01.735 00:10:01.735 real 0m13.937s 00:10:01.735 user 0m24.771s 00:10:01.735 sys 0m2.420s 00:10:01.735 20:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.735 20:47:52 -- common/autotest_common.sh@10 -- # set +x 00:10:01.735 ************************************ 00:10:01.735 END TEST raid_superblock_test 00:10:01.735 ************************************ 00:10:01.735 20:47:52 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:10:01.735 20:47:52 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:10:01.735 20:47:52 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:10:01.735 00:10:01.735 real 3m37.756s 00:10:01.735 user 6m13.246s 00:10:01.735 sys 0m41.103s 00:10:01.735 20:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.735 20:47:52 -- common/autotest_common.sh@10 -- # set +x 00:10:01.735 ************************************ 00:10:01.735 END TEST bdev_raid 00:10:01.735 ************************************ 00:10:01.995 20:47:52 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:10:01.995 20:47:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:01.995 20:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.995 20:47:52 -- common/autotest_common.sh@10 -- # set +x 00:10:01.995 ************************************ 00:10:01.995 START TEST bdevperf_config 00:10:01.995 ************************************ 00:10:01.995 20:47:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:10:01.995 * Looking for test storage... 
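Every raid assertion in the trace above flows through the target's RPC socket. A condensed sketch of that state check, with the socket path, RPC name, and jq filter taken from the trace and the surrounding helper body assumed rather than copied from bdev_raid.sh:

    verify_raid_bdev_state() {
        # Condensed sketch of the helper traced above; $1 = bdev name, $2 = expected state.
        local info
        info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r --arg n "$1" '.[] | select(.name == $n)')
        # Fail unless the target reports the expected lifecycle state.
        [[ $(jq -r '.state' <<< "$info") == "$2" ]]
    }

Called as verify_raid_bdev_state raid_bdev1 online, it fails the test whenever the target reports a different state than expected, which is exactly what the configuring/online checks above exercise.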
00:10:01.995 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:10:01.995 20:47:53 -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:10:01.995 20:47:53 -- bdevperf/common.sh@8 -- # local job_section=global 00:10:01.995 20:47:53 -- bdevperf/common.sh@9 -- # local rw=read 00:10:01.995 20:47:53 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:01.995 20:47:53 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:10:01.995 20:47:53 -- bdevperf/common.sh@13 -- # cat 00:10:01.995 20:47:53 -- bdevperf/common.sh@18 -- # job='[global]' 00:10:01.995 00:10:01.995 20:47:53 -- bdevperf/common.sh@19 -- # echo 00:10:01.995 20:47:53 -- bdevperf/common.sh@20 -- # cat 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@18 -- # create_job job0 00:10:01.995 20:47:53 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:01.995 20:47:53 -- bdevperf/common.sh@9 -- # local rw= 00:10:01.995 20:47:53 -- bdevperf/common.sh@10 -- # local filename= 00:10:01.995 20:47:53 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:01.995 20:47:53 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:01.995 00:10:01.995 20:47:53 -- bdevperf/common.sh@19 -- # echo 00:10:01.995 20:47:53 -- bdevperf/common.sh@20 -- # cat 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@19 -- # create_job job1 00:10:01.995 20:47:53 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:01.995 20:47:53 -- bdevperf/common.sh@9 -- # local rw= 00:10:01.995 20:47:53 -- bdevperf/common.sh@10 -- # local filename= 00:10:01.995 20:47:53 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:01.995 20:47:53 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:01.995 00:10:01.995 20:47:53 -- bdevperf/common.sh@19 -- # echo 00:10:01.995 20:47:53 -- bdevperf/common.sh@20 -- # cat 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@20 -- # create_job job2 00:10:01.995 20:47:53 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:01.995 20:47:53 -- bdevperf/common.sh@9 -- # local rw= 00:10:01.995 20:47:53 -- bdevperf/common.sh@10 -- # local filename= 00:10:01.995 20:47:53 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:01.995 20:47:53 -- bdevperf/common.sh@18 -- # job='[job2]' 00:10:01.995 00:10:01.995 20:47:53 -- bdevperf/common.sh@19 -- # echo 00:10:01.995 20:47:53 -- bdevperf/common.sh@20 -- # cat 00:10:01.995 20:47:53 -- bdevperf/test_config.sh@21 -- # create_job job3 00:10:01.995 20:47:53 -- bdevperf/common.sh@8 -- # local job_section=job3 00:10:01.995 20:47:53 -- bdevperf/common.sh@9 -- # local rw= 00:10:01.995 20:47:53 -- bdevperf/common.sh@10 -- # local filename= 00:10:01.995 20:47:53 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:10:01.995 20:47:53 -- bdevperf/common.sh@18 -- # job='[job3]' 00:10:01.995 00:10:01.995 20:47:53 -- bdevperf/common.sh@19 -- # echo 00:10:01.995 20:47:53 -- bdevperf/common.sh@20 -- # cat 00:10:01.995 20:47:53 -- 
bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:05.287 20:47:55 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-16 20:47:53.102226] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:05.287 [2024-04-16 20:47:53.102576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:05.287 Using job config with 4 jobs 00:10:05.287 EAL: TSC is not safe to use in SMP mode 00:10:05.287 EAL: TSC is not invariant 00:10:05.287 [2024-04-16 20:47:53.549230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.287 [2024-04-16 20:47:53.638511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.287 cpumask for '\''job0'\'' is too big 00:10:05.288 cpumask for '\''job1'\'' is too big 00:10:05.288 cpumask for '\''job2'\'' is too big 00:10:05.288 cpumask for '\''job3'\'' is too big 00:10:05.288 Running I/O for 2 seconds... 00:10:05.288 00:10:05.288 Latency(us) 00:10:05.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419496.38 409.66 0.00 0.00 610.07 159.76 1171.00 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419484.01 409.65 0.00 0.00 610.00 137.45 999.63 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419472.06 409.64 0.00 0.00 609.91 140.13 838.98 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419544.64 409.71 0.00 0.00 609.71 49.98 789.00 00:10:05.288 =================================================================================================================== 00:10:05.288 Total : 1677997.09 1638.67 0.00 0.00 609.92 49.98 1171.00' 00:10:05.288 20:47:55 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-16 20:47:53.102226] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:05.288 [2024-04-16 20:47:53.102576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:05.288 Using job config with 4 jobs 00:10:05.288 EAL: TSC is not safe to use in SMP mode 00:10:05.288 EAL: TSC is not invariant 00:10:05.288 [2024-04-16 20:47:53.549230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.288 [2024-04-16 20:47:53.638511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.288 cpumask for '\''job0'\'' is too big 00:10:05.288 cpumask for '\''job1'\'' is too big 00:10:05.288 cpumask for '\''job2'\'' is too big 00:10:05.288 cpumask for '\''job3'\'' is too big 00:10:05.288 Running I/O for 2 seconds... 
00:10:05.288 00:10:05.288 Latency(us) 00:10:05.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419496.38 409.66 0.00 0.00 610.07 159.76 1171.00 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419484.01 409.65 0.00 0.00 610.00 137.45 999.63 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419472.06 409.64 0.00 0.00 609.91 140.13 838.98 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419544.64 409.71 0.00 0.00 609.71 49.98 789.00 00:10:05.288 =================================================================================================================== 00:10:05.288 Total : 1677997.09 1638.67 0.00 0.00 609.92 49.98 1171.00' 00:10:05.288 20:47:55 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:05.288 20:47:55 -- bdevperf/common.sh@32 -- # echo '[2024-04-16 20:47:53.102226] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:05.288 [2024-04-16 20:47:53.102576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:05.288 Using job config with 4 jobs 00:10:05.288 EAL: TSC is not safe to use in SMP mode 00:10:05.288 EAL: TSC is not invariant 00:10:05.288 [2024-04-16 20:47:53.549230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.288 [2024-04-16 20:47:53.638511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.288 cpumask for '\''job0'\'' is too big 00:10:05.288 cpumask for '\''job1'\'' is too big 00:10:05.288 cpumask for '\''job2'\'' is too big 00:10:05.288 cpumask for '\''job3'\'' is too big 00:10:05.288 Running I/O for 2 seconds... 00:10:05.288 00:10:05.288 Latency(us) 00:10:05.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419496.38 409.66 0.00 0.00 610.07 159.76 1171.00 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419484.01 409.65 0.00 0.00 610.00 137.45 999.63 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419472.06 409.64 0.00 0.00 609.91 140.13 838.98 00:10:05.288 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:05.288 Malloc0 : 2.00 419544.64 409.71 0.00 0.00 609.71 49.98 789.00 00:10:05.288 =================================================================================================================== 00:10:05.288 Total : 1677997.09 1638.67 0.00 0.00 609.92 49.98 1171.00' 00:10:05.288 20:47:55 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:05.288 20:47:55 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:10:05.288 20:47:55 -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:05.288 [2024-04-16 20:47:55.836275] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
00:10:05.288 [2024-04-16 20:47:55.836412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:05.288 EAL: TSC is not safe to use in SMP mode 00:10:05.288 EAL: TSC is not invariant 00:10:05.288 [2024-04-16 20:47:56.252139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.288 [2024-04-16 20:47:56.328123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.288 cpumask for 'job0' is too big 00:10:05.288 cpumask for 'job1' is too big 00:10:05.288 cpumask for 'job2' is too big 00:10:05.288 cpumask for 'job3' is too big 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:10:07.838 Running I/O for 2 seconds... 00:10:07.838 00:10:07.838 Latency(us) 00:10:07.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:07.838 Malloc0 : 2.00 424776.28 414.82 0.00 0.00 602.48 160.66 1235.26 00:10:07.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:07.838 Malloc0 : 2.00 424785.98 414.83 0.00 0.00 602.38 148.16 1063.90 00:10:07.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:07.838 Malloc0 : 2.00 424773.46 414.82 0.00 0.00 602.29 154.41 878.25 00:10:07.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:07.838 Malloc0 : 2.00 424754.36 414.80 0.00 0.00 602.20 159.76 774.72 00:10:07.838 =================================================================================================================== 00:10:07.838 Total : 1699090.07 1659.27 0.00 0.00 602.34 148.16 1235.26' 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@27 -- # cleanup 00:10:07.838 20:47:58 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:07.838 20:47:58 -- bdevperf/common.sh@9 -- # local rw=write 00:10:07.838 20:47:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:07.838 20:47:58 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:07.838 00:10:07.838 20:47:58 -- bdevperf/common.sh@19 -- # echo 00:10:07.838 20:47:58 -- bdevperf/common.sh@20 -- # cat 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:07.838 20:47:58 -- bdevperf/common.sh@9 -- # local rw=write 00:10:07.838 20:47:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:07.838 20:47:58 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:07.838 00:10:07.838 20:47:58 -- bdevperf/common.sh@19 -- # echo 00:10:07.838 20:47:58 -- bdevperf/common.sh@20 -- # cat 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:07.838 20:47:58 -- bdevperf/common.sh@9 -- # local rw=write 00:10:07.838 20:47:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:07.838 20:47:58 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:07.838 20:47:58 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:10:07.838 00:10:07.838 20:47:58 -- bdevperf/common.sh@19 -- # echo 00:10:07.838 20:47:58 -- bdevperf/common.sh@20 -- # cat 00:10:07.838 20:47:58 -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:10.376 20:48:01 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-16 20:47:58.543225] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:10.376 [2024-04-16 20:47:58.543582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:10.376 Using job config with 3 jobs 00:10:10.376 EAL: TSC is not safe to use in SMP mode 00:10:10.376 EAL: TSC is not invariant 00:10:10.376 [2024-04-16 20:47:58.968748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.376 [2024-04-16 20:47:59.059020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.376 cpumask for '\''job0'\'' is too big 00:10:10.376 cpumask for '\''job1'\'' is too big 00:10:10.376 cpumask for '\''job2'\'' is too big 00:10:10.376 Running I/O for 2 seconds... 00:10:10.376 00:10:10.376 Latency(us) 00:10:10.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522367.52 510.12 0.00 0.00 489.90 185.65 860.40 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522351.33 510.11 0.00 0.00 489.82 151.73 710.45 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522336.98 510.09 0.00 0.00 489.75 141.91 614.06 00:10:10.376 =================================================================================================================== 00:10:10.376 Total : 1567055.83 1530.33 0.00 0.00 489.82 141.91 860.40' 00:10:10.376 20:48:01 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-16 20:47:58.543225] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:10.376 [2024-04-16 20:47:58.543582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:10.376 Using job config with 3 jobs 00:10:10.376 EAL: TSC is not safe to use in SMP mode 00:10:10.376 EAL: TSC is not invariant 00:10:10.376 [2024-04-16 20:47:58.968748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.376 [2024-04-16 20:47:59.059020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.376 cpumask for '\''job0'\'' is too big 00:10:10.376 cpumask for '\''job1'\'' is too big 00:10:10.376 cpumask for '\''job2'\'' is too big 00:10:10.376 Running I/O for 2 seconds... 
00:10:10.376 00:10:10.376 Latency(us) 00:10:10.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522367.52 510.12 0.00 0.00 489.90 185.65 860.40 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522351.33 510.11 0.00 0.00 489.82 151.73 710.45 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522336.98 510.09 0.00 0.00 489.75 141.91 614.06 00:10:10.376 =================================================================================================================== 00:10:10.376 Total : 1567055.83 1530.33 0.00 0.00 489.82 141.91 860.40' 00:10:10.376 20:48:01 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:10.376 20:48:01 -- bdevperf/common.sh@32 -- # echo '[2024-04-16 20:47:58.543225] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:10.376 [2024-04-16 20:47:58.543582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:10.376 Using job config with 3 jobs 00:10:10.376 EAL: TSC is not safe to use in SMP mode 00:10:10.376 EAL: TSC is not invariant 00:10:10.376 [2024-04-16 20:47:58.968748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.376 [2024-04-16 20:47:59.059020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.376 cpumask for '\''job0'\'' is too big 00:10:10.376 cpumask for '\''job1'\'' is too big 00:10:10.376 cpumask for '\''job2'\'' is too big 00:10:10.376 Running I/O for 2 seconds... 
00:10:10.376 00:10:10.376 Latency(us) 00:10:10.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522367.52 510.12 0.00 0.00 489.90 185.65 860.40 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522351.33 510.11 0.00 0.00 489.82 151.73 710.45 00:10:10.376 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:10.376 Malloc0 : 2.00 522336.98 510.09 0.00 0.00 489.75 141.91 614.06 00:10:10.376 =================================================================================================================== 00:10:10.376 Total : 1567055.83 1530.33 0.00 0.00 489.82 141.91 860.40' 00:10:10.376 20:48:01 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:10.376 20:48:01 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:10:10.376 20:48:01 -- bdevperf/test_config.sh@35 -- # cleanup 00:10:10.376 20:48:01 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:10.376 20:48:01 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:10:10.376 20:48:01 -- bdevperf/common.sh@8 -- # local job_section=global 00:10:10.376 20:48:01 -- bdevperf/common.sh@9 -- # local rw=rw 00:10:10.376 20:48:01 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:10:10.376 20:48:01 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:10:10.376 20:48:01 -- bdevperf/common.sh@13 -- # cat 00:10:10.376 00:10:10.376 20:48:01 -- bdevperf/common.sh@18 -- # job='[global]' 00:10:10.376 20:48:01 -- bdevperf/common.sh@19 -- # echo 00:10:10.376 20:48:01 -- bdevperf/common.sh@20 -- # cat 00:10:10.377 20:48:01 -- bdevperf/test_config.sh@38 -- # create_job job0 00:10:10.377 20:48:01 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:10.377 20:48:01 -- bdevperf/common.sh@9 -- # local rw= 00:10:10.377 20:48:01 -- bdevperf/common.sh@10 -- # local filename= 00:10:10.377 20:48:01 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:10.377 20:48:01 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:10.377 00:10:10.377 20:48:01 -- bdevperf/common.sh@19 -- # echo 00:10:10.377 20:48:01 -- bdevperf/common.sh@20 -- # cat 00:10:10.377 20:48:01 -- bdevperf/test_config.sh@39 -- # create_job job1 00:10:10.377 20:48:01 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:10.377 20:48:01 -- bdevperf/common.sh@9 -- # local rw= 00:10:10.377 20:48:01 -- bdevperf/common.sh@10 -- # local filename= 00:10:10.377 20:48:01 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:10.377 20:48:01 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:10.377 00:10:10.377 20:48:01 -- bdevperf/common.sh@19 -- # echo 00:10:10.377 20:48:01 -- bdevperf/common.sh@20 -- # cat 00:10:10.377 20:48:01 -- bdevperf/test_config.sh@40 -- # create_job job2 00:10:10.377 20:48:01 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:10.377 20:48:01 -- bdevperf/common.sh@9 -- # local rw= 00:10:10.377 20:48:01 -- bdevperf/common.sh@10 -- # local filename= 00:10:10.377 20:48:01 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:10.377 20:48:01 -- bdevperf/common.sh@18 -- # job='[job2]' 00:10:10.377 00:10:10.377 20:48:01 -- bdevperf/common.sh@19 -- # echo 00:10:10.377 20:48:01 -- bdevperf/common.sh@20 -- # cat 00:10:10.377 20:48:01 -- bdevperf/test_config.sh@41 -- # create_job job3 00:10:10.377 20:48:01 -- bdevperf/common.sh@8 -- # local 
job_section=job3 00:10:10.377 20:48:01 -- bdevperf/common.sh@9 -- # local rw= 00:10:10.377 20:48:01 -- bdevperf/common.sh@10 -- # local filename= 00:10:10.377 20:48:01 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:10:10.377 20:48:01 -- bdevperf/common.sh@18 -- # job='[job3]' 00:10:10.377 00:10:10.377 20:48:01 -- bdevperf/common.sh@19 -- # echo 00:10:10.377 20:48:01 -- bdevperf/common.sh@20 -- # cat 00:10:10.377 20:48:01 -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:12.917 20:48:03 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-16 20:48:01.292022] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:12.917 [2024-04-16 20:48:01.292389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:12.917 Using job config with 4 jobs 00:10:12.917 EAL: TSC is not safe to use in SMP mode 00:10:12.917 EAL: TSC is not invariant 00:10:12.917 [2024-04-16 20:48:01.719865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.917 [2024-04-16 20:48:01.809833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.917 cpumask for '\''job0'\'' is too big 00:10:12.917 cpumask for '\''job1'\'' is too big 00:10:12.917 cpumask for '\''job2'\'' is too big 00:10:12.917 cpumask for '\''job3'\'' is too big 00:10:12.917 Running I/O for 2 seconds... 00:10:12.917 00:10:12.917 Latency(us) 00:10:12.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.917 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc0 : 2.00 192168.87 187.66 0.00 0.00 1331.92 415.92 2741.85 00:10:12.917 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc1 : 2.00 192161.19 187.66 0.00 0.00 1331.79 405.21 2713.29 00:10:12.917 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc0 : 2.00 192155.72 187.65 0.00 0.00 1331.43 390.93 2299.16 00:10:12.917 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc1 : 2.00 192149.00 187.65 0.00 0.00 1331.39 364.15 2299.16 00:10:12.917 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc0 : 2.00 192195.65 187.69 0.00 0.00 1330.67 387.36 1899.30 00:10:12.917 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc1 : 2.00 192185.09 187.68 0.00 0.00 1330.65 371.29 1885.02 00:10:12.917 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.917 Malloc0 : 2.00 192176.94 187.67 0.00 0.00 1330.28 380.22 1620.83 00:10:12.917 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192168.21 187.66 0.00 0.00 1330.18 344.52 1599.41 00:10:12.918 =================================================================================================================== 00:10:12.918 Total : 1537360.67 1501.33 0.00 0.00 1331.04 344.52 2741.85' 00:10:12.918 20:48:03 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-16 20:48:01.292022] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
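The create_job calls above append INI-style sections to test.conf before each bdevperf run. Reconstructed from the job_section, rw, and filename values echoed in the trace (the finished file is never dumped in the log, so this is an inferred sketch, not captured output):

    [global]
    filename=Malloc0:Malloc1
    rw=rw

    [job0]

    [job1]

    [job2]

    [job3]

The four empty per-job sections inherit everything from [global], which is why the run that follows reports four rw jobs, each driving both Malloc0 and Malloc1.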
00:10:12.918 [2024-04-16 20:48:01.292389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:12.918 Using job config with 4 jobs 00:10:12.918 EAL: TSC is not safe to use in SMP mode 00:10:12.918 EAL: TSC is not invariant 00:10:12.918 [2024-04-16 20:48:01.719865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.918 [2024-04-16 20:48:01.809833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.918 cpumask for '\''job0'\'' is too big 00:10:12.918 cpumask for '\''job1'\'' is too big 00:10:12.918 cpumask for '\''job2'\'' is too big 00:10:12.918 cpumask for '\''job3'\'' is too big 00:10:12.918 Running I/O for 2 seconds... 00:10:12.918 00:10:12.918 Latency(us) 00:10:12.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192168.87 187.66 0.00 0.00 1331.92 415.92 2741.85 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192161.19 187.66 0.00 0.00 1331.79 405.21 2713.29 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192155.72 187.65 0.00 0.00 1331.43 390.93 2299.16 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192149.00 187.65 0.00 0.00 1331.39 364.15 2299.16 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192195.65 187.69 0.00 0.00 1330.67 387.36 1899.30 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192185.09 187.68 0.00 0.00 1330.65 371.29 1885.02 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192176.94 187.67 0.00 0.00 1330.28 380.22 1620.83 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192168.21 187.66 0.00 0.00 1330.18 344.52 1599.41 00:10:12.918 =================================================================================================================== 00:10:12.918 Total : 1537360.67 1501.33 0.00 0.00 1331.04 344.52 2741.85' 00:10:12.918 20:48:03 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:12.918 20:48:03 -- bdevperf/common.sh@32 -- # echo '[2024-04-16 20:48:01.292022] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:12.918 [2024-04-16 20:48:01.292389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:12.918 Using job config with 4 jobs 00:10:12.918 EAL: TSC is not safe to use in SMP mode 00:10:12.918 EAL: TSC is not invariant 00:10:12.918 [2024-04-16 20:48:01.719865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.918 [2024-04-16 20:48:01.809833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.918 cpumask for '\''job0'\'' is too big 00:10:12.918 cpumask for '\''job1'\'' is too big 00:10:12.918 cpumask for '\''job2'\'' is too big 00:10:12.918 cpumask for '\''job3'\'' is too big 00:10:12.918 Running I/O for 2 seconds... 
00:10:12.918 00:10:12.918 Latency(us) 00:10:12.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192168.87 187.66 0.00 0.00 1331.92 415.92 2741.85 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192161.19 187.66 0.00 0.00 1331.79 405.21 2713.29 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192155.72 187.65 0.00 0.00 1331.43 390.93 2299.16 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192149.00 187.65 0.00 0.00 1331.39 364.15 2299.16 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192195.65 187.69 0.00 0.00 1330.67 387.36 1899.30 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192185.09 187.68 0.00 0.00 1330.65 371.29 1885.02 00:10:12.918 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc0 : 2.00 192176.94 187.67 0.00 0.00 1330.28 380.22 1620.83 00:10:12.918 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:12.918 Malloc1 : 2.00 192168.21 187.66 0.00 0.00 1330.18 344.52 1599.41 00:10:12.918 =================================================================================================================== 00:10:12.918 Total : 1537360.67 1501.33 0.00 0.00 1331.04 344.52 2741.85' 00:10:12.918 20:48:03 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:12.918 20:48:04 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:10:12.918 20:48:04 -- bdevperf/test_config.sh@44 -- # cleanup 00:10:12.918 20:48:04 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:12.918 20:48:04 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:12.918 00:10:12.918 real 0m11.123s 00:10:12.918 user 0m9.098s 00:10:12.918 sys 0m2.098s 00:10:12.918 20:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.918 20:48:04 -- common/autotest_common.sh@10 -- # set +x 00:10:12.918 ************************************ 00:10:12.918 END TEST bdevperf_config 00:10:12.918 ************************************ 00:10:13.178 20:48:04 -- spdk/autotest.sh@198 -- # uname -s 00:10:13.178 20:48:04 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:10:13.178 20:48:04 -- spdk/autotest.sh@204 -- # uname -s 00:10:13.178 20:48:04 -- spdk/autotest.sh@204 -- # [[ FreeBSD == Linux ]] 00:10:13.178 20:48:04 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:10:13.179 20:48:04 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:13.179 20:48:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:13.179 20:48:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.179 20:48:04 -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 ************************************ 00:10:13.179 START TEST blockdev_nvme 00:10:13.179 ************************************ 00:10:13.179 20:48:04 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:13.179 * Looking for test storage... 
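The [[ 4 == \4 ]] and [[ 3 == \3 ]] assertions in the bdevperf_config test above come from get_num_jobs in bdevperf/common.sh; its grep pipeline is visible verbatim in the trace, so the helper is essentially:

    get_num_jobs() {
        # Pull N out of bdevperf's "Using job config with N jobs" banner.
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }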
00:10:13.179 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:10:13.179 20:48:04 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:13.179 20:48:04 -- bdev/nbd_common.sh@6 -- # set -e 00:10:13.179 20:48:04 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:13.179 20:48:04 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:13.179 20:48:04 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:13.179 20:48:04 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:13.179 20:48:04 -- bdev/blockdev.sh@18 -- # : 00:10:13.179 20:48:04 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:13.179 20:48:04 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:13.179 20:48:04 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:13.179 20:48:04 -- bdev/blockdev.sh@672 -- # uname -s 00:10:13.179 20:48:04 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:10:13.179 20:48:04 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:10:13.179 20:48:04 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:10:13.179 20:48:04 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:13.179 20:48:04 -- bdev/blockdev.sh@682 -- # dek= 00:10:13.179 20:48:04 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:13.179 20:48:04 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:13.179 20:48:04 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:13.179 20:48:04 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:10:13.179 20:48:04 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:10:13.179 20:48:04 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:13.179 20:48:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=53993 00:10:13.179 20:48:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:13.179 20:48:04 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:13.179 20:48:04 -- bdev/blockdev.sh@47 -- # waitforlisten 53993 00:10:13.179 20:48:04 -- common/autotest_common.sh@819 -- # '[' -z 53993 ']' 00:10:13.179 20:48:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.179 20:48:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:13.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.179 20:48:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.179 20:48:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:13.179 20:48:04 -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 [2024-04-16 20:48:04.273525] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 
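start_spdk_tgt backgrounds the target binary and waitforlisten blocks until its RPC socket answers. A minimal sketch of that pattern, with the binary and socket paths taken from the log and the polling call assumed as a stand-in for the real helper in autotest_common.sh:

    build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the target responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done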
00:10:13.179 [2024-04-16 20:48:04.273836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:13.747 EAL: TSC is not safe to use in SMP mode 00:10:13.747 EAL: TSC is not invariant 00:10:13.747 [2024-04-16 20:48:04.738564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.747 [2024-04-16 20:48:04.829822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:13.747 [2024-04-16 20:48:04.829904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.317 20:48:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:14.317 20:48:05 -- common/autotest_common.sh@852 -- # return 0 00:10:14.317 20:48:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:14.317 20:48:05 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:10:14.317 20:48:05 -- bdev/blockdev.sh@79 -- # local json 00:10:14.317 20:48:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:14.317 20:48:05 -- bdev/blockdev.sh@80 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:14.317 20:48:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 [2024-04-16 20:48:05.302072] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:14.317 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.317 20:48:05 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.317 20:48:05 -- bdev/blockdev.sh@738 -- # cat 00:10:14.317 20:48:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.317 20:48:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.317 20:48:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.317 20:48:05 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:14.317 20:48:05 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:14.317 20:48:05 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:14.317 20:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.317 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.577 20:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.577 20:48:05 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:14.577 20:48:05 -- 
bdev/blockdev.sh@747 -- # jq -r .name 00:10:14.577 20:48:05 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9fbc34e9-fc32-11ee-80f8-ef3e42bb1492"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9fbc34e9-fc32-11ee-80f8-ef3e42bb1492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:14.577 20:48:05 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:14.577 20:48:05 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:10:14.577 20:48:05 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:14.577 20:48:05 -- bdev/blockdev.sh@752 -- # killprocess 53993 00:10:14.577 20:48:05 -- common/autotest_common.sh@926 -- # '[' -z 53993 ']' 00:10:14.577 20:48:05 -- common/autotest_common.sh@930 -- # kill -0 53993 00:10:14.577 20:48:05 -- common/autotest_common.sh@931 -- # uname 00:10:14.577 20:48:05 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:14.577 20:48:05 -- common/autotest_common.sh@934 -- # ps -c -o command 53993 00:10:14.577 20:48:05 -- common/autotest_common.sh@934 -- # tail -1 00:10:14.577 20:48:05 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:10:14.577 20:48:05 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:10:14.577 killing process with pid 53993 00:10:14.577 20:48:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53993' 00:10:14.577 20:48:05 -- common/autotest_common.sh@945 -- # kill 53993 00:10:14.577 20:48:05 -- common/autotest_common.sh@950 -- # wait 53993 00:10:14.577 20:48:05 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:14.577 20:48:05 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:14.577 20:48:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:14.577 20:48:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.577 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:14.577 ************************************ 00:10:14.577 START TEST bdev_hello_world 00:10:14.577 ************************************ 00:10:14.577 20:48:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:14.837 [2024-04-16 20:48:05.707524] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 
23.11.0 initialization... 00:10:14.837 [2024-04-16 20:48:05.707893] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:15.096 EAL: TSC is not safe to use in SMP mode 00:10:15.096 EAL: TSC is not invariant 00:10:15.096 [2024-04-16 20:48:06.131073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.096 [2024-04-16 20:48:06.219687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.356 [2024-04-16 20:48:06.275260] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:15.356 [2024-04-16 20:48:06.345541] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:15.356 [2024-04-16 20:48:06.345571] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:15.356 [2024-04-16 20:48:06.345596] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:15.356 [2024-04-16 20:48:06.346098] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:15.356 [2024-04-16 20:48:06.346403] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:15.356 [2024-04-16 20:48:06.346422] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:15.356 [2024-04-16 20:48:06.346533] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:15.356 00:10:15.356 [2024-04-16 20:48:06.346561] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:15.616 00:10:15.616 real 0m0.795s 00:10:15.616 user 0m0.336s 00:10:15.616 sys 0m0.458s 00:10:15.616 20:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.616 20:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:15.616 ************************************ 00:10:15.616 END TEST bdev_hello_world 00:10:15.616 ************************************ 00:10:15.616 20:48:06 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:15.616 20:48:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:15.616 20:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.616 20:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:15.616 ************************************ 00:10:15.616 START TEST bdev_bounds 00:10:15.616 ************************************ 00:10:15.616 20:48:06 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:15.616 20:48:06 -- bdev/blockdev.sh@288 -- # bdevio_pid=54052 00:10:15.616 20:48:06 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:15.616 20:48:06 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:15.616 Process bdevio pid: 54052 00:10:15.616 20:48:06 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 54052' 00:10:15.616 20:48:06 -- bdev/blockdev.sh@291 -- # waitforlisten 54052 00:10:15.616 20:48:06 -- common/autotest_common.sh@819 -- # '[' -z 54052 ']' 00:10:15.616 20:48:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.616 20:48:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.616 20:48:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
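bdev_bounds drives the bdevio binary launched just above: bdevio starts in wait mode (-w) with a 2048 MB memory reservation (-s), and the Python helper then kicks off the whole CUnit suite over RPC. Condensed from the trace, with paths shortened to the repo root:

    test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    # Once the process is listening, fire every registered bdevio test:
    test/bdev/bdevio/tests.py perform_tests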
00:10:15.616 20:48:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:15.616 20:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:15.616 [2024-04-16 20:48:06.557541] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:15.616 [2024-04-16 20:48:06.557891] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:15.878 EAL: TSC is not safe to use in SMP mode 00:10:15.878 EAL: TSC is not invariant 00:10:15.878 [2024-04-16 20:48:06.990941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.139 [2024-04-16 20:48:07.082704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.139 [2024-04-16 20:48:07.082556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.139 [2024-04-16 20:48:07.082707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.139 [2024-04-16 20:48:07.138262] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:16.397 20:48:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.397 20:48:07 -- common/autotest_common.sh@852 -- # return 0 00:10:16.397 20:48:07 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:16.656 I/O targets: 00:10:16.656 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:16.656 00:10:16.656 00:10:16.656 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.656 http://cunit.sourceforge.net/ 00:10:16.656 00:10:16.656 00:10:16.656 Suite: bdevio tests on: Nvme0n1 00:10:16.656 Test: blockdev write read block ...passed 00:10:16.656 Test: blockdev write zeroes read block ...passed 00:10:16.656 Test: blockdev write zeroes read no split ...passed 00:10:16.656 Test: blockdev write zeroes read split ...passed 00:10:16.656 Test: blockdev write zeroes read split partial ...passed 00:10:16.656 Test: blockdev reset ...[2024-04-16 20:48:07.562739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:16.656 passed 00:10:16.656 Test: blockdev write read 8 blocks ...[2024-04-16 20:48:07.563692] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:16.656 passed 00:10:16.656 Test: blockdev write read size > 128k ...passed 00:10:16.656 Test: blockdev write read invalid size ...passed 00:10:16.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.656 Test: blockdev write read max offset ...passed 00:10:16.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.656 Test: blockdev writev readv 8 blocks ...passed 00:10:16.656 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.656 Test: blockdev writev readv block ...passed 00:10:16.656 Test: blockdev writev readv size > 128k ...passed 00:10:16.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.656 Test: blockdev comparev and writev ...[2024-04-16 20:48:07.567185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297947000 len:0x1000 00:10:16.656 [2024-04-16 20:48:07.567221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:16.656 passed 00:10:16.656 Test: blockdev nvme passthru rw ...passed 00:10:16.656 Test: blockdev nvme passthru vendor specific ...[2024-04-16 20:48:07.567577] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:16.656 [2024-04-16 20:48:07.567590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:16.656 passed 00:10:16.656 Test: blockdev nvme admin passthru ...passed 00:10:16.656 Test: blockdev copy ...passed 00:10:16.656 00:10:16.656 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.656 suites 1 1 n/a 0 0 00:10:16.656 tests 23 23 23 0 0 00:10:16.656 asserts 152 152 152 0 n/a 00:10:16.656 00:10:16.656 Elapsed time = 0.047 seconds 00:10:16.656 0 00:10:16.656 20:48:07 -- bdev/blockdev.sh@293 -- # killprocess 54052 00:10:16.656 20:48:07 -- common/autotest_common.sh@926 -- # '[' -z 54052 ']' 00:10:16.656 20:48:07 -- common/autotest_common.sh@930 -- # kill -0 54052 00:10:16.656 20:48:07 -- common/autotest_common.sh@931 -- # uname 00:10:16.656 20:48:07 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:16.656 20:48:07 -- common/autotest_common.sh@934 -- # ps -c -o command 54052 00:10:16.656 20:48:07 -- common/autotest_common.sh@934 -- # tail -1 00:10:16.656 20:48:07 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:10:16.656 20:48:07 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:10:16.656 killing process with pid 54052 00:10:16.656 20:48:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54052' 00:10:16.656 20:48:07 -- common/autotest_common.sh@945 -- # kill 54052 00:10:16.656 20:48:07 -- common/autotest_common.sh@950 -- # wait 54052 00:10:16.656 20:48:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:16.656 00:10:16.656 real 0m1.198s 00:10:16.656 user 0m2.297s 00:10:16.656 sys 0m0.549s 00:10:16.656 20:48:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.656 20:48:07 -- common/autotest_common.sh@10 -- # set +x 00:10:16.656 ************************************ 00:10:16.656 END TEST bdev_bounds 00:10:16.656 ************************************ 00:10:16.916 20:48:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
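Every START TEST/END TEST banner and real/user/sys triple in this log comes from the run_test wrapper in autotest_common.sh; its exact body is not shown here, but the traced behavior matches a shape like:

    run_test() {
        # Sketch inferred from the banners and timings above, not copied from source.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return $rc
    }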
00:10:16.916 20:48:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:16.916 20:48:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.916 20:48:07 -- common/autotest_common.sh@10 -- # set +x 00:10:16.916 ************************************ 00:10:16.916 START TEST bdev_nbd 00:10:16.916 ************************************ 00:10:16.916 20:48:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:10:16.916 20:48:07 -- bdev/blockdev.sh@298 -- # uname -s 00:10:16.916 20:48:07 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:10:16.916 20:48:07 -- bdev/blockdev.sh@298 -- # return 0 00:10:16.916 00:10:16.916 real 0m0.007s 00:10:16.916 user 0m0.003s 00:10:16.916 sys 0m0.006s 00:10:16.916 20:48:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.916 20:48:07 -- common/autotest_common.sh@10 -- # set +x 00:10:16.916 ************************************ 00:10:16.916 END TEST bdev_nbd 00:10:16.916 ************************************ 00:10:16.916 20:48:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:16.916 20:48:07 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:16.916 skipping fio tests on NVMe due to multi-ns failures. 00:10:16.916 20:48:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:16.916 20:48:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:16.916 20:48:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:16.916 20:48:07 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:16.916 20:48:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.916 20:48:07 -- common/autotest_common.sh@10 -- # set +x 00:10:16.916 ************************************ 00:10:16.916 START TEST bdev_verify 00:10:16.916 ************************************ 00:10:16.916 20:48:07 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:16.916 [2024-04-16 20:48:07.875430] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:10:16.916 [2024-04-16 20:48:07.875718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:17.175 EAL: TSC is not safe to use in SMP mode 00:10:17.175 EAL: TSC is not invariant 00:10:17.441 [2024-04-16 20:48:08.310490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.441 [2024-04-16 20:48:08.402547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.441 [2024-04-16 20:48:08.402550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.441 [2024-04-16 20:48:08.456875] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:17.441 Running I/O for 5 seconds... 
00:10:22.727 00:10:22.727 Latency(us) 00:10:22.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.728 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.728 Verification LBA range: start 0x0 length 0xa0000 00:10:22.728 Nvme0n1 : 5.00 38125.55 148.93 0.00 0.00 3350.52 141.91 11310.14 00:10:22.728 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.728 Verification LBA range: start 0xa0000 length 0xa0000 00:10:22.728 Nvme0n1 : 5.00 37443.25 146.26 0.00 0.00 3411.40 167.80 10510.43 00:10:22.728 =================================================================================================================== 00:10:22.728 Total : 75568.80 295.19 0.00 0.00 3380.68 141.91 11310.14 00:11:01.474 00:11:01.474 real 0m42.244s 00:11:01.474 user 1m23.368s 00:11:01.474 sys 0m0.450s 00:11:01.474 20:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.474 20:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:01.474 ************************************ 00:11:01.474 END TEST bdev_verify 00:11:01.474 ************************************ 00:11:01.474 20:48:50 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:01.474 20:48:50 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:01.474 20:48:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.474 20:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:01.474 ************************************ 00:11:01.474 START TEST bdev_verify_big_io 00:11:01.474 ************************************ 00:11:01.474 20:48:50 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:01.474 [2024-04-16 20:48:50.172773] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:01.474 [2024-04-16 20:48:50.173114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:01.474 EAL: TSC is not safe to use in SMP mode 00:11:01.474 EAL: TSC is not invariant 00:11:01.474 [2024-04-16 20:48:50.598979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.474 [2024-04-16 20:48:50.681137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.474 [2024-04-16 20:48:50.681139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.474 [2024-04-16 20:48:50.735366] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:01.474 Running I/O for 5 seconds... 
00:11:04.753 00:11:04.753 Latency(us) 00:11:04.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:04.753 Verification LBA range: start 0x0 length 0xa000 00:11:04.753 Nvme0n1 : 5.01 18086.60 1130.41 0.00 0.00 7035.12 129.42 25590.61 00:11:04.753 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:04.753 Verification LBA range: start 0xa000 length 0xa000 00:11:04.753 Nvme0n1 : 5.01 17004.84 1062.80 0.00 0.00 7480.07 132.99 20792.37 00:11:04.753 =================================================================================================================== 00:11:04.753 Total : 35091.44 2193.22 0.00 0.00 7250.71 129.42 25590.61 00:11:10.039 ************************************ 00:11:10.039 END TEST bdev_verify_big_io 00:11:10.039 ************************************ 00:11:10.039 00:11:10.039 real 0m9.977s 00:11:10.039 user 0m18.860s 00:11:10.039 sys 0m0.465s 00:11:10.039 20:49:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.039 20:49:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 20:49:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.039 20:49:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:10.039 20:49:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.039 20:49:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 ************************************ 00:11:10.039 START TEST bdev_write_zeroes 00:11:10.039 ************************************ 00:11:10.039 20:49:00 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.039 [2024-04-16 20:49:00.193761] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:10.039 [2024-04-16 20:49:00.194106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:10.039 EAL: TSC is not safe to use in SMP mode 00:11:10.039 EAL: TSC is not invariant 00:11:10.039 [2024-04-16 20:49:00.625223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.039 [2024-04-16 20:49:00.717498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.039 [2024-04-16 20:49:00.772980] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:10.039 Running I/O for 1 seconds... 
00:11:10.972 00:11:10.972 Latency(us) 00:11:10.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.973 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:10.973 Nvme0n1 : 1.00 68234.32 266.54 0.00 0.00 1873.88 799.71 27532.76 00:11:10.973 =================================================================================================================== 00:11:10.973 Total : 68234.32 266.54 0.00 0.00 1873.88 799.71 27532.76 00:11:10.973 00:11:10.973 real 0m1.807s 00:11:10.973 user 0m1.322s 00:11:10.973 sys 0m0.480s 00:11:10.973 20:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.973 20:49:01 -- common/autotest_common.sh@10 -- # set +x 00:11:10.973 ************************************ 00:11:10.973 END TEST bdev_write_zeroes 00:11:10.973 ************************************ 00:11:10.973 20:49:02 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.973 20:49:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:10.973 20:49:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.973 20:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:10.973 ************************************ 00:11:10.973 START TEST bdev_json_nonenclosed 00:11:10.973 ************************************ 00:11:10.973 20:49:02 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.973 [2024-04-16 20:49:02.054100] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:10.973 [2024-04-16 20:49:02.054418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:11.543 EAL: TSC is not safe to use in SMP mode 00:11:11.543 EAL: TSC is not invariant 00:11:11.543 [2024-04-16 20:49:02.483093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.543 [2024-04-16 20:49:02.573986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.543 [2024-04-16 20:49:02.574099] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:11:11.543 [2024-04-16 20:49:02.574108] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:11.543 00:11:11.543 real 0m0.621s 00:11:11.543 user 0m0.154s 00:11:11.543 sys 0m0.465s 00:11:11.543 20:49:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.543 20:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:11.543 ************************************ 00:11:11.543 END TEST bdev_json_nonenclosed 00:11:11.543 ************************************ 00:11:11.802 20:49:02 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:11.802 20:49:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:11.802 20:49:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.802 20:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:11.802 ************************************ 00:11:11.802 START TEST bdev_json_nonarray 00:11:11.802 ************************************ 00:11:11.802 20:49:02 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:11.802 [2024-04-16 20:49:02.727612] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:11.802 [2024-04-16 20:49:02.727987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:12.061 EAL: TSC is not safe to use in SMP mode 00:11:12.061 EAL: TSC is not invariant 00:11:12.061 [2024-04-16 20:49:03.148522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.320 [2024-04-16 20:49:03.225953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.320 [2024-04-16 20:49:03.226047] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:12.320 [2024-04-16 20:49:03.226056] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.320 00:11:12.320 real 0m0.601s 00:11:12.320 user 0m0.125s 00:11:12.320 sys 0m0.473s 00:11:12.320 20:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.320 20:49:03 -- common/autotest_common.sh@10 -- # set +x 00:11:12.320 ************************************ 00:11:12.320 END TEST bdev_json_nonarray 00:11:12.320 ************************************ 00:11:12.320 20:49:03 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:12.320 20:49:03 -- bdev/blockdev.sh@809 -- # cleanup 00:11:12.320 20:49:03 -- bdev/blockdev.sh@21 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:12.320 20:49:03 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:12.320 20:49:03 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:11:12.320 20:49:03 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:11:12.320 00:11:12.320 real 0m59.306s 00:11:12.320 user 1m48.119s 00:11:12.320 sys 0m4.369s 00:11:12.320 20:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.320 20:49:03 -- common/autotest_common.sh@10 -- # set +x 00:11:12.320 ************************************ 00:11:12.320 END TEST blockdev_nvme 00:11:12.320 ************************************ 00:11:12.320 20:49:03 -- spdk/autotest.sh@219 -- # uname -s 00:11:12.320 20:49:03 -- spdk/autotest.sh@219 -- # [[ FreeBSD == Linux ]] 00:11:12.320 20:49:03 -- spdk/autotest.sh@222 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:12.321 20:49:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:12.321 20:49:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.321 20:49:03 -- common/autotest_common.sh@10 -- # set +x 00:11:12.321 ************************************ 00:11:12.321 START TEST nvme 00:11:12.321 ************************************ 00:11:12.321 20:49:03 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:12.579 * Looking for test storage... 
00:11:12.579 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:12.579 20:49:03 -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.838 hw.nic_uio.bdfs="0:6:0" 00:11:12.838 20:49:03 -- nvme/nvme.sh@79 -- # uname 00:11:12.838 20:49:03 -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:11:12.838 20:49:03 -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:12.838 20:49:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:12.838 20:49:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.838 20:49:03 -- common/autotest_common.sh@10 -- # set +x 00:11:12.838 ************************************ 00:11:12.838 START TEST nvme_reset 00:11:12.838 ************************************ 00:11:12.838 20:49:03 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:13.407 EAL: TSC is not safe to use in SMP mode 00:11:13.407 EAL: TSC is not invariant 00:11:13.407 [2024-04-16 20:49:04.242838] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:13.407 Initializing NVMe Controllers 00:11:13.407 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:13.407 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:13.407 00:11:13.407 real 0m0.488s 00:11:13.407 user 0m0.018s 00:11:13.407 sys 0m0.469s 00:11:13.407 20:49:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.407 20:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 ************************************ 00:11:13.407 END TEST nvme_reset 00:11:13.407 ************************************ 00:11:13.407 20:49:04 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:13.407 20:49:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:13.407 20:49:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.407 20:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 ************************************ 00:11:13.407 START TEST nvme_identify 00:11:13.407 ************************************ 00:11:13.407 20:49:04 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:13.407 20:49:04 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:13.407 20:49:04 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:13.407 20:49:04 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:13.407 20:49:04 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:13.407 20:49:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:13.407 20:49:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:13.407 20:49:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:13.407 20:49:04 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:13.407 20:49:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:13.407 20:49:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:13.407 20:49:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:13.407 20:49:04 -- nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:13.976 EAL: TSC is not safe to use in SMP mode 00:11:13.976 EAL: TSC is not invariant 00:11:13.976 [2024-04-16 20:49:04.861183] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:13.976 
===================================================== 00:11:13.976 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:13.976 ===================================================== 00:11:13.976 Controller Capabilities/Features 00:11:13.976 ================================ 00:11:13.976 Vendor ID: 1b36 00:11:13.976 Subsystem Vendor ID: 1af4 00:11:13.976 Serial Number: 12340 00:11:13.976 Model Number: QEMU NVMe Ctrl 00:11:13.976 Firmware Version: 8.0.0 00:11:13.976 Recommended Arb Burst: 6 00:11:13.976 IEEE OUI Identifier: 00 54 52 00:11:13.976 Multi-path I/O 00:11:13.976 May have multiple subsystem ports: No 00:11:13.976 May have multiple controllers: No 00:11:13.976 Associated with SR-IOV VF: No 00:11:13.976 Max Data Transfer Size: 524288 00:11:13.976 Max Number of Namespaces: 256 00:11:13.976 Max Number of I/O Queues: 64 00:11:13.976 NVMe Specification Version (VS): 1.4 00:11:13.976 NVMe Specification Version (Identify): 1.4 00:11:13.976 Maximum Queue Entries: 2048 00:11:13.976 Contiguous Queues Required: Yes 00:11:13.976 Arbitration Mechanisms Supported 00:11:13.976 Weighted Round Robin: Not Supported 00:11:13.976 Vendor Specific: Not Supported 00:11:13.976 Reset Timeout: 7500 ms 00:11:13.976 Doorbell Stride: 4 bytes 00:11:13.976 NVM Subsystem Reset: Not Supported 00:11:13.976 Command Sets Supported 00:11:13.976 NVM Command Set: Supported 00:11:13.976 Boot Partition: Not Supported 00:11:13.976 Memory Page Size Minimum: 4096 bytes 00:11:13.976 Memory Page Size Maximum: 65536 bytes 00:11:13.976 Persistent Memory Region: Not Supported 00:11:13.976 Optional Asynchronous Events Supported 00:11:13.976 Namespace Attribute Notices: Supported 00:11:13.976 Firmware Activation Notices: Not Supported 00:11:13.976 ANA Change Notices: Not Supported 00:11:13.976 PLE Aggregate Log Change Notices: Not Supported 00:11:13.976 LBA Status Info Alert Notices: Not Supported 00:11:13.976 EGE Aggregate Log Change Notices: Not Supported 00:11:13.976 Normal NVM Subsystem Shutdown event: Not Supported 00:11:13.976 Zone Descriptor Change Notices: Not Supported 00:11:13.976 Discovery Log Change Notices: Not Supported 00:11:13.976 Controller Attributes 00:11:13.976 128-bit Host Identifier: Not Supported 00:11:13.976 Non-Operational Permissive Mode: Not Supported 00:11:13.976 NVM Sets: Not Supported 00:11:13.976 Read Recovery Levels: Not Supported 00:11:13.976 Endurance Groups: Not Supported 00:11:13.976 Predictable Latency Mode: Not Supported 00:11:13.976 Traffic Based Keep ALive: Not Supported 00:11:13.976 Namespace Granularity: Not Supported 00:11:13.976 SQ Associations: Not Supported 00:11:13.976 UUID List: Not Supported 00:11:13.976 Multi-Domain Subsystem: Not Supported 00:11:13.976 Fixed Capacity Management: Not Supported 00:11:13.976 Variable Capacity Management: Not Supported 00:11:13.976 Delete Endurance Group: Not Supported 00:11:13.976 Delete NVM Set: Not Supported 00:11:13.976 Extended LBA Formats Supported: Supported 00:11:13.976 Flexible Data Placement Supported: Not Supported 00:11:13.976 00:11:13.976 Controller Memory Buffer Support 00:11:13.976 ================================ 00:11:13.976 Supported: No 00:11:13.976 00:11:13.976 Persistent Memory Region Support 00:11:13.976 ================================ 00:11:13.976 Supported: No 00:11:13.976 00:11:13.976 Admin Command Set Attributes 00:11:13.976 ============================ 00:11:13.976 Security Send/Receive: Not Supported 00:11:13.976 Format NVM: Supported 00:11:13.976 Firmware Activate/Download: Not Supported 00:11:13.976 Namespace Management: 
Supported 00:11:13.976 Device Self-Test: Not Supported 00:11:13.976 Directives: Supported 00:11:13.976 NVMe-MI: Not Supported 00:11:13.976 Virtualization Management: Not Supported 00:11:13.976 Doorbell Buffer Config: Supported 00:11:13.976 Get LBA Status Capability: Not Supported 00:11:13.976 Command & Feature Lockdown Capability: Not Supported 00:11:13.976 Abort Command Limit: 4 00:11:13.976 Async Event Request Limit: 4 00:11:13.976 Number of Firmware Slots: N/A 00:11:13.976 Firmware Slot 1 Read-Only: N/A 00:11:13.976 Firmware Activation Without Reset: N/A 00:11:13.976 Multiple Update Detection Support: N/A 00:11:13.976 Firmware Update Granularity: No Information Provided 00:11:13.976 Per-Namespace SMART Log: Yes 00:11:13.976 Asymmetric Namespace Access Log Page: Not Supported 00:11:13.976 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:13.976 Command Effects Log Page: Supported 00:11:13.976 Get Log Page Extended Data: Supported 00:11:13.976 Telemetry Log Pages: Not Supported 00:11:13.976 Persistent Event Log Pages: Not Supported 00:11:13.976 Supported Log Pages Log Page: May Support 00:11:13.976 Commands Supported & Effects Log Page: Not Supported 00:11:13.976 Feature Identifiers & Effects Log Page:May Support 00:11:13.976 NVMe-MI Commands & Effects Log Page: May Support 00:11:13.976 Data Area 4 for Telemetry Log: Not Supported 00:11:13.976 Error Log Page Entries Supported: 1 00:11:13.976 Keep Alive: Not Supported 00:11:13.976 00:11:13.976 NVM Command Set Attributes 00:11:13.976 ========================== 00:11:13.976 Submission Queue Entry Size 00:11:13.976 Max: 64 00:11:13.976 Min: 64 00:11:13.976 Completion Queue Entry Size 00:11:13.976 Max: 16 00:11:13.976 Min: 16 00:11:13.976 Number of Namespaces: 256 00:11:13.976 Compare Command: Supported 00:11:13.976 Write Uncorrectable Command: Not Supported 00:11:13.976 Dataset Management Command: Supported 00:11:13.976 Write Zeroes Command: Supported 00:11:13.976 Set Features Save Field: Supported 00:11:13.976 Reservations: Not Supported 00:11:13.976 Timestamp: Supported 00:11:13.976 Copy: Supported 00:11:13.976 Volatile Write Cache: Present 00:11:13.976 Atomic Write Unit (Normal): 1 00:11:13.976 Atomic Write Unit (PFail): 1 00:11:13.976 Atomic Compare & Write Unit: 1 00:11:13.976 Fused Compare & Write: Not Supported 00:11:13.976 Scatter-Gather List 00:11:13.976 SGL Command Set: Supported 00:11:13.976 SGL Keyed: Not Supported 00:11:13.976 SGL Bit Bucket Descriptor: Not Supported 00:11:13.976 SGL Metadata Pointer: Not Supported 00:11:13.976 Oversized SGL: Not Supported 00:11:13.976 SGL Metadata Address: Not Supported 00:11:13.976 SGL Offset: Not Supported 00:11:13.976 Transport SGL Data Block: Not Supported 00:11:13.976 Replay Protected Memory Block: Not Supported 00:11:13.976 00:11:13.976 Firmware Slot Information 00:11:13.976 ========================= 00:11:13.976 Active slot: 1 00:11:13.976 Slot 1 Firmware Revision: 1.0 00:11:13.976 00:11:13.976 00:11:13.976 Commands Supported and Effects 00:11:13.976 ============================== 00:11:13.976 Admin Commands 00:11:13.976 -------------- 00:11:13.976 Delete I/O Submission Queue (00h): Supported 00:11:13.977 Create I/O Submission Queue (01h): Supported 00:11:13.977 Get Log Page (02h): Supported 00:11:13.977 Delete I/O Completion Queue (04h): Supported 00:11:13.977 Create I/O Completion Queue (05h): Supported 00:11:13.977 Identify (06h): Supported 00:11:13.977 Abort (08h): Supported 00:11:13.977 Set Features (09h): Supported 00:11:13.977 Get Features (0Ah): Supported 00:11:13.977 Asynchronous 
Event Request (0Ch): Supported 00:11:13.977 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:13.977 Directive Send (19h): Supported 00:11:13.977 Directive Receive (1Ah): Supported 00:11:13.977 Virtualization Management (1Ch): Supported 00:11:13.977 Doorbell Buffer Config (7Ch): Supported 00:11:13.977 Format NVM (80h): Supported LBA-Change 00:11:13.977 I/O Commands 00:11:13.977 ------------ 00:11:13.977 Flush (00h): Supported LBA-Change 00:11:13.977 Write (01h): Supported LBA-Change 00:11:13.977 Read (02h): Supported 00:11:13.977 Compare (05h): Supported 00:11:13.977 Write Zeroes (08h): Supported LBA-Change 00:11:13.977 Dataset Management (09h): Supported LBA-Change 00:11:13.977 Unknown (0Ch): Supported 00:11:13.977 Unknown (12h): Supported 00:11:13.977 Copy (19h): Supported LBA-Change 00:11:13.977 Unknown (1Dh): Supported LBA-Change 00:11:13.977 00:11:13.977 Error Log 00:11:13.977 ========= 00:11:13.977 00:11:13.977 Arbitration 00:11:13.977 =========== 00:11:13.977 Arbitration Burst: no limit 00:11:13.977 00:11:13.977 Power Management 00:11:13.977 ================ 00:11:13.977 Number of Power States: 1 00:11:13.977 Current Power State: Power State #0 00:11:13.977 Power State #0: 00:11:13.977 Max Power: 25.00 W 00:11:13.977 Non-Operational State: Operational 00:11:13.977 Entry Latency: 16 microseconds 00:11:13.977 Exit Latency: 4 microseconds 00:11:13.977 Relative Read Throughput: 0 00:11:13.977 Relative Read Latency: 0 00:11:13.977 Relative Write Throughput: 0 00:11:13.977 Relative Write Latency: 0 00:11:13.977 Idle Power: Not Reported 00:11:13.977 Active Power: Not Reported 00:11:13.977 Non-Operational Permissive Mode: Not Supported 00:11:13.977 00:11:13.977 Health Information 00:11:13.977 ================== 00:11:13.977 Critical Warnings: 00:11:13.977 Available Spare Space: OK 00:11:13.977 Temperature: OK 00:11:13.977 Device Reliability: OK 00:11:13.977 Read Only: No 00:11:13.977 Volatile Memory Backup: OK 00:11:13.977 Current Temperature: 323 Kelvin (50 Celsius) 00:11:13.977 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:13.977 Available Spare: 0% 00:11:13.977 Available Spare Threshold: 0% 00:11:13.977 Life Percentage Used: 0% 00:11:13.977 Data Units Read: 25528 00:11:13.977 Data Units Written: 12854 00:11:13.977 Host Read Commands: 553939 00:11:13.977 Host Write Commands: 278028 00:11:13.977 Controller Busy Time: 0 minutes 00:11:13.977 Power Cycles: 0 00:11:13.977 Power On Hours: 0 hours 00:11:13.977 Unsafe Shutdowns: 0 00:11:13.977 Unrecoverable Media Errors: 0 00:11:13.977 Lifetime Error Log Entries: 0 00:11:13.977 Warning Temperature Time: 0 minutes 00:11:13.977 Critical Temperature Time: 0 minutes 00:11:13.977 00:11:13.977 Number of Queues 00:11:13.977 ================ 00:11:13.977 Number of I/O Submission Queues: 64 00:11:13.977 Number of I/O Completion Queues: 64 00:11:13.977 00:11:13.977 ZNS Specific Controller Data 00:11:13.977 ============================ 00:11:13.977 Zone Append Size Limit: 0 00:11:13.977 00:11:13.977 00:11:13.977 Active Namespaces 00:11:13.977 ================= 00:11:13.977 Namespace ID:1 00:11:13.977 Error Recovery Timeout: Unlimited 00:11:13.977 Command Set Identifier: NVM (00h) 00:11:13.977 Deallocate: Supported 00:11:13.977 Deallocated/Unwritten Error: Supported 00:11:13.977 Deallocated Read Value: All 0x00 00:11:13.977 Deallocate in Write Zeroes: Not Supported 00:11:13.977 Deallocated Guard Field: 0xFFFF 00:11:13.977 Flush: Supported 00:11:13.977 Reservation: Not Supported 00:11:13.977 Namespace Sharing Capabilities: Private 
00:11:13.977 Size (in LBAs): 1310720 (5GiB) 00:11:13.977 Capacity (in LBAs): 1310720 (5GiB) 00:11:13.977 Utilization (in LBAs): 1310720 (5GiB) 00:11:13.977 Thin Provisioning: Not Supported 00:11:13.977 Per-NS Atomic Units: No 00:11:13.977 Maximum Single Source Range Length: 128 00:11:13.977 Maximum Copy Length: 128 00:11:13.977 Maximum Source Range Count: 128 00:11:13.977 NGUID/EUI64 Never Reused: No 00:11:13.977 Namespace Write Protected: No 00:11:13.977 Number of LBA Formats: 8 00:11:13.977 Current LBA Format: LBA Format #04 00:11:13.977 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:13.977 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:13.977 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:13.977 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:13.977 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:13.977 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:13.977 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:13.977 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:13.977 00:11:13.977 20:49:04 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:13.977 20:49:04 -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:14.238 EAL: TSC is not safe to use in SMP mode 00:11:14.238 EAL: TSC is not invariant 00:11:14.238 [2024-04-16 20:49:05.352820] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:14.238 ===================================================== 00:11:14.238 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:14.238 ===================================================== 00:11:14.238 Controller Capabilities/Features 00:11:14.238 ================================ 00:11:14.238 Vendor ID: 1b36 00:11:14.238 Subsystem Vendor ID: 1af4 00:11:14.238 Serial Number: 12340 00:11:14.238 Model Number: QEMU NVMe Ctrl 00:11:14.238 Firmware Version: 8.0.0 00:11:14.238 Recommended Arb Burst: 6 00:11:14.238 IEEE OUI Identifier: 00 54 52 00:11:14.238 Multi-path I/O 00:11:14.238 May have multiple subsystem ports: No 00:11:14.238 May have multiple controllers: No 00:11:14.238 Associated with SR-IOV VF: No 00:11:14.238 Max Data Transfer Size: 524288 00:11:14.238 Max Number of Namespaces: 256 00:11:14.238 Max Number of I/O Queues: 64 00:11:14.238 NVMe Specification Version (VS): 1.4 00:11:14.238 NVMe Specification Version (Identify): 1.4 00:11:14.238 Maximum Queue Entries: 2048 00:11:14.238 Contiguous Queues Required: Yes 00:11:14.238 Arbitration Mechanisms Supported 00:11:14.238 Weighted Round Robin: Not Supported 00:11:14.238 Vendor Specific: Not Supported 00:11:14.238 Reset Timeout: 7500 ms 00:11:14.238 Doorbell Stride: 4 bytes 00:11:14.238 NVM Subsystem Reset: Not Supported 00:11:14.238 Command Sets Supported 00:11:14.238 NVM Command Set: Supported 00:11:14.238 Boot Partition: Not Supported 00:11:14.238 Memory Page Size Minimum: 4096 bytes 00:11:14.238 Memory Page Size Maximum: 65536 bytes 00:11:14.238 Persistent Memory Region: Not Supported 00:11:14.238 Optional Asynchronous Events Supported 00:11:14.238 Namespace Attribute Notices: Supported 00:11:14.238 Firmware Activation Notices: Not Supported 00:11:14.238 ANA Change Notices: Not Supported 00:11:14.238 PLE Aggregate Log Change Notices: Not Supported 00:11:14.238 LBA Status Info Alert Notices: Not Supported 00:11:14.238 EGE Aggregate Log Change Notices: Not Supported 00:11:14.238 Normal NVM Subsystem Shutdown event: Not Supported 00:11:14.238 Zone Descriptor Change Notices: 
Not Supported 00:11:14.238 Discovery Log Change Notices: Not Supported 00:11:14.238 Controller Attributes 00:11:14.238 128-bit Host Identifier: Not Supported 00:11:14.238 Non-Operational Permissive Mode: Not Supported 00:11:14.238 NVM Sets: Not Supported 00:11:14.238 Read Recovery Levels: Not Supported 00:11:14.238 Endurance Groups: Not Supported 00:11:14.238 Predictable Latency Mode: Not Supported 00:11:14.238 Traffic Based Keep ALive: Not Supported 00:11:14.238 Namespace Granularity: Not Supported 00:11:14.238 SQ Associations: Not Supported 00:11:14.238 UUID List: Not Supported 00:11:14.238 Multi-Domain Subsystem: Not Supported 00:11:14.238 Fixed Capacity Management: Not Supported 00:11:14.238 Variable Capacity Management: Not Supported 00:11:14.238 Delete Endurance Group: Not Supported 00:11:14.238 Delete NVM Set: Not Supported 00:11:14.238 Extended LBA Formats Supported: Supported 00:11:14.238 Flexible Data Placement Supported: Not Supported 00:11:14.238 00:11:14.238 Controller Memory Buffer Support 00:11:14.238 ================================ 00:11:14.238 Supported: No 00:11:14.238 00:11:14.238 Persistent Memory Region Support 00:11:14.238 ================================ 00:11:14.238 Supported: No 00:11:14.238 00:11:14.238 Admin Command Set Attributes 00:11:14.238 ============================ 00:11:14.238 Security Send/Receive: Not Supported 00:11:14.238 Format NVM: Supported 00:11:14.238 Firmware Activate/Download: Not Supported 00:11:14.238 Namespace Management: Supported 00:11:14.238 Device Self-Test: Not Supported 00:11:14.238 Directives: Supported 00:11:14.238 NVMe-MI: Not Supported 00:11:14.238 Virtualization Management: Not Supported 00:11:14.238 Doorbell Buffer Config: Supported 00:11:14.238 Get LBA Status Capability: Not Supported 00:11:14.238 Command & Feature Lockdown Capability: Not Supported 00:11:14.238 Abort Command Limit: 4 00:11:14.238 Async Event Request Limit: 4 00:11:14.238 Number of Firmware Slots: N/A 00:11:14.238 Firmware Slot 1 Read-Only: N/A 00:11:14.238 Firmware Activation Without Reset: N/A 00:11:14.238 Multiple Update Detection Support: N/A 00:11:14.238 Firmware Update Granularity: No Information Provided 00:11:14.238 Per-Namespace SMART Log: Yes 00:11:14.238 Asymmetric Namespace Access Log Page: Not Supported 00:11:14.238 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:14.238 Command Effects Log Page: Supported 00:11:14.238 Get Log Page Extended Data: Supported 00:11:14.238 Telemetry Log Pages: Not Supported 00:11:14.238 Persistent Event Log Pages: Not Supported 00:11:14.238 Supported Log Pages Log Page: May Support 00:11:14.238 Commands Supported & Effects Log Page: Not Supported 00:11:14.238 Feature Identifiers & Effects Log Page:May Support 00:11:14.238 NVMe-MI Commands & Effects Log Page: May Support 00:11:14.238 Data Area 4 for Telemetry Log: Not Supported 00:11:14.238 Error Log Page Entries Supported: 1 00:11:14.238 Keep Alive: Not Supported 00:11:14.238 00:11:14.238 NVM Command Set Attributes 00:11:14.238 ========================== 00:11:14.238 Submission Queue Entry Size 00:11:14.238 Max: 64 00:11:14.238 Min: 64 00:11:14.238 Completion Queue Entry Size 00:11:14.238 Max: 16 00:11:14.238 Min: 16 00:11:14.238 Number of Namespaces: 256 00:11:14.238 Compare Command: Supported 00:11:14.238 Write Uncorrectable Command: Not Supported 00:11:14.238 Dataset Management Command: Supported 00:11:14.238 Write Zeroes Command: Supported 00:11:14.238 Set Features Save Field: Supported 00:11:14.238 Reservations: Not Supported 00:11:14.238 Timestamp: Supported 
00:11:14.238 Copy: Supported 00:11:14.238 Volatile Write Cache: Present 00:11:14.238 Atomic Write Unit (Normal): 1 00:11:14.238 Atomic Write Unit (PFail): 1 00:11:14.238 Atomic Compare & Write Unit: 1 00:11:14.238 Fused Compare & Write: Not Supported 00:11:14.238 Scatter-Gather List 00:11:14.238 SGL Command Set: Supported 00:11:14.238 SGL Keyed: Not Supported 00:11:14.238 SGL Bit Bucket Descriptor: Not Supported 00:11:14.238 SGL Metadata Pointer: Not Supported 00:11:14.238 Oversized SGL: Not Supported 00:11:14.238 SGL Metadata Address: Not Supported 00:11:14.238 SGL Offset: Not Supported 00:11:14.238 Transport SGL Data Block: Not Supported 00:11:14.238 Replay Protected Memory Block: Not Supported 00:11:14.238 00:11:14.238 Firmware Slot Information 00:11:14.238 ========================= 00:11:14.238 Active slot: 1 00:11:14.238 Slot 1 Firmware Revision: 1.0 00:11:14.238 00:11:14.238 00:11:14.238 Commands Supported and Effects 00:11:14.238 ============================== 00:11:14.238 Admin Commands 00:11:14.238 -------------- 00:11:14.238 Delete I/O Submission Queue (00h): Supported 00:11:14.238 Create I/O Submission Queue (01h): Supported 00:11:14.238 Get Log Page (02h): Supported 00:11:14.238 Delete I/O Completion Queue (04h): Supported 00:11:14.238 Create I/O Completion Queue (05h): Supported 00:11:14.238 Identify (06h): Supported 00:11:14.238 Abort (08h): Supported 00:11:14.238 Set Features (09h): Supported 00:11:14.238 Get Features (0Ah): Supported 00:11:14.239 Asynchronous Event Request (0Ch): Supported 00:11:14.239 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:14.239 Directive Send (19h): Supported 00:11:14.239 Directive Receive (1Ah): Supported 00:11:14.239 Virtualization Management (1Ch): Supported 00:11:14.239 Doorbell Buffer Config (7Ch): Supported 00:11:14.239 Format NVM (80h): Supported LBA-Change 00:11:14.239 I/O Commands 00:11:14.239 ------------ 00:11:14.239 Flush (00h): Supported LBA-Change 00:11:14.239 Write (01h): Supported LBA-Change 00:11:14.239 Read (02h): Supported 00:11:14.239 Compare (05h): Supported 00:11:14.239 Write Zeroes (08h): Supported LBA-Change 00:11:14.239 Dataset Management (09h): Supported LBA-Change 00:11:14.239 Unknown (0Ch): Supported 00:11:14.239 Unknown (12h): Supported 00:11:14.239 Copy (19h): Supported LBA-Change 00:11:14.239 Unknown (1Dh): Supported LBA-Change 00:11:14.239 00:11:14.239 Error Log 00:11:14.239 ========= 00:11:14.239 00:11:14.239 Arbitration 00:11:14.239 =========== 00:11:14.239 Arbitration Burst: no limit 00:11:14.239 00:11:14.239 Power Management 00:11:14.239 ================ 00:11:14.239 Number of Power States: 1 00:11:14.239 Current Power State: Power State #0 00:11:14.239 Power State #0: 00:11:14.239 Max Power: 25.00 W 00:11:14.239 Non-Operational State: Operational 00:11:14.239 Entry Latency: 16 microseconds 00:11:14.239 Exit Latency: 4 microseconds 00:11:14.239 Relative Read Throughput: 0 00:11:14.239 Relative Read Latency: 0 00:11:14.239 Relative Write Throughput: 0 00:11:14.239 Relative Write Latency: 0 00:11:14.499 Idle Power: Not Reported 00:11:14.499 Active Power: Not Reported 00:11:14.499 Non-Operational Permissive Mode: Not Supported 00:11:14.499 00:11:14.499 Health Information 00:11:14.499 ================== 00:11:14.499 Critical Warnings: 00:11:14.499 Available Spare Space: OK 00:11:14.499 Temperature: OK 00:11:14.499 Device Reliability: OK 00:11:14.499 Read Only: No 00:11:14.499 Volatile Memory Backup: OK 00:11:14.499 Current Temperature: 323 Kelvin (50 Celsius) 00:11:14.499 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:11:14.499 Available Spare: 0% 00:11:14.499 Available Spare Threshold: 0% 00:11:14.499 Life Percentage Used: 0% 00:11:14.499 Data Units Read: 25528 00:11:14.499 Data Units Written: 12854 00:11:14.499 Host Read Commands: 553939 00:11:14.499 Host Write Commands: 278028 00:11:14.499 Controller Busy Time: 0 minutes 00:11:14.499 Power Cycles: 0 00:11:14.499 Power On Hours: 0 hours 00:11:14.499 Unsafe Shutdowns: 0 00:11:14.499 Unrecoverable Media Errors: 0 00:11:14.499 Lifetime Error Log Entries: 0 00:11:14.499 Warning Temperature Time: 0 minutes 00:11:14.499 Critical Temperature Time: 0 minutes 00:11:14.499 00:11:14.499 Number of Queues 00:11:14.499 ================ 00:11:14.499 Number of I/O Submission Queues: 64 00:11:14.499 Number of I/O Completion Queues: 64 00:11:14.499 00:11:14.499 ZNS Specific Controller Data 00:11:14.499 ============================ 00:11:14.499 Zone Append Size Limit: 0 00:11:14.499 00:11:14.499 00:11:14.499 Active Namespaces 00:11:14.499 ================= 00:11:14.499 Namespace ID:1 00:11:14.499 Error Recovery Timeout: Unlimited 00:11:14.499 Command Set Identifier: NVM (00h) 00:11:14.499 Deallocate: Supported 00:11:14.499 Deallocated/Unwritten Error: Supported 00:11:14.499 Deallocated Read Value: All 0x00 00:11:14.499 Deallocate in Write Zeroes: Not Supported 00:11:14.499 Deallocated Guard Field: 0xFFFF 00:11:14.499 Flush: Supported 00:11:14.499 Reservation: Not Supported 00:11:14.499 Namespace Sharing Capabilities: Private 00:11:14.499 Size (in LBAs): 1310720 (5GiB) 00:11:14.499 Capacity (in LBAs): 1310720 (5GiB) 00:11:14.499 Utilization (in LBAs): 1310720 (5GiB) 00:11:14.499 Thin Provisioning: Not Supported 00:11:14.499 Per-NS Atomic Units: No 00:11:14.499 Maximum Single Source Range Length: 128 00:11:14.499 Maximum Copy Length: 128 00:11:14.499 Maximum Source Range Count: 128 00:11:14.499 NGUID/EUI64 Never Reused: No 00:11:14.499 Namespace Write Protected: No 00:11:14.499 Number of LBA Formats: 8 00:11:14.499 Current LBA Format: LBA Format #04 00:11:14.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:14.499 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:14.499 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:14.499 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:14.499 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:14.499 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:14.499 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:14.499 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:14.499 00:11:14.499 00:11:14.499 real 0m1.050s 00:11:14.499 user 0m0.080s 00:11:14.499 sys 0m0.991s 00:11:14.499 20:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.499 20:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:14.499 ************************************ 00:11:14.499 END TEST nvme_identify 00:11:14.499 ************************************ 00:11:14.499 20:49:05 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:14.499 20:49:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:14.499 20:49:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.499 20:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:14.499 ************************************ 00:11:14.499 START TEST nvme_perf 00:11:14.499 ************************************ 00:11:14.499 20:49:05 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:14.499 20:49:05 -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 
00:11:14.759 EAL: TSC is not safe to use in SMP mode 00:11:14.759 EAL: TSC is not invariant 00:11:15.018 [2024-04-16 20:49:05.888604] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:15.955 Initializing NVMe Controllers 00:11:15.955 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:15.955 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:15.955 Initialization complete. Launching workers. 00:11:15.955 ======================================================== 00:11:15.955 Latency(us) 00:11:15.955 Device Information : IOPS MiB/s Average min max 00:11:15.955 PCIE (0000:00:06.0) NSID 1 from core 0: 105304.44 1234.04 1215.49 928.19 3623.17 00:11:15.955 ======================================================== 00:11:15.955 Total : 105304.44 1234.04 1215.49 928.19 3623.17 00:11:15.955 00:11:15.955 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:15.955 ================================================================================= 00:11:15.955 1.00000% : 999.633us 00:11:15.955 10.00000% : 1078.176us 00:11:15.955 25.00000% : 1135.298us 00:11:15.955 50.00000% : 1206.700us 00:11:15.955 75.00000% : 1278.103us 00:11:15.955 90.00000% : 1342.365us 00:11:15.955 95.00000% : 1378.066us 00:11:15.955 98.00000% : 1478.029us 00:11:15.955 99.00000% : 1777.919us 00:11:15.955 99.50000% : 2213.474us 00:11:15.955 99.90000% : 3013.180us 00:11:15.955 99.99000% : 3570.119us 00:11:15.955 99.99900% : 3612.960us 00:11:15.955 99.99990% : 3627.241us 00:11:15.956 99.99999% : 3627.241us 00:11:15.956 00:11:15.956 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:15.956 ============================================================================== 00:11:15.956 Range in us Cumulative IO count 00:11:15.956 921.091 - 928.231: 0.0009% ( 1) 00:11:15.956 928.231 - 935.371: 0.0057% ( 5) 00:11:15.956 935.371 - 942.511: 0.0199% ( 15) 00:11:15.956 942.511 - 949.652: 0.0503% ( 32) 00:11:15.956 949.652 - 956.792: 0.0816% ( 33) 00:11:15.956 956.792 - 963.932: 0.1338% ( 55) 00:11:15.956 963.932 - 971.072: 0.1945% ( 64) 00:11:15.956 971.072 - 978.213: 0.3027% ( 114) 00:11:15.956 978.213 - 985.353: 0.4545% ( 160) 00:11:15.956 985.353 - 992.493: 0.6738% ( 231) 00:11:15.956 992.493 - 999.633: 1.0002% ( 344) 00:11:15.956 999.633 - 1006.774: 1.4149% ( 437) 00:11:15.956 1006.774 - 1013.914: 1.9767% ( 592) 00:11:15.956 1013.914 - 1021.054: 2.6191% ( 677) 00:11:15.956 1021.054 - 1028.194: 3.3754% ( 797) 00:11:15.956 1028.194 - 1035.334: 4.2646% ( 937) 00:11:15.956 1035.334 - 1042.475: 5.2325% ( 1020) 00:11:15.956 1042.475 - 1049.615: 6.2830% ( 1107) 00:11:15.956 1049.615 - 1056.755: 7.4170% ( 1195) 00:11:15.956 1056.755 - 1063.895: 8.6070% ( 1254) 00:11:15.956 1063.895 - 1071.036: 9.8767% ( 1338) 00:11:15.956 1071.036 - 1078.176: 11.1939% ( 1388) 00:11:15.956 1078.176 - 1085.316: 12.6031% ( 1485) 00:11:15.956 1085.316 - 1092.456: 14.1831% ( 1665) 00:11:15.956 1092.456 - 1099.597: 15.8267% ( 1732) 00:11:15.956 1099.597 - 1106.737: 17.5661% ( 1833) 00:11:15.956 1106.737 - 1113.877: 19.3549% ( 1885) 00:11:15.956 1113.877 - 1121.017: 21.2889% ( 2038) 00:11:15.956 1121.017 - 1128.158: 23.3737% ( 2197) 00:11:15.956 1128.158 - 1135.298: 25.6218% ( 2369) 00:11:15.956 1135.298 - 1142.438: 27.9287% ( 2431) 00:11:15.956 1142.438 - 1149.578: 30.3486% ( 2550) 00:11:15.956 1149.578 - 1156.719: 32.8671% ( 2654) 00:11:15.956 1156.719 - 1163.859: 35.4758% ( 2749) 00:11:15.956 1163.859 - 1170.999: 38.1395% ( 2807) 00:11:15.956 1170.999 - 1178.139: 40.9285% ( 
2939) 00:11:15.956 1178.139 - 1185.279: 43.7061% ( 2927) 00:11:15.956 1185.279 - 1192.420: 46.5064% ( 2951) 00:11:15.956 1192.420 - 1199.560: 49.2897% ( 2933) 00:11:15.956 1199.560 - 1206.700: 52.0626% ( 2922) 00:11:15.956 1206.700 - 1213.840: 54.8088% ( 2894) 00:11:15.956 1213.840 - 1220.981: 57.5153% ( 2852) 00:11:15.956 1220.981 - 1228.121: 60.1780% ( 2806) 00:11:15.956 1228.121 - 1235.261: 62.7516% ( 2712) 00:11:15.956 1235.261 - 1242.401: 65.2730% ( 2657) 00:11:15.956 1242.401 - 1249.542: 67.6748% ( 2531) 00:11:15.956 1249.542 - 1256.682: 69.9817% ( 2431) 00:11:15.956 1256.682 - 1263.822: 72.1738% ( 2310) 00:11:15.956 1263.822 - 1270.962: 74.2966% ( 2237) 00:11:15.956 1270.962 - 1278.103: 76.3017% ( 2113) 00:11:15.956 1278.103 - 1285.243: 78.2139% ( 2015) 00:11:15.956 1285.243 - 1292.383: 80.0520% ( 1937) 00:11:15.956 1292.383 - 1299.523: 81.8133% ( 1856) 00:11:15.956 1299.523 - 1306.664: 83.4939% ( 1771) 00:11:15.956 1306.664 - 1313.804: 85.1289% ( 1723) 00:11:15.956 1313.804 - 1320.944: 86.6643% ( 1618) 00:11:15.956 1320.944 - 1328.084: 88.1513% ( 1567) 00:11:15.956 1328.084 - 1335.224: 89.5197% ( 1442) 00:11:15.956 1335.224 - 1342.365: 90.8018% ( 1351) 00:11:15.956 1342.365 - 1349.505: 91.9491% ( 1209) 00:11:15.956 1349.505 - 1356.645: 92.9663% ( 1072) 00:11:15.956 1356.645 - 1363.785: 93.8441% ( 925) 00:11:15.956 1363.785 - 1370.926: 94.6071% ( 804) 00:11:15.956 1370.926 - 1378.066: 95.2609% ( 689) 00:11:15.956 1378.066 - 1385.206: 95.8037% ( 572) 00:11:15.956 1385.206 - 1392.346: 96.2459% ( 466) 00:11:15.956 1392.346 - 1399.487: 96.6132% ( 387) 00:11:15.956 1399.487 - 1406.627: 96.9225% ( 326) 00:11:15.956 1406.627 - 1413.767: 97.1655% ( 256) 00:11:15.956 1413.767 - 1420.907: 97.3600% ( 205) 00:11:15.956 1420.907 - 1428.048: 97.5137% ( 162) 00:11:15.956 1428.048 - 1435.188: 97.6456% ( 139) 00:11:15.956 1435.188 - 1442.328: 97.7443% ( 104) 00:11:15.956 1442.328 - 1449.468: 97.8231% ( 83) 00:11:15.956 1449.468 - 1456.608: 97.8867% ( 67) 00:11:15.956 1456.608 - 1463.749: 97.9465% ( 63) 00:11:15.956 1463.749 - 1470.889: 97.9949% ( 51) 00:11:15.956 1470.889 - 1478.029: 98.0300% ( 37) 00:11:15.956 1478.029 - 1485.169: 98.0660% ( 38) 00:11:15.956 1485.169 - 1492.310: 98.0964% ( 32) 00:11:15.956 1492.310 - 1499.450: 98.1211% ( 26) 00:11:15.956 1499.450 - 1506.590: 98.1486% ( 29) 00:11:15.956 1506.590 - 1513.730: 98.1771% ( 30) 00:11:15.956 1513.730 - 1520.871: 98.2036% ( 28) 00:11:15.956 1520.871 - 1528.011: 98.2311% ( 29) 00:11:15.956 1528.011 - 1535.151: 98.2606% ( 31) 00:11:15.956 1535.151 - 1542.291: 98.2824% ( 23) 00:11:15.956 1542.291 - 1549.432: 98.3042% ( 23) 00:11:15.956 1549.432 - 1556.572: 98.3279% ( 25) 00:11:15.956 1556.572 - 1563.712: 98.3517% ( 25) 00:11:15.956 1563.712 - 1570.852: 98.3744% ( 24) 00:11:15.956 1570.852 - 1577.993: 98.3963% ( 23) 00:11:15.956 1577.993 - 1585.133: 98.4152% ( 20) 00:11:15.956 1585.133 - 1592.273: 98.4399% ( 26) 00:11:15.956 1592.273 - 1599.413: 98.4655% ( 27) 00:11:15.956 1599.413 - 1606.553: 98.4902% ( 26) 00:11:15.956 1606.553 - 1613.694: 98.5149% ( 26) 00:11:15.956 1613.694 - 1620.834: 98.5367% ( 23) 00:11:15.956 1620.834 - 1627.974: 98.5604% ( 25) 00:11:15.956 1627.974 - 1635.114: 98.5794% ( 20) 00:11:15.956 1635.114 - 1642.255: 98.5993% ( 21) 00:11:15.956 1642.255 - 1649.395: 98.6164% ( 18) 00:11:15.956 1649.395 - 1656.535: 98.6373% ( 22) 00:11:15.956 1656.535 - 1663.675: 98.6553% ( 19) 00:11:15.956 1663.675 - 1670.816: 98.6800% ( 26) 00:11:15.956 1670.816 - 1677.956: 98.6999% ( 21) 00:11:15.956 1677.956 - 1685.096: 98.7246% ( 26) 
00:11:15.956 1685.096 - 1692.236: 98.7483% ( 25) 00:11:15.956 1692.236 - 1699.377: 98.7730% ( 26) 00:11:15.956 1699.377 - 1706.517: 98.7958% ( 24) 00:11:15.956 1706.517 - 1713.657: 98.8204% ( 26) 00:11:15.956 1713.657 - 1720.797: 98.8442% ( 25) 00:11:15.956 1720.797 - 1727.938: 98.8688% ( 26) 00:11:15.956 1727.938 - 1735.078: 98.8935% ( 26) 00:11:15.956 1735.078 - 1742.218: 98.9153% ( 23) 00:11:15.956 1742.218 - 1749.358: 98.9353% ( 21) 00:11:15.956 1749.358 - 1756.498: 98.9571% ( 23) 00:11:15.956 1756.498 - 1763.639: 98.9742% ( 18) 00:11:15.956 1763.639 - 1770.779: 98.9884% ( 15) 00:11:15.956 1770.779 - 1777.919: 99.0007% ( 13) 00:11:15.956 1777.919 - 1785.059: 99.0112% ( 11) 00:11:15.956 1785.059 - 1792.200: 99.0216% ( 11) 00:11:15.956 1792.200 - 1799.340: 99.0292% ( 8) 00:11:15.956 1799.340 - 1806.480: 99.0397% ( 11) 00:11:15.956 1806.480 - 1813.620: 99.0472% ( 8) 00:11:15.956 1813.620 - 1820.761: 99.0577% ( 11) 00:11:15.956 1820.761 - 1827.901: 99.0653% ( 8) 00:11:15.956 1827.901 - 1842.181: 99.0833% ( 19) 00:11:15.956 1842.181 - 1856.462: 99.0966% ( 14) 00:11:15.956 1856.462 - 1870.742: 99.1118% ( 16) 00:11:15.956 1870.742 - 1885.023: 99.1251% ( 14) 00:11:15.956 1885.023 - 1899.303: 99.1346% ( 10) 00:11:15.956 1913.584 - 1927.864: 99.1365% ( 2) 00:11:15.956 1942.145 - 1956.425: 99.1374% ( 1) 00:11:15.956 1956.425 - 1970.706: 99.1383% ( 1) 00:11:15.956 1999.267 - 2013.547: 99.1402% ( 2) 00:11:15.956 2013.547 - 2027.828: 99.1459% ( 6) 00:11:15.956 2027.828 - 2042.108: 99.1516% ( 6) 00:11:15.956 2042.108 - 2056.388: 99.1583% ( 7) 00:11:15.956 2056.388 - 2070.669: 99.1668% ( 9) 00:11:15.956 2070.669 - 2084.949: 99.1839% ( 18) 00:11:15.956 2084.949 - 2099.230: 99.2038% ( 21) 00:11:15.956 2099.230 - 2113.510: 99.2323% ( 30) 00:11:15.956 2113.510 - 2127.791: 99.2712% ( 41) 00:11:15.956 2127.791 - 2142.071: 99.3120% ( 43) 00:11:15.956 2142.071 - 2156.352: 99.3443% ( 34) 00:11:15.956 2156.352 - 2170.632: 99.3851% ( 43) 00:11:15.956 2170.632 - 2184.913: 99.4268% ( 44) 00:11:15.956 2184.913 - 2199.193: 99.4667% ( 42) 00:11:15.956 2199.193 - 2213.474: 99.5037% ( 39) 00:11:15.957 2213.474 - 2227.754: 99.5284% ( 26) 00:11:15.957 2227.754 - 2242.035: 99.5426% ( 15) 00:11:15.957 2242.035 - 2256.315: 99.5549% ( 13) 00:11:15.957 2256.315 - 2270.596: 99.5635% ( 9) 00:11:15.957 2270.596 - 2284.876: 99.5682% ( 5) 00:11:15.957 2284.876 - 2299.157: 99.5758% ( 8) 00:11:15.957 2299.157 - 2313.437: 99.5825% ( 7) 00:11:15.957 2313.437 - 2327.717: 99.5882% ( 6) 00:11:15.957 2327.717 - 2341.998: 99.5948% ( 7) 00:11:15.957 2341.998 - 2356.278: 99.6014% ( 7) 00:11:15.957 2356.278 - 2370.559: 99.6081% ( 7) 00:11:15.957 2370.559 - 2384.839: 99.6147% ( 7) 00:11:15.957 2384.839 - 2399.120: 99.6204% ( 6) 00:11:15.957 2399.120 - 2413.400: 99.6271% ( 7) 00:11:15.957 2413.400 - 2427.681: 99.6337% ( 7) 00:11:15.957 2427.681 - 2441.961: 99.6384% ( 5) 00:11:15.957 2441.961 - 2456.242: 99.6441% ( 6) 00:11:15.957 2456.242 - 2470.522: 99.6517% ( 8) 00:11:15.957 2470.522 - 2484.803: 99.6574% ( 6) 00:11:15.957 2484.803 - 2499.083: 99.6679% ( 11) 00:11:15.957 2499.083 - 2513.364: 99.6830% ( 16) 00:11:15.957 2513.364 - 2527.644: 99.7039% ( 22) 00:11:15.957 2527.644 - 2541.925: 99.7258% ( 23) 00:11:15.957 2541.925 - 2556.205: 99.7485% ( 24) 00:11:15.957 2556.205 - 2570.486: 99.7656% ( 18) 00:11:15.957 2570.486 - 2584.766: 99.7855% ( 21) 00:11:15.957 2584.766 - 2599.047: 99.8026% ( 18) 00:11:15.957 2599.047 - 2613.327: 99.8187% ( 17) 00:11:15.957 2613.327 - 2627.607: 99.8301% ( 12) 00:11:15.957 2627.607 - 2641.888: 99.8434% ( 14) 
00:11:15.957 2641.888 - 2656.168: 99.8558% ( 13) 00:11:15.957 2656.168 - 2670.449: 99.8662% ( 11) 00:11:15.957 2670.449 - 2684.729: 99.8719% ( 6) 00:11:15.957 2684.729 - 2699.010: 99.8785% ( 7) 00:11:15.957 2884.656 - 2898.937: 99.8795% ( 1) 00:11:15.957 2898.937 - 2913.217: 99.8814% ( 2) 00:11:15.957 2913.217 - 2927.497: 99.8842% ( 3) 00:11:15.957 2927.497 - 2941.778: 99.8871% ( 3) 00:11:15.957 2941.778 - 2956.058: 99.8899% ( 3) 00:11:15.957 2956.058 - 2970.339: 99.8928% ( 3) 00:11:15.957 2970.339 - 2984.619: 99.8966% ( 4) 00:11:15.957 2984.619 - 2998.900: 99.8985% ( 2) 00:11:15.957 2998.900 - 3013.180: 99.9013% ( 3) 00:11:15.957 3013.180 - 3027.461: 99.9051% ( 4) 00:11:15.957 3027.461 - 3041.741: 99.9080% ( 3) 00:11:15.957 3041.741 - 3056.022: 99.9108% ( 3) 00:11:15.957 3056.022 - 3070.302: 99.9136% ( 3) 00:11:15.957 3070.302 - 3084.583: 99.9174% ( 4) 00:11:15.957 3084.583 - 3098.863: 99.9203% ( 3) 00:11:15.957 3098.863 - 3113.144: 99.9231% ( 3) 00:11:15.957 3113.144 - 3127.424: 99.9250% ( 2) 00:11:15.957 3127.424 - 3141.705: 99.9269% ( 2) 00:11:15.957 3141.705 - 3155.985: 99.9288% ( 2) 00:11:15.957 3284.509 - 3298.790: 99.9317% ( 3) 00:11:15.957 3298.790 - 3313.070: 99.9345% ( 3) 00:11:15.957 3313.070 - 3327.351: 99.9383% ( 4) 00:11:15.957 3327.351 - 3341.631: 99.9412% ( 3) 00:11:15.957 3341.631 - 3355.912: 99.9440% ( 3) 00:11:15.957 3355.912 - 3370.192: 99.9478% ( 4) 00:11:15.957 3370.192 - 3384.473: 99.9497% ( 2) 00:11:15.957 3384.473 - 3398.753: 99.9535% ( 4) 00:11:15.957 3398.753 - 3413.034: 99.9563% ( 3) 00:11:15.957 3413.034 - 3427.314: 99.9592% ( 3) 00:11:15.957 3427.314 - 3441.595: 99.9630% ( 4) 00:11:15.957 3441.595 - 3455.875: 99.9658% ( 3) 00:11:15.957 3455.875 - 3470.156: 99.9696% ( 4) 00:11:15.957 3470.156 - 3484.436: 99.9715% ( 2) 00:11:15.957 3484.436 - 3498.716: 99.9753% ( 4) 00:11:15.957 3498.716 - 3512.997: 99.9782% ( 3) 00:11:15.957 3512.997 - 3527.277: 99.9810% ( 3) 00:11:15.957 3527.277 - 3541.558: 99.9848% ( 4) 00:11:15.957 3541.558 - 3555.838: 99.9877% ( 3) 00:11:15.957 3555.838 - 3570.119: 99.9915% ( 4) 00:11:15.957 3570.119 - 3584.399: 99.9943% ( 3) 00:11:15.957 3584.399 - 3598.680: 99.9962% ( 2) 00:11:15.957 3598.680 - 3612.960: 99.9991% ( 3) 00:11:15.957 3612.960 - 3627.241: 100.0000% ( 1) 00:11:15.957 00:11:15.957 20:49:06 -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:16.556 EAL: TSC is not safe to use in SMP mode 00:11:16.556 EAL: TSC is not invariant 00:11:16.556 [2024-04-16 20:49:07.392698] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:17.489 Initializing NVMe Controllers 00:11:17.489 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:17.489 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:17.489 Initialization complete. Launching workers. 
00:11:16.556 EAL: TSC is not safe to use in SMP mode
00:11:16.556 EAL: TSC is not invariant
00:11:16.556 [2024-04-16 20:49:07.392698] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:17.489 Initializing NVMe Controllers
00:11:17.489 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:11:17.489 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:11:17.489 Initialization complete. Launching workers.
00:11:17.489 ========================================================
00:11:17.489                                                 Latency(us)
00:11:17.489 Device Information                     :     IOPS     MiB/s   Average       min       max
00:11:17.489 PCIE (0000:00:06.0) NSID 1 from core 0: 88798.39   1040.61   1441.98    443.45  13976.39
00:11:17.489 ========================================================
00:11:17.489 Total                                  : 88798.39   1040.61   1441.98    443.45  13976.39
00:11:17.489
00:11:17.489 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:17.489 =================================================================================
00:11:17.489  1.00000% :   860.399us
00:11:17.489 10.00000% :  1021.054us
00:11:17.489 25.00000% :  1156.719us
00:11:17.489 50.00000% :  1328.084us
00:11:17.489 75.00000% :  1577.993us
00:11:17.489 90.00000% :  1956.425us
00:11:17.489 95.00000% :  2270.596us
00:11:17.489 98.00000% :  2784.693us
00:11:17.489 99.00000% :  3827.167us
00:11:17.489 99.50000% :  4284.143us
00:11:17.489 99.90000% :  6311.970us
00:11:17.489 99.99000% : 11595.746us
00:11:17.489 99.99900% : 13994.866us
00:11:17.489 99.99990% : 13994.866us
00:11:17.489 99.99999% : 13994.866us
00:11:17.489
00:11:17.489 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:17.489 ==============================================================================
00:11:17.489 Range in us     Cumulative    IO count
00:11:17.489 [latency histogram: buckets 442.695us through 13994.866us, cumulative IO count 0.0011% -> 100.0000%]
00:11:18.425
00:11:18.425 20:49:09 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:18.425
00:11:18.425 real 0m4.087s
00:11:18.425 user 0m3.157s
00:11:18.425 sys 0m0.928s
00:11:18.425 20:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:18.425 20:49:09 -- common/autotest_common.sh@10 -- # set +x
00:11:18.425 ************************************
00:11:18.425 END TEST nvme_perf
00:11:18.425 ************************************
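Each of the remaining tests is launched through the run_test helper from autotest_common.sh, which prints the START/END banners and the real/user/sys timings interleaved through this log. A simplified sketch of what that wrapper does (a reconstruction for orientation, not SPDK's exact implementation, which also carries the argument-count checks traced above):

    # Hypothetical, simplified run_test: banner, time the command, banner.
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines seen in this log
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }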
00:11:18.683 20:49:09 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:18.683 20:49:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:18.683 20:49:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:18.683 20:49:09 -- common/autotest_common.sh@10 -- # set +x
00:11:18.683 ************************************
00:11:18.683 START TEST nvme_hello_world
00:11:18.683 ************************************
00:11:18.683 20:49:09 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:18.942 EAL: TSC is not safe to use in SMP mode
00:11:18.942 EAL: TSC is not invariant
00:11:18.942 [2024-04-16 20:49:10.024489] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:19.200 Initializing NVMe Controllers
00:11:19.200 Attaching to 0000:00:06.0
00:11:19.200 Attached to 0000:00:06.0
00:11:19.200 Namespace ID: 1 size: 5GB
00:11:19.200 Initialization complete.
00:11:19.200 INFO: using host memory buffer for IO
00:11:19.200 Hello world!
00:11:19.200
00:11:19.200 real 0m0.491s
00:11:19.200 user 0m0.006s
00:11:19.200 sys 0m0.484s
00:11:19.200 20:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:19.200 20:49:10 -- common/autotest_common.sh@10 -- # set +x
00:11:19.200 ************************************
00:11:19.200 END TEST nvme_hello_world
00:11:19.200 ************************************
00:11:19.201 20:49:10 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:19.201 20:49:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:19.201 20:49:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:19.201 20:49:10 -- common/autotest_common.sh@10 -- # set +x
00:11:19.201 ************************************
00:11:19.201 START TEST nvme_sgl
00:11:19.201 ************************************
00:11:19.201 20:49:10 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:19.459 EAL: TSC is not safe to use in SMP mode
00:11:19.459 EAL: TSC is not invariant
00:11:19.459 [2024-04-16 20:49:10.555595] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:19.459 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:11:19.459 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:11:19.459 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:11:19.459 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:11:19.459 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:11:19.459 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:11:19.717 NVMe Readv/Writev Request test
00:11:19.717 Attaching to 0000:00:06.0
00:11:19.717 Attached to 0000:00:06.0
00:11:19.717 0000:00:06.0: build_io_request_2 test passed
00:11:19.717 0000:00:06.0: build_io_request_4 test passed
00:11:19.717 0000:00:06.0: build_io_request_5 test passed
00:11:19.717 0000:00:06.0: build_io_request_6 test passed
00:11:19.717 0000:00:06.0: build_io_request_7 test passed
00:11:19.717 0000:00:06.0: build_io_request_10 test passed
00:11:19.717 Cleaning up...
00:11:19.717
00:11:19.717 real 0m0.494s
00:11:19.717 user 0m0.028s
00:11:19.717 sys 0m0.469s
00:11:19.717 20:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:19.717 20:49:10 -- common/autotest_common.sh@10 -- # set +x
00:11:19.717 ************************************
00:11:19.717 END TEST nvme_sgl
00:11:19.717 ************************************
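The SGL pass mixes six expected rejections ("Invalid IO length parameter") with six passing requests. A quick sketch for re-running it standalone and tallying both outcomes (counts as observed in this run):

    # Sketch: rerun the SGL test and count accepted vs. rejected requests.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    out=$("$SPDK/test/nvme/sgl/sgl" 2>&1)
    echo "$out" | grep -c 'test passed'                  # 6 in this run
    echo "$out" | grep -c 'Invalid IO length parameter'  # 6 in this run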
00:11:19.717 20:49:10 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:19.717 20:49:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:19.717 20:49:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:19.717 20:49:10 -- common/autotest_common.sh@10 -- # set +x
00:11:19.717 ************************************
00:11:19.717 START TEST nvme_e2edp
00:11:19.717 ************************************
00:11:19.717 20:49:10 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:19.976 EAL: TSC is not safe to use in SMP mode
00:11:19.976 EAL: TSC is not invariant
00:11:19.976 [2024-04-16 20:49:11.101278] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:20.234 NVMe Write/Read with End-to-End data protection test
00:11:20.234 Attaching to 0000:00:06.0
00:11:20.234 Attached to 0000:00:06.0
00:11:20.234 Cleaning up...
00:11:20.234
00:11:20.234 real 0m0.489s
00:11:20.234 user 0m0.022s
00:11:20.234 sys 0m0.469s
00:11:20.234 20:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:20.234 20:49:11 -- common/autotest_common.sh@10 -- # set +x
00:11:20.235 ************************************
00:11:20.235 END TEST nvme_e2edp
00:11:20.235 ************************************
00:11:20.235 20:49:11 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:20.235 20:49:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:20.235 20:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:20.235 20:49:11 -- common/autotest_common.sh@10 -- # set +x
00:11:20.235 ************************************
00:11:20.235 START TEST nvme_reserve
00:11:20.235 ************************************
00:11:20.235 20:49:11 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:20.801 EAL: TSC is not safe to use in SMP mode
00:11:20.801 EAL: TSC is not invariant
00:11:20.801 [2024-04-16 20:49:11.635291] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:20.801 =====================================================
00:11:20.801 NVMe Controller at PCI bus 0, device 6, function 0
00:11:20.801 =====================================================
00:11:20.801 Reservations: Not Supported
00:11:20.801 Reservation test passed
00:11:20.801
00:11:20.801 real 0m0.485s
00:11:20.801 user 0m0.025s
00:11:20.801 sys 0m0.461s
00:11:20.801 20:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:20.801 20:49:11 -- common/autotest_common.sh@10 -- # set +x
00:11:20.801 ************************************
00:11:20.801 END TEST nvme_reserve
00:11:20.801 ************************************
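The reserve pass shows the emulated controller does not support reservations, so the test passes trivially. One way to check that capability up front, assuming the identify example is built next to hello_world and arbitration and accepts the same -r transport string used by doorbell_aers later in this log:

    # Sketch with assumptions: identify binary path and -r option.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/identify" -r 'trtype:PCIe traddr:0000:00:06.0' | grep -i reserv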
00:11:20.801 20:49:11 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:20.801 20:49:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:20.801 20:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:20.801 20:49:11 -- common/autotest_common.sh@10 -- # set +x
00:11:20.801 ************************************
00:11:20.801 START TEST nvme_err_injection
00:11:20.801 ************************************
00:11:20.801 20:49:11 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:21.060 EAL: TSC is not safe to use in SMP mode
00:11:21.060 EAL: TSC is not invariant
00:11:21.060 [2024-04-16 20:49:12.171252] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:21.319 NVMe Error Injection test
00:11:21.319 Attaching to 0000:00:06.0
00:11:21.319 Attached to 0000:00:06.0
00:11:21.319 0000:00:06.0: get features failed as expected
00:11:21.319 0000:00:06.0: get features successfully as expected
00:11:21.319 0000:00:06.0: read failed as expected
00:11:21.319 0000:00:06.0: read successfully as expected
00:11:21.319 Cleaning up...
00:11:21.319
00:11:21.319 real 0m0.492s
00:11:21.319 user 0m0.013s
00:11:21.319 sys 0m0.478s
00:11:21.319 20:49:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:21.319 20:49:12 -- common/autotest_common.sh@10 -- # set +x
00:11:21.319 ************************************
00:11:21.319 END TEST nvme_err_injection
00:11:21.319 ************************************
00:11:21.319 20:49:12 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:21.319 20:49:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:11:21.319 20:49:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:21.319 20:49:12 -- common/autotest_common.sh@10 -- # set +x
00:11:21.319 ************************************
00:11:21.319 START TEST nvme_overhead
00:11:21.319 ************************************
00:11:21.319 20:49:12 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:21.578 EAL: TSC is not safe to use in SMP mode
00:11:21.578 EAL: TSC is not invariant
00:11:21.836 [2024-04-16 20:49:12.707376] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:22.775 Initializing NVMe Controllers
00:11:22.775 Attaching to 0000:00:06.0
00:11:22.775 Attached to 0000:00:06.0
00:11:22.775 Initialization complete. Launching workers.
00:11:22.775 submit (in ns) avg, min, max = 8580.0, 4675.3, 30991.9
00:11:22.775 complete (in ns) avg, min, max = 10256.4, 3968.4, 87868.0
00:11:22.775
00:11:22.775 Submit histogram
00:11:22.775 ================
00:11:22.775 Range in us     Cumulative     Count
00:11:22.775 [submit histogram: buckets 4.658us through 31.015us, cumulative count 0.0131% -> 100.0000%]
00:11:22.776
00:11:22.776 Complete histogram
00:11:22.776 ==================
00:11:22.776 Range in us     Cumulative     Count
00:11:22.776 [complete histogram: buckets 3.961us through 87.914us, cumulative count 0.0131% -> 100.0000%]
00:11:22.777
00:11:22.777 real 0m1.482s
00:11:22.777 user 0m1.017s
00:11:22.777 sys 0m0.468s
00:11:22.777 20:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:22.777 20:49:13 -- common/autotest_common.sh@10 -- # set +x
00:11:22.777 ************************************
00:11:22.777 END TEST nvme_overhead
00:11:22.777 ************************************
00:11:22.777 20:49:13 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:11:22.777 20:49:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:11:22.777 20:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:22.777 20:49:13 -- common/autotest_common.sh@10 -- # set +x
00:11:22.777 ************************************
00:11:22.777 START TEST nvme_arbitration
00:11:22.777 ************************************
00:11:22.777 20:49:13 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
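Only -t 3 (run time) and -i 0 (shared-memory ID) are passed here; the configuration line the tool echoes below shows the defaults it filled in. A sketch of the fully spelled-out equivalent, taking the echoed line at face value (-c 0xf is a core mask selecting cores 0 through 3):

    # Sketch: the same run with every default made explicit.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/arbitration" -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0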
00:11:23.344 EAL: TSC is not safe to use in SMP mode
00:11:23.344 EAL: TSC is not invariant
00:11:23.344 [2024-04-16 20:49:14.247158] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:27.528 Initializing NVMe Controllers
00:11:27.528 Attaching to 0000:00:06.0
00:11:27.528 Attached to 0000:00:06.0
00:11:27.528 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:11:27.528 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:11:27.528 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:11:27.528 Associating QEMU NVMe Ctrl (12340 ) with lcore 3
00:11:27.528 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:11:27.528 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:11:27.528 Initialization complete. Launching workers.
00:11:27.528 Starting thread on core 1 with urgent priority queue
00:11:27.528 Starting thread on core 2 with urgent priority queue
00:11:27.528 Starting thread on core 3 with urgent priority queue
00:11:27.528 Starting thread on core 0 with urgent priority queue
00:11:27.528 QEMU NVMe Ctrl (12340 ) core 0: 5908.00 IO/s 16.93 secs/100000 ios
00:11:27.528 QEMU NVMe Ctrl (12340 ) core 1: 5935.33 IO/s 16.85 secs/100000 ios
00:11:27.528 QEMU NVMe Ctrl (12340 ) core 2: 5929.33 IO/s 16.87 secs/100000 ios
00:11:27.528 QEMU NVMe Ctrl (12340 ) core 3: 5923.33 IO/s 16.88 secs/100000 ios
00:11:27.528 ========================================================
00:11:27.528
00:11:27.528 real 0m4.530s
00:11:27.528 user 0m13.089s
00:11:27.528 sys 0m0.478s
00:11:27.528 20:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:27.528 20:49:18 -- common/autotest_common.sh@10 -- # set +x
00:11:27.528 ************************************
00:11:27.528 END TEST nvme_arbitration
00:11:27.528 ************************************
00:11:27.528 20:49:18 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:11:27.528 20:49:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:11:27.528 20:49:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:27.528 20:49:18 -- common/autotest_common.sh@10 -- # set +x
00:11:27.528 ************************************
00:11:27.528 START TEST nvme_single_aen
00:11:27.528 ************************************
00:11:27.529 20:49:18 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:11:27.529 [2024-04-16 20:49:18.397261] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization...
00:11:27.529 [2024-04-16 20:49:18.397465] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:11:27.789 EAL: TSC is not safe to use in SMP mode
00:11:27.789 EAL: TSC is not invariant
00:11:27.789 [2024-04-16 20:49:18.823604] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:11:27.789 [2024-04-16 20:49:18.831243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:11:27.789 Asynchronous Event Request test
00:11:27.789 Attaching to 0000:00:06.0
00:11:27.789 Attached to 0000:00:06.0
00:11:27.789 Reset controller to setup AER completions for this process
00:11:27.789 Registering asynchronous event callbacks...
00:11:27.789 Getting orig temperature thresholds of all controllers
00:11:27.789 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.789 Setting all controllers temperature threshold low to trigger AER
00:11:27.789 Waiting for all controllers temperature threshold to be set lower
00:11:27.789 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.789 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:11:27.789 Waiting for all controllers to trigger AER and reset threshold
00:11:27.789 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.789 Cleaning up...
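The aer tool walks the temperature-threshold AER path end to end: it reads each controller's original threshold (343 Kelvin), sets it below the current temperature (323 Kelvin) so the controller posts an asynchronous event, then restores it from the aer_cb callback. -T selects that threshold test and -L log enables the 'log' trace flag. A sketch for rerunning it with NVMe-level tracing instead, assuming the 'nvme' trace flag is compiled into this build:

    # Sketch: same test, different trace flag (assumed available in a debug build).
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    "$SPDK/test/nvme/aer/aer" -T -i 0 -L nvme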
00:11:27.789 00:11:27.789 real 0m0.494s 00:11:27.789 user 0m0.018s 00:11:27.789 sys 0m0.475s 00:11:27.789 20:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.789 20:49:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.789 ************************************ 00:11:27.789 END TEST nvme_single_aen 00:11:27.789 ************************************ 00:11:28.049 20:49:18 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:28.049 20:49:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:28.049 20:49:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.049 20:49:18 -- common/autotest_common.sh@10 -- # set +x 00:11:28.049 ************************************ 00:11:28.049 START TEST nvme_doorbell_aers 00:11:28.049 ************************************ 00:11:28.049 20:49:18 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:28.049 20:49:18 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:28.049 20:49:18 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:28.049 20:49:18 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:28.049 20:49:18 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:28.049 20:49:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:28.049 20:49:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:28.049 20:49:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:28.049 20:49:18 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:28.049 20:49:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:28.049 20:49:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:28.049 20:49:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:28.049 20:49:19 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:28.049 20:49:19 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:28.309 EAL: TSC is not safe to use in SMP mode 00:11:28.309 EAL: TSC is not invariant 00:11:28.568 [2024-04-16 20:49:19.442267] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:28.568 Executing: test_write_invalid_db 00:11:28.568 Waiting for AER completion... 00:11:28.568 Asynchronous Event received. 00:11:28.568 Error Information Log Page received. 00:11:28.568 Success: test_write_invalid_db 00:11:28.568 00:11:28.568 Executing: test_invalid_db_write_overflow_sq 00:11:28.568 Waiting for AER completion... 00:11:28.568 Asynchronous Event received. 00:11:28.568 Error Information Log Page received. 00:11:28.568 Success: test_invalid_db_write_overflow_sq 00:11:28.568 00:11:28.568 Executing: test_invalid_db_write_overflow_cq 00:11:28.568 Waiting for AER completion... 00:11:28.568 Asynchronous Event received. 00:11:28.568 Error Information Log Page received. 
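The get_nvme_bdfs helper traced just above assembles its BDF list by piping gen_nvme.sh's JSON config through jq; a self-contained sketch of the same pattern, with the repo path and the expected 0000:00:06.0 result both taken from this log:

    # Collect NVMe PCI addresses (BDFs) from the generated bdev config,
    # bailing out if none are found, as the traced helper does.
    rootdir=/usr/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && exit 1
    printf '%s\n' "${bdfs[@]}"   # prints 0000:00:06.0 on this machine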
00:11:28.568 Success: test_invalid_db_write_overflow_cq 00:11:28.568 00:11:28.568 00:11:28.568 real 0m0.564s 00:11:28.568 user 0m0.046s 00:11:28.568 sys 0m0.542s 00:11:28.568 20:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.568 20:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.568 ************************************ 00:11:28.568 END TEST nvme_doorbell_aers 00:11:28.568 ************************************ 00:11:28.568 20:49:19 -- nvme/nvme.sh@97 -- # uname 00:11:28.568 20:49:19 -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:11:28.568 20:49:19 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:28.568 20:49:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:28.568 20:49:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.568 20:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.568 ************************************ 00:11:28.568 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:28.568 ************************************ 00:11:28.568 20:49:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:28.829 * Looking for test storage... 00:11:28.829 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:28.829 20:49:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:28.829 20:49:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:28.829 20:49:19 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:28.829 20:49:19 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:28.829 20:49:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:28.829 20:49:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:28.829 20:49:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:28.829 20:49:19 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:28.829 20:49:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:28.829 20:49:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:28.829 20:49:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:28.829 20:49:19 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=54590 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:28.829 20:49:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 54590 00:11:28.829 20:49:19 -- 
common/autotest_common.sh@819 -- # '[' -z 54590 ']' 00:11:28.829 20:49:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.829 20:49:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:28.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.829 20:49:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.829 20:49:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:28.829 20:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.829 [2024-04-16 20:49:19.820324] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:28.829 [2024-04-16 20:49:19.820617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:29.398 EAL: TSC is not safe to use in SMP mode 00:11:29.398 EAL: TSC is not invariant 00:11:29.398 [2024-04-16 20:49:20.258889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.398 [2024-04-16 20:49:20.352253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.398 [2024-04-16 20:49:20.352487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.398 [2024-04-16 20:49:20.352773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.398 [2024-04-16 20:49:20.352630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.398 [2024-04-16 20:49:20.352776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.656 20:49:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:29.656 20:49:20 -- common/autotest_common.sh@852 -- # return 0 00:11:29.656 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:29.656 20:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.656 20:49:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.656 [2024-04-16 20:49:20.741159] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:29.915 nvme0n1 00:11:29.915 20:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:29.915 20:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.915 20:49:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.915 true 00:11:29.915 20:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713300560 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=54598 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 
'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:29.915 20:49:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:31.821 20:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.821 20:49:22 -- common/autotest_common.sh@10 -- # set +x 00:11:31.821 [2024-04-16 20:49:22.878605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:31.821 [2024-04-16 20:49:22.880582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:31.821 [2024-04-16 20:49:22.880621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:31.821 [2024-04-16 20:49:22.880630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.821 [2024-04-16 20:49:22.881357] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:31.821 20:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.821 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 54598 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 54598 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 54598 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:31.821 20:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.821 20:49:22 -- common/autotest_common.sh@10 -- # set +x 00:11:31.821 20:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.oWQ80W 00:11:31.821 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # 
hexdump -ve '/1 "0x%02x\n"' 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.ifRAHe 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:11:32.081 20:49:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 54590 00:11:32.081 20:49:22 -- common/autotest_common.sh@926 -- # '[' -z 54590 ']' 00:11:32.081 20:49:22 -- common/autotest_common.sh@930 -- # kill -0 54590 00:11:32.081 20:49:22 -- common/autotest_common.sh@931 -- # uname 00:11:32.081 20:49:22 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:32.081 20:49:22 -- common/autotest_common.sh@934 -- # ps -c -o command 54590 00:11:32.081 20:49:22 -- common/autotest_common.sh@934 -- # tail -1 00:11:32.081 20:49:22 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:32.081 20:49:22 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:32.081 killing process with pid 54590 00:11:32.081 20:49:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54590' 00:11:32.081 20:49:22 -- common/autotest_common.sh@945 -- # kill 54590 00:11:32.081 20:49:22 -- common/autotest_common.sh@950 -- # wait 54590 00:11:32.081 20:49:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:32.081 20:49:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:32.081 00:11:32.081 real 0m3.636s 00:11:32.081 user 0m11.823s 00:11:32.081 sys 0m0.839s 00:11:32.081 20:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.081 20:49:23 -- common/autotest_common.sh@10 -- # set +x 00:11:32.081 ************************************ 00:11:32.081 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:32.081 ************************************ 00:11:32.340 20:49:23 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:32.340 20:49:23 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:32.340 20:49:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.340 20:49:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.340 20:49:23 -- common/autotest_common.sh@10 -- # set +x 00:11:32.340 ************************************ 00:11:32.340 START TEST nvme_fio 00:11:32.340 ************************************ 00:11:32.340 20:49:23 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:11:32.340 20:49:23 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:32.340 20:49:23 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:32.340 20:49:23 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:32.340 20:49:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:32.340 20:49:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:32.340 20:49:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:32.340 20:49:23 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:32.340 20:49:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:32.340 20:49:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:32.340 20:49:23 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:32.340 20:49:23 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:11:32.340 20:49:23 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:32.340 20:49:23 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:32.340 20:49:23 -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:32.340 20:49:23 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:32.909 EAL: TSC is not safe to use in SMP mode 00:11:32.909 EAL: TSC is not invariant 00:11:32.909 [2024-04-16 20:49:23.737557] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:32.909 20:49:23 -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:32.909 20:49:23 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:33.169 EAL: TSC is not safe to use in SMP mode 00:11:33.169 EAL: TSC is not invariant 00:11:33.169 [2024-04-16 20:49:24.212465] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:33.169 20:49:24 -- nvme/nvme.sh@41 -- # bs=4096 00:11:33.169 20:49:24 -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:33.169 20:49:24 -- common/autotest_common.sh@1339 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:33.169 20:49:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:33.169 20:49:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:33.169 20:49:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:33.169 20:49:24 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:33.169 20:49:24 -- common/autotest_common.sh@1320 -- # shift 00:11:33.169 20:49:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:33.169 20:49:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:33.169 20:49:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:33.169 20:49:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:33.169 20:49:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:33.169 20:49:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:33.169 20:49:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:33.169 20:49:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:33.436 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:33.436 fio-3.35 00:11:33.436 Starting 1 thread 00:11:33.701 EAL: TSC is not safe to use in SMP mode 00:11:33.701 EAL: TSC is not invariant 00:11:33.701 [2024-04-16 20:49:24.816072] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:40.266 00:11:40.266 test: (groupid=0, jobs=1): err= 0: pid=102826: Tue Apr 16 20:49:30 2024 00:11:40.266 read: IOPS=57.7k, BW=225MiB/s (236MB/s)(451MiB/2001msec) 00:11:40.266 slat (nsec): min=430, max=15847, avg=493.78, stdev=159.41 00:11:40.267 clat (usec): min=243, max=4939, avg=1108.50, stdev=184.34 00:11:40.267 lat (usec): min=244, max=4955, avg=1108.99, stdev=184.40 00:11:40.267 clat percentiles (usec): 00:11:40.267 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 938], 20.00th=[ 979], 00:11:40.267 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1139], 00:11:40.267 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:11:40.267 | 99.00th=[ 1631], 99.50th=[ 2008], 99.90th=[ 3130], 99.95th=[ 3949], 00:11:40.267 | 99.99th=[ 4752] 00:11:40.267 bw ( KiB/s): min=221655, max=233412, per=99.31%, avg=229208.67, stdev=6555.56, samples=3 00:11:40.267 iops : min=55413, max=58353, avg=57301.67, stdev=1639.13, samples=3 00:11:40.267 write: IOPS=57.6k, BW=225MiB/s (236MB/s)(450MiB/2001msec); 0 zone resets 00:11:40.267 slat (nsec): min=491, max=20686, avg=917.12, stdev=311.46 00:11:40.267 clat (usec): min=265, max=4908, avg=1108.75, stdev=187.77 00:11:40.267 lat (usec): min=268, max=4913, avg=1109.67, stdev=187.85 00:11:40.267 clat percentiles (usec): 00:11:40.267 | 1.00th=[ 857], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 979], 00:11:40.267 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1139], 00:11:40.267 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:11:40.267 | 99.00th=[ 1680], 99.50th=[ 2040], 99.90th=[ 3195], 99.95th=[ 4047], 00:11:40.267 | 99.99th=[ 4752] 00:11:40.267 bw ( KiB/s): min=222143, max=232952, per=99.19%, avg=228480.33, stdev=5640.85, samples=3 00:11:40.267 iops : min=55535, max=58238, avg=57119.67, stdev=1410.55, samples=3 00:11:40.267 lat (usec) : 250=0.01%, 500=0.12%, 750=0.32%, 1000=24.83% 00:11:40.267 lat (msec) : 2=74.22%, 4=0.47%, 10=0.05% 00:11:40.267 cpu : usr=100.05%, sys=0.00%, ctx=23, majf=0, minf=3 00:11:40.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:40.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.267 issued rwts: total=115453,115232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.267 00:11:40.267 Run status group 0 (all jobs): 00:11:40.267 READ: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=451MiB (473MB), run=2001-2001msec 00:11:40.267 WRITE: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=450MiB (472MB), run=2001-2001msec 00:11:40.267 20:49:31 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:40.267 20:49:31 -- nvme/nvme.sh@46 -- # true 00:11:40.267 00:11:40.267 real 0m7.976s 00:11:40.267 user 0m2.253s 00:11:40.267 sys 0m5.655s 00:11:40.267 20:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.267 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.267 ************************************ 00:11:40.267 END TEST nvme_fio 00:11:40.267 
************************************ 00:11:40.267 00:11:40.267 real 0m27.828s 00:11:40.267 user 0m32.041s 00:11:40.267 sys 0m13.823s 00:11:40.267 20:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.267 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.267 ************************************ 00:11:40.267 END TEST nvme 00:11:40.267 ************************************ 00:11:40.267 20:49:31 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:40.267 20:49:31 -- spdk/autotest.sh@227 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:40.267 20:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:40.267 20:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.267 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.267 ************************************ 00:11:40.267 START TEST nvme_scc 00:11:40.267 ************************************ 00:11:40.267 20:49:31 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:40.526 * Looking for test storage... 00:11:40.526 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.526 20:49:31 -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:40.526 20:49:31 -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:40.526 20:49:31 -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:40.526 20:49:31 -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:11:40.526 20:49:31 -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.526 20:49:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.526 20:49:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.526 20:49:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.526 20:49:31 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.526 20:49:31 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.526 20:49:31 -- paths/export.sh@4 -- # export PATH 00:11:40.526 20:49:31 -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:40.526 20:49:31 -- nvme/functions.sh@10 -- # ctrls=() 00:11:40.526 20:49:31 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:40.526 20:49:31 -- nvme/functions.sh@11 -- # nvmes=() 00:11:40.526 20:49:31 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:40.526 20:49:31 -- nvme/functions.sh@12 -- # bdfs=() 00:11:40.526 20:49:31 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:40.526 20:49:31 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:40.526 20:49:31 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:40.526 20:49:31 -- nvme/functions.sh@14 -- # nvme_name= 00:11:40.526 20:49:31 -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:40.526 
20:49:31 -- nvme/nvme_scc.sh@12 -- # uname 00:11:40.526 20:49:31 -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:11:40.526 20:49:31 -- nvme/nvme_scc.sh@12 -- # exit 0 00:11:40.526 00:11:40.526 real 0m0.215s 00:11:40.526 user 0m0.145s 00:11:40.526 sys 0m0.144s 00:11:40.526 20:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.526 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.526 ************************************ 00:11:40.526 END TEST nvme_scc 00:11:40.526 ************************************ 00:11:40.526 20:49:31 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:11:40.526 20:49:31 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:11:40.526 20:49:31 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:11:40.526 20:49:31 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:11:40.526 20:49:31 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:11:40.526 20:49:31 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:40.526 20:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:40.526 20:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.526 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.526 ************************************ 00:11:40.526 START TEST nvme_rpc 00:11:40.526 ************************************ 00:11:40.526 20:49:31 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:40.784 * Looking for test storage... 00:11:40.784 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:40.784 20:49:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:40.784 20:49:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:40.784 20:49:31 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:40.784 20:49:31 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:40.784 20:49:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:40.784 20:49:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:40.784 20:49:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:40.784 20:49:31 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:40.784 20:49:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:40.784 20:49:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:40.784 20:49:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:40.784 20:49:31 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=54806 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:40.784 20:49:31 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 54806 00:11:40.784 20:49:31 -- common/autotest_common.sh@819 -- # '[' -z 54806 ']' 00:11:40.784 20:49:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.784 20:49:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:40.784 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:40.784 20:49:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.784 20:49:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:40.784 20:49:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.784 [2024-04-16 20:49:31.830867] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:40.784 [2024-04-16 20:49:31.831143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:41.351 EAL: TSC is not safe to use in SMP mode 00:11:41.351 EAL: TSC is not invariant 00:11:41.351 [2024-04-16 20:49:32.280849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.351 [2024-04-16 20:49:32.363174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.351 [2024-04-16 20:49:32.363423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.351 [2024-04-16 20:49:32.363426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.918 20:49:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:41.918 20:49:32 -- common/autotest_common.sh@852 -- # return 0 00:11:41.918 20:49:32 -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:41.918 [2024-04-16 20:49:32.931684] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:41.918 Nvme0n1 00:11:41.918 20:49:33 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:41.918 20:49:33 -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:42.177 request: 00:11:42.177 { 00:11:42.177 "filename": "non_existing_file", 00:11:42.177 "bdev_name": "Nvme0n1", 00:11:42.177 "method": "bdev_nvme_apply_firmware", 00:11:42.177 "req_id": 1 00:11:42.177 } 00:11:42.177 Got JSON-RPC error response 00:11:42.177 response: 00:11:42.177 { 00:11:42.177 "code": -32603, 00:11:42.177 "message": "open file failed." 
00:11:42.177 } 00:11:42.177 20:49:33 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:42.177 20:49:33 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:42.177 20:49:33 -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:42.442 20:49:33 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:42.442 20:49:33 -- nvme/nvme_rpc.sh@40 -- # killprocess 54806 00:11:42.442 20:49:33 -- common/autotest_common.sh@926 -- # '[' -z 54806 ']' 00:11:42.442 20:49:33 -- common/autotest_common.sh@930 -- # kill -0 54806 00:11:42.442 20:49:33 -- common/autotest_common.sh@931 -- # uname 00:11:42.442 20:49:33 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:42.442 20:49:33 -- common/autotest_common.sh@934 -- # ps -c -o command 54806 00:11:42.442 20:49:33 -- common/autotest_common.sh@934 -- # tail -1 00:11:42.442 20:49:33 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:42.442 20:49:33 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:42.442 killing process with pid 54806 00:11:42.442 20:49:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54806' 00:11:42.442 20:49:33 -- common/autotest_common.sh@945 -- # kill 54806 00:11:42.442 20:49:33 -- common/autotest_common.sh@950 -- # wait 54806 00:11:42.713 00:11:42.713 real 0m2.077s 00:11:42.713 user 0m3.546s 00:11:42.713 sys 0m0.827s 00:11:42.713 20:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.713 20:49:33 -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 ************************************ 00:11:42.713 END TEST nvme_rpc 00:11:42.713 ************************************ 00:11:42.713 20:49:33 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:42.713 20:49:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:42.713 20:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.713 20:49:33 -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 ************************************ 00:11:42.713 START TEST nvme_rpc_timeouts 00:11:42.713 ************************************ 00:11:42.713 20:49:33 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:42.972 * Looking for test storage... 
00:11:42.972 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_54835 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_54835 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=54862 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:42.972 20:49:33 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 54862 00:11:42.972 20:49:33 -- common/autotest_common.sh@819 -- # '[' -z 54862 ']' 00:11:42.972 20:49:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.972 20:49:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:42.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.972 20:49:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.972 20:49:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:42.972 20:49:33 -- common/autotest_common.sh@10 -- # set +x 00:11:42.972 [2024-04-16 20:49:33.892349] Starting SPDK v24.01.1-pre git sha1 4b134b4ab / DPDK 23.11.0 initialization... 00:11:42.972 [2024-04-16 20:49:33.892805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:43.538 EAL: TSC is not safe to use in SMP mode 00:11:43.538 EAL: TSC is not invariant 00:11:43.538 [2024-04-16 20:49:34.372741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.538 [2024-04-16 20:49:34.466342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.538 [2024-04-16 20:49:34.466511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.538 [2024-04-16 20:49:34.466501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.796 20:49:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:43.796 20:49:34 -- common/autotest_common.sh@852 -- # return 0 00:11:43.796 Checking default timeout settings: 00:11:43.796 20:49:34 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:43.796 20:49:34 -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:44.054 Making settings changes with rpc: 00:11:44.054 20:49:35 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:44.054 20:49:35 -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:44.312 Check default vs. modified settings: 00:11:44.312 20:49:35 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:44.312 20:49:35 -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:44.570 Setting action_on_timeout is changed as expected. 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:44.570 Setting timeout_us is changed as expected. 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:44.570 Setting timeout_admin_us is changed as expected. 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
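Each of the three checks above follows one pattern: read the same key out of the saved default and modified configs, strip punctuation, and require that the value changed. A condensed sketch of that pipeline, reusing the temp-file names this test created earlier:

    # Verify one timeout setting differs between the two configs saved
    # with "rpc.py save_config" before and after bdev_nvme_set_options.
    key=timeout_us
    before=$(grep "$key" /tmp/settings_default_54835 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$key" /tmp/settings_modified_54835 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "Setting $key was not changed" >&2
        exit 1
    fi
    echo "Setting $key is changed as expected."   # here: 0 -> 12000000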
00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_54835 /tmp/settings_modified_54835 00:11:44.570 20:49:35 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 54862 00:11:44.570 20:49:35 -- common/autotest_common.sh@926 -- # '[' -z 54862 ']' 00:11:44.570 20:49:35 -- common/autotest_common.sh@930 -- # kill -0 54862 00:11:44.570 20:49:35 -- common/autotest_common.sh@931 -- # uname 00:11:44.571 20:49:35 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:44.571 20:49:35 -- common/autotest_common.sh@934 -- # ps -c -o command 54862 00:11:44.571 20:49:35 -- common/autotest_common.sh@934 -- # tail -1 00:11:44.829 20:49:35 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:44.829 20:49:35 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:44.829 killing process with pid 54862 00:11:44.829 20:49:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54862' 00:11:44.829 20:49:35 -- common/autotest_common.sh@945 -- # kill 54862 00:11:44.829 20:49:35 -- common/autotest_common.sh@950 -- # wait 54862 00:11:44.829 RPC TIMEOUT SETTING TEST PASSED. 00:11:44.829 20:49:35 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:44.829 00:11:44.829 real 0m2.223s 00:11:44.829 user 0m4.040s 00:11:44.829 sys 0m0.809s 00:11:44.829 20:49:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.829 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.829 ************************************ 00:11:44.829 END TEST nvme_rpc_timeouts 00:11:44.829 ************************************ 00:11:44.829 20:49:35 -- spdk/autotest.sh@251 -- # '[' 0 -eq 0 ']' 00:11:44.829 20:49:35 -- spdk/autotest.sh@251 -- # uname -s 00:11:45.086 20:49:35 -- spdk/autotest.sh@251 -- # '[' FreeBSD = Linux ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:11:45.086 20:49:35 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@268 -- # timing_exit lib 00:11:45.086 20:49:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:45.086 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:11:45.086 20:49:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:11:45.086 20:49:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:11:45.086 20:49:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:11:45.086 20:49:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:11:45.086 20:49:36 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:11:45.086 20:49:36 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:11:45.086 20:49:36 -- 
spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:11:45.086 20:49:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:45.087 20:49:36 -- common/autotest_common.sh@10 -- # set +x 00:11:45.087 20:49:36 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:11:45.087 20:49:36 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:11:45.087 20:49:36 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:11:45.087 20:49:36 -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 setup.sh cleanup function not yet supported on FreeBSD 00:11:45.653 20:49:36 -- common/autotest_common.sh@1436 -- # return 0 00:11:45.653 20:49:36 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:11:45.653 20:49:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:45.653 20:49:36 -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 20:49:36 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:11:45.653 20:49:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:45.653 20:49:36 -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 20:49:36 -- spdk/autotest.sh@390 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:11:45.653 20:49:36 -- spdk/autotest.sh@392 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:11:45.653 20:49:36 -- spdk/autotest.sh@394 -- # hash lcov 00:11:45.653 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 394: hash: lcov: not found 00:11:45.912 20:49:36 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.913 20:49:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:45.913 20:49:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.913 20:49:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.913 20:49:36 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:45.913 20:49:36 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:45.913 20:49:36 -- paths/export.sh@4 -- $ export PATH 00:11:45.913 20:49:36 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:11:45.913 20:49:36 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:45.913 20:49:36 -- common/autobuild_common.sh@435 -- $ date +%s 00:11:45.913 20:49:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713300576.XXXXXX 00:11:45.913 20:49:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713300576.XXXXXX.9eWr7QzU 00:11:45.913 20:49:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:11:45.913 20:49:36 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:11:45.913 20:49:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:11:45.913 20:49:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:45.913 20:49:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude 
/usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:45.913 20:49:36 -- common/autobuild_common.sh@451 -- $ get_config_params 00:11:45.913 20:49:36 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:11:45.913 20:49:36 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.913 20:49:37 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:11:45.913 20:49:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:11:45.913 20:49:37 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:11:45.913 20:49:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:11:45.913 20:49:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:11:45.913 20:49:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:11:45.913 20:49:37 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:11:45.913 20:49:37 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:11:45.913 20:49:37 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.913 20:49:37 -- spdk/autopackage.sh@26 -- $ [[ /usr/bin/clang == *clang* ]] 00:11:45.913 20:49:37 -- spdk/autopackage.sh@27 -- $ nproc 00:11:45.913 20:49:37 -- spdk/autopackage.sh@27 -- $ jobs=5 00:11:45.913 20:49:37 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:11:45.913 20:49:37 -- spdk/autopackage.sh@28 -- $ uname -s 00:11:45.913 20:49:37 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:11:45.913 20:49:37 -- spdk/autopackage.sh@32 -- $ export LD=ld.lld 00:11:45.913 20:49:37 -- spdk/autopackage.sh@32 -- $ LD=ld.lld 00:11:45.913 20:49:37 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:11:45.913 20:49:37 -- spdk/autopackage.sh@40 -- $ get_config_params 00:11:45.913 20:49:37 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:11:45.913 20:49:37 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:11:45.913 20:49:37 -- common/autotest_common.sh@10 -- $ set +x 00:11:46.172 20:49:37 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:11:46.172 20:49:37 -- spdk/autopackage.sh@41 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-lto 00:11:46.172 Notice: Vhost, rte_vhost library, virtio, and fuse 00:11:46.172 are only supported on Linux. Turning off default feature. 00:11:46.172 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:46.172 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:46.432 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:11:46.432 Using 'verbs' RDMA provider 00:11:56.681 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:12:06.671 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:12:06.671 Creating mk/config.mk...done. 00:12:06.671 Creating mk/cc.flags.mk...done. 00:12:06.671 Type 'gmake' to build. 00:12:06.671 20:49:56 -- spdk/autopackage.sh@43 -- $ gmake -j10 00:12:06.671 gmake[1]: Nothing to be done for 'all'. 
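Condensed, the release-build sequence this stage just ran is the following; arguments are verbatim from the xtrace above, where --enable-debug is stripped from the config params and LTO is added for the clang toolchain:

    # The autopackage release build, as performed above on FreeBSD/clang.
    cd /usr/home/vagrant/spdk_repo/spdk
    export LD=ld.lld MAKEFLAGS=-j10
    ./configure --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --enable-lto
    gmake -j10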
00:12:06.671 ps: stdin: not a terminal 00:12:11.946 The Meson build system 00:12:11.946 Version: 1.3.1 00:12:11.946 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:12:11.946 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:11.946 Build type: native build 00:12:11.946 Program cat found: YES (/bin/cat) 00:12:11.946 Project name: DPDK 00:12:11.946 Project version: 23.11.0 00:12:11.946 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:12:11.946 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:12:11.946 Host machine cpu family: x86_64 00:12:11.946 Host machine cpu: x86_64 00:12:11.946 Message: ## Building in Developer Mode ## 00:12:11.946 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:12:11.946 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:11.946 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:11.946 Program python3 found: YES (/usr/local/bin/python3.9) 00:12:11.946 Program cat found: YES (/bin/cat) 00:12:11.946 Compiler for C supports arguments -march=native: YES 00:12:11.946 Checking for size of "void *" : 8 00:12:11.946 Checking for size of "void *" : 8 (cached) 00:12:11.946 Library m found: YES 00:12:11.946 Library numa found: NO 00:12:11.946 Library fdt found: NO 00:12:11.946 Library execinfo found: YES 00:12:11.946 Has header "execinfo.h" : YES 00:12:11.946 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:12:11.946 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:11.946 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:11.946 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:11.946 Run-time dependency openssl found: YES 3.0.13 00:12:11.946 Run-time dependency libpcap found: NO (tried pkgconfig) 00:12:11.946 Library pcap found: YES 00:12:11.946 Has header "pcap.h" with dependency -lpcap: YES 00:12:11.946 Compiler for C supports arguments -Wcast-qual: YES 00:12:11.946 Compiler for C supports arguments -Wdeprecated: YES 00:12:11.946 Compiler for C supports arguments -Wformat: YES 00:12:11.946 Compiler for C supports arguments -Wformat-nonliteral: YES 00:12:11.946 Compiler for C supports arguments -Wformat-security: YES 00:12:11.946 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:11.946 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:11.946 Compiler for C supports arguments -Wnested-externs: YES 00:12:11.946 Compiler for C supports arguments -Wold-style-definition: YES 00:12:11.946 Compiler for C supports arguments -Wpointer-arith: YES 00:12:11.946 Compiler for C supports arguments -Wsign-compare: YES 00:12:11.946 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:11.946 Compiler for C supports arguments -Wundef: YES 00:12:11.946 Compiler for C supports arguments -Wwrite-strings: YES 00:12:11.946 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:11.946 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:12:11.946 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:11.946 Compiler for C supports arguments -mavx512f: YES 00:12:11.946 Checking if "AVX512 checking" compiles: YES 00:12:11.946 Fetching value of define "__SSE4_2__" : 1 00:12:11.946 Fetching value of define "__AES__" : 1 00:12:11.946 Fetching value of define 
"__AVX__" : 1 00:12:11.946 Fetching value of define "__AVX2__" : 1 00:12:11.946 Fetching value of define "__AVX512BW__" : 1 00:12:11.946 Fetching value of define "__AVX512CD__" : 1 00:12:11.946 Fetching value of define "__AVX512DQ__" : 1 00:12:11.946 Fetching value of define "__AVX512F__" : 1 00:12:11.946 Fetching value of define "__AVX512VL__" : 1 00:12:11.946 Fetching value of define "__PCLMUL__" : 1 00:12:11.946 Fetching value of define "__RDRND__" : 1 00:12:11.946 Fetching value of define "__RDSEED__" : 1 00:12:11.946 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:11.946 Fetching value of define "__znver1__" : (undefined) 00:12:11.946 Fetching value of define "__znver2__" : (undefined) 00:12:11.946 Fetching value of define "__znver3__" : (undefined) 00:12:11.946 Fetching value of define "__znver4__" : (undefined) 00:12:11.946 Compiler for C supports arguments -Wno-format-truncation: NO 00:12:11.946 Message: lib/log: Defining dependency "log" 00:12:11.946 Message: lib/kvargs: Defining dependency "kvargs" 00:12:11.946 Message: lib/telemetry: Defining dependency "telemetry" 00:12:11.946 Checking if "Detect argument count for CPU_OR" compiles: YES 00:12:11.946 Checking for function "getentropy" : YES 00:12:11.946 Message: lib/eal: Defining dependency "eal" 00:12:11.946 Message: lib/ring: Defining dependency "ring" 00:12:11.946 Message: lib/rcu: Defining dependency "rcu" 00:12:11.946 Message: lib/mempool: Defining dependency "mempool" 00:12:11.946 Message: lib/mbuf: Defining dependency "mbuf" 00:12:11.946 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:11.946 Fetching value of define "__AVX512F__" : 1 (cached) 00:12:11.946 Fetching value of define "__AVX512BW__" : 1 (cached) 00:12:11.946 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:12:11.946 Fetching value of define "__AVX512VL__" : 1 (cached) 00:12:11.946 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:12:11.946 Compiler for C supports arguments -mpclmul: YES 00:12:11.946 Compiler for C supports arguments -maes: YES 00:12:11.946 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:11.946 Compiler for C supports arguments -mavx512bw: YES 00:12:11.946 Compiler for C supports arguments -mavx512dq: YES 00:12:11.946 Compiler for C supports arguments -mavx512vl: YES 00:12:11.946 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:11.946 Compiler for C supports arguments -mavx2: YES 00:12:11.946 Compiler for C supports arguments -mavx: YES 00:12:11.946 Message: lib/net: Defining dependency "net" 00:12:11.946 Message: lib/meter: Defining dependency "meter" 00:12:11.946 Message: lib/ethdev: Defining dependency "ethdev" 00:12:11.946 Message: lib/pci: Defining dependency "pci" 00:12:11.946 Message: lib/cmdline: Defining dependency "cmdline" 00:12:11.946 Message: lib/hash: Defining dependency "hash" 00:12:11.946 Message: lib/timer: Defining dependency "timer" 00:12:11.946 Message: lib/compressdev: Defining dependency "compressdev" 00:12:11.946 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:11.946 Message: lib/dmadev: Defining dependency "dmadev" 00:12:11.946 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:11.946 Message: lib/reorder: Defining dependency "reorder" 00:12:11.946 Message: lib/security: Defining dependency "security" 00:12:11.946 Has header "linux/userfaultfd.h" : NO 00:12:11.946 Has header "linux/vduse.h" : NO 00:12:11.946 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:12:11.946 Message: drivers/bus/pci: Defining 
dependency "bus_pci" 00:12:11.946 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:11.946 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:11.946 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:11.946 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:11.946 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:11.946 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:12:11.946 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:11.946 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:11.946 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:11.946 Program doxygen found: YES (/usr/local/bin/doxygen) 00:12:11.946 Configuring doxy-api-html.conf using configuration 00:12:11.946 Configuring doxy-api-man.conf using configuration 00:12:11.946 Program mandb found: NO 00:12:11.946 Program sphinx-build found: NO 00:12:11.946 Configuring rte_build_config.h using configuration 00:12:11.946 Message: 00:12:11.946 ================= 00:12:11.946 Applications Enabled 00:12:11.946 ================= 00:12:11.946 00:12:11.946 apps: 00:12:11.946 00:12:11.946 00:12:11.946 Message: 00:12:11.946 ================= 00:12:11.946 Libraries Enabled 00:12:11.946 ================= 00:12:11.946 00:12:11.946 libs: 00:12:11.946 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:11.946 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:11.946 cryptodev, dmadev, reorder, security, 00:12:11.946 00:12:11.947 Message: 00:12:11.947 =============== 00:12:11.947 Drivers Enabled 00:12:11.947 =============== 00:12:11.947 00:12:11.947 common: 00:12:11.947 00:12:11.947 bus: 00:12:11.947 pci, vdev, 00:12:11.947 mempool: 00:12:11.947 ring, 00:12:11.947 dma: 00:12:11.947 00:12:11.947 net: 00:12:11.947 00:12:11.947 crypto: 00:12:11.947 00:12:11.947 compress: 00:12:11.947 00:12:11.947 00:12:11.947 Message: 00:12:11.947 ================= 00:12:11.947 Content Skipped 00:12:11.947 ================= 00:12:11.947 00:12:11.947 apps: 00:12:11.947 dumpcap: explicitly disabled via build config 00:12:11.947 graph: explicitly disabled via build config 00:12:11.947 pdump: explicitly disabled via build config 00:12:11.947 proc-info: explicitly disabled via build config 00:12:11.947 test-acl: explicitly disabled via build config 00:12:11.947 test-bbdev: explicitly disabled via build config 00:12:11.947 test-cmdline: explicitly disabled via build config 00:12:11.947 test-compress-perf: explicitly disabled via build config 00:12:11.947 test-crypto-perf: explicitly disabled via build config 00:12:11.947 test-dma-perf: explicitly disabled via build config 00:12:11.947 test-eventdev: explicitly disabled via build config 00:12:11.947 test-fib: explicitly disabled via build config 00:12:11.947 test-flow-perf: explicitly disabled via build config 00:12:11.947 test-gpudev: explicitly disabled via build config 00:12:11.947 test-mldev: explicitly disabled via build config 00:12:11.947 test-pipeline: explicitly disabled via build config 00:12:11.947 test-pmd: explicitly disabled via build config 00:12:11.947 test-regex: explicitly disabled via build config 00:12:11.947 test-sad: explicitly disabled via build config 00:12:11.947 test-security-perf: explicitly disabled via build config 00:12:11.947 00:12:11.947 libs: 00:12:11.947 metrics: explicitly disabled via build config 00:12:11.947 acl: explicitly disabled via 
build config 00:12:11.947 bbdev: explicitly disabled via build config 00:12:11.947 bitratestats: explicitly disabled via build config 00:12:11.947 bpf: explicitly disabled via build config 00:12:11.947 cfgfile: explicitly disabled via build config 00:12:11.947 distributor: explicitly disabled via build config 00:12:11.947 efd: explicitly disabled via build config 00:12:11.947 eventdev: explicitly disabled via build config 00:12:11.947 dispatcher: explicitly disabled via build config 00:12:11.947 gpudev: explicitly disabled via build config 00:12:11.947 gro: explicitly disabled via build config 00:12:11.947 gso: explicitly disabled via build config 00:12:11.947 ip_frag: explicitly disabled via build config 00:12:11.947 jobstats: explicitly disabled via build config 00:12:11.947 latencystats: explicitly disabled via build config 00:12:11.947 lpm: explicitly disabled via build config 00:12:11.947 member: explicitly disabled via build config 00:12:11.947 pcapng: explicitly disabled via build config 00:12:11.947 power: only supported on Linux 00:12:11.947 rawdev: explicitly disabled via build config 00:12:11.947 regexdev: explicitly disabled via build config 00:12:11.947 mldev: explicitly disabled via build config 00:12:11.947 rib: explicitly disabled via build config 00:12:11.947 sched: explicitly disabled via build config 00:12:11.947 stack: explicitly disabled via build config 00:12:11.947 vhost: only supported on Linux 00:12:11.947 ipsec: explicitly disabled via build config 00:12:11.947 pdcp: explicitly disabled via build config 00:12:11.947 fib: explicitly disabled via build config 00:12:11.947 port: explicitly disabled via build config 00:12:11.947 pdump: explicitly disabled via build config 00:12:11.947 table: explicitly disabled via build config 00:12:11.947 pipeline: explicitly disabled via build config 00:12:11.947 graph: explicitly disabled via build config 00:12:11.947 node: explicitly disabled via build config 00:12:11.947 00:12:11.947 drivers: 00:12:11.947 common/cpt: not in enabled drivers build config 00:12:11.947 common/dpaax: not in enabled drivers build config 00:12:11.947 common/iavf: not in enabled drivers build config 00:12:11.947 common/idpf: not in enabled drivers build config 00:12:11.947 common/mvep: not in enabled drivers build config 00:12:11.947 common/octeontx: not in enabled drivers build config 00:12:11.947 bus/auxiliary: not in enabled drivers build config 00:12:11.947 bus/cdx: not in enabled drivers build config 00:12:11.947 bus/dpaa: not in enabled drivers build config 00:12:11.947 bus/fslmc: not in enabled drivers build config 00:12:11.947 bus/ifpga: not in enabled drivers build config 00:12:11.947 bus/platform: not in enabled drivers build config 00:12:11.947 bus/vmbus: not in enabled drivers build config 00:12:11.947 common/cnxk: not in enabled drivers build config 00:12:11.947 common/mlx5: not in enabled drivers build config 00:12:11.947 common/nfp: not in enabled drivers build config 00:12:11.947 common/qat: not in enabled drivers build config 00:12:11.947 common/sfc_efx: not in enabled drivers build config 00:12:11.947 mempool/bucket: not in enabled drivers build config 00:12:11.947 mempool/cnxk: not in enabled drivers build config 00:12:11.947 mempool/dpaa: not in enabled drivers build config 00:12:11.947 mempool/dpaa2: not in enabled drivers build config 00:12:11.947 mempool/octeontx: not in enabled drivers build config 00:12:11.947 mempool/stack: not in enabled drivers build config 00:12:11.947 dma/cnxk: not in enabled drivers build config 
00:12:11.947 dma/dpaa: not in enabled drivers build config 00:12:11.947 dma/dpaa2: not in enabled drivers build config 00:12:11.947 dma/hisilicon: not in enabled drivers build config 00:12:11.947 dma/idxd: not in enabled drivers build config 00:12:11.947 dma/ioat: not in enabled drivers build config 00:12:11.947 dma/skeleton: not in enabled drivers build config 00:12:11.947 net/af_packet: not in enabled drivers build config 00:12:11.947 net/af_xdp: not in enabled drivers build config 00:12:11.947 net/ark: not in enabled drivers build config 00:12:11.947 net/atlantic: not in enabled drivers build config 00:12:11.947 net/avp: not in enabled drivers build config 00:12:11.947 net/axgbe: not in enabled drivers build config 00:12:11.947 net/bnx2x: not in enabled drivers build config 00:12:11.947 net/bnxt: not in enabled drivers build config 00:12:11.947 net/bonding: not in enabled drivers build config 00:12:11.947 net/cnxk: not in enabled drivers build config 00:12:11.947 net/cpfl: not in enabled drivers build config 00:12:11.947 net/cxgbe: not in enabled drivers build config 00:12:11.947 net/dpaa: not in enabled drivers build config 00:12:11.947 net/dpaa2: not in enabled drivers build config 00:12:11.947 net/e1000: not in enabled drivers build config 00:12:11.947 net/ena: not in enabled drivers build config 00:12:11.947 net/enetc: not in enabled drivers build config 00:12:11.947 net/enetfec: not in enabled drivers build config 00:12:11.947 net/enic: not in enabled drivers build config 00:12:11.947 net/failsafe: not in enabled drivers build config 00:12:11.947 net/fm10k: not in enabled drivers build config 00:12:11.947 net/gve: not in enabled drivers build config 00:12:11.947 net/hinic: not in enabled drivers build config 00:12:11.947 net/hns3: not in enabled drivers build config 00:12:11.947 net/i40e: not in enabled drivers build config 00:12:11.947 net/iavf: not in enabled drivers build config 00:12:11.947 net/ice: not in enabled drivers build config 00:12:11.947 net/idpf: not in enabled drivers build config 00:12:11.947 net/igc: not in enabled drivers build config 00:12:11.947 net/ionic: not in enabled drivers build config 00:12:11.947 net/ipn3ke: not in enabled drivers build config 00:12:11.947 net/ixgbe: not in enabled drivers build config 00:12:11.947 net/mana: not in enabled drivers build config 00:12:11.947 net/memif: not in enabled drivers build config 00:12:11.947 net/mlx4: not in enabled drivers build config 00:12:11.947 net/mlx5: not in enabled drivers build config 00:12:11.947 net/mvneta: not in enabled drivers build config 00:12:11.947 net/mvpp2: not in enabled drivers build config 00:12:11.947 net/netvsc: not in enabled drivers build config 00:12:11.947 net/nfb: not in enabled drivers build config 00:12:11.947 net/nfp: not in enabled drivers build config 00:12:11.947 net/ngbe: not in enabled drivers build config 00:12:11.947 net/null: not in enabled drivers build config 00:12:11.947 net/octeontx: not in enabled drivers build config 00:12:11.947 net/octeon_ep: not in enabled drivers build config 00:12:11.947 net/pcap: not in enabled drivers build config 00:12:11.947 net/pfe: not in enabled drivers build config 00:12:11.947 net/qede: not in enabled drivers build config 00:12:11.947 net/ring: not in enabled drivers build config 00:12:11.947 net/sfc: not in enabled drivers build config 00:12:11.947 net/softnic: not in enabled drivers build config 00:12:11.947 net/tap: not in enabled drivers build config 00:12:11.947 net/thunderx: not in enabled drivers build config 00:12:11.947 
net/txgbe: not in enabled drivers build config 00:12:11.947 net/vdev_netvsc: not in enabled drivers build config 00:12:11.947 net/vhost: not in enabled drivers build config 00:12:11.947 net/virtio: not in enabled drivers build config 00:12:11.947 net/vmxnet3: not in enabled drivers build config 00:12:11.947 raw/*: missing internal dependency, "rawdev" 00:12:11.947 crypto/armv8: not in enabled drivers build config 00:12:11.947 crypto/bcmfs: not in enabled drivers build config 00:12:11.947 crypto/caam_jr: not in enabled drivers build config 00:12:11.947 crypto/ccp: not in enabled drivers build config 00:12:11.947 crypto/cnxk: not in enabled drivers build config 00:12:11.947 crypto/dpaa_sec: not in enabled drivers build config 00:12:11.947 crypto/dpaa2_sec: not in enabled drivers build config 00:12:11.947 crypto/ipsec_mb: not in enabled drivers build config 00:12:11.947 crypto/mlx5: not in enabled drivers build config 00:12:11.947 crypto/mvsam: not in enabled drivers build config 00:12:11.947 crypto/nitrox: not in enabled drivers build config 00:12:11.947 crypto/null: not in enabled drivers build config 00:12:11.947 crypto/octeontx: not in enabled drivers build config 00:12:11.947 crypto/openssl: not in enabled drivers build config 00:12:11.947 crypto/scheduler: not in enabled drivers build config 00:12:11.947 crypto/uadk: not in enabled drivers build config 00:12:11.947 crypto/virtio: not in enabled drivers build config 00:12:11.947 compress/isal: not in enabled drivers build config 00:12:11.947 compress/mlx5: not in enabled drivers build config 00:12:11.947 compress/octeontx: not in enabled drivers build config 00:12:11.947 compress/zlib: not in enabled drivers build config 00:12:11.947 regex/*: missing internal dependency, "regexdev" 00:12:11.947 ml/*: missing internal dependency, "mldev" 00:12:11.948 vdpa/*: missing internal dependency, "vhost" 00:12:11.948 event/*: missing internal dependency, "eventdev" 00:12:11.948 baseband/*: missing internal dependency, "bbdev" 00:12:11.948 gpu/*: missing internal dependency, "gpudev" 00:12:11.948 00:12:11.948 00:12:11.948 Build targets in project: 81 00:12:11.948 00:12:11.948 DPDK 23.11.0 00:12:11.948 00:12:11.948 User defined options 00:12:11.948 default_library : static 00:12:11.948 libdir : lib 00:12:11.948 prefix : / 00:12:11.948 c_args : -fPIC -Werror 00:12:11.948 c_link_args : 00:12:11.948 cpu_instruction_set: native 00:12:11.948 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:11.948 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:11.948 enable_docs : false 00:12:11.948 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:11.948 enable_kmods : true 00:12:11.948 tests : false 00:12:11.948 00:12:11.948 Found ninja-1.11.1 at /usr/local/bin/ninja 00:12:11.948 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:11.948 [1/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:12:11.948 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:11.948 [3/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:11.948 [4/231] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:11.948 [5/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:11.948 [6/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:11.948 [7/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:11.948 [8/231] Linking static target lib/librte_kvargs.a 00:12:11.948 [9/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:11.948 [10/231] Linking static target lib/librte_log.a 00:12:11.948 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:11.948 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:11.948 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:11.948 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:11.948 [15/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:11.948 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:11.948 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:11.948 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:12.206 [19/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:12.206 [20/231] Linking static target lib/librte_telemetry.a 00:12:12.206 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:12.206 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:12.206 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:12.206 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:12.206 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:12.206 [26/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:12.206 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:12.465 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:12.465 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:12.465 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:12.465 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:12.465 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:12.465 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:12.465 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:12.465 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:12.465 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:12.722 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:12.722 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:12.722 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:12.722 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:12.722 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:12.722 [42/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:12.722 [43/231] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:12.722 [44/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:12.722 [45/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:12.722 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:12.980 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:12.980 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:12.980 [49/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:12.980 [50/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:12.980 [51/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:12.980 [52/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:12:12.980 [53/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:12:12.980 [54/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:12.980 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:12.980 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:12.980 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:12.980 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:12.980 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:12.980 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:13.238 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:12:13.238 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:12:13.238 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:12:13.238 [64/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:12:13.238 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:13.238 [66/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:13.238 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:12:13.238 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:12:13.238 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:12:13.238 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:12:13.238 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:12:13.238 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:13.494 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:13.494 [74/231] Linking static target lib/librte_eal.a 00:12:13.494 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:13.494 [76/231] Linking static target lib/librte_ring.a 00:12:13.494 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:13.494 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:13.751 [79/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:13.751 [80/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:13.751 [81/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:13.751 [82/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:13.751 [83/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture 
output) 00:12:13.751 [84/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:13.751 [85/231] Linking static target lib/librte_mempool.a 00:12:13.751 [86/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.008 [87/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:14.008 [88/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:14.008 [89/231] Linking target lib/librte_log.so.24.0 00:12:14.008 [90/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:14.008 [91/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:14.008 [92/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:14.008 [93/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:14.008 [94/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:14.008 [95/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:14.008 [96/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:12:14.008 [97/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:14.008 [98/231] Linking static target lib/librte_mbuf.a 00:12:14.009 [99/231] Linking target lib/librte_telemetry.so.24.0 00:12:14.009 [100/231] Linking target lib/librte_kvargs.so.24.0 00:12:14.009 [101/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:14.009 [102/231] Linking static target lib/librte_rcu.a 00:12:14.265 [103/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:12:14.265 [104/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:14.265 [105/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:12:14.265 [106/231] Linking static target lib/librte_meter.a 00:12:14.265 [107/231] Linking static target lib/librte_net.a 00:12:14.265 [108/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:14.265 [109/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.265 [110/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.265 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:14.265 [112/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:14.265 [113/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.523 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:14.523 [115/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:14.780 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:14.780 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:14.780 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:14.780 [119/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:14.780 [120/231] Linking static target lib/librte_pci.a 00:12:14.780 [121/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:14.780 [122/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:14.780 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:14.780 [124/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:15.037 [125/231] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:15.037 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:15.037 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:15.037 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:15.037 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:15.037 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:15.037 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:15.037 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:15.037 [133/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:15.037 [134/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:15.037 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:15.037 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:15.037 [137/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:15.037 [138/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:15.295 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:15.295 [140/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:15.295 [141/231] Linking static target lib/librte_cmdline.a 00:12:15.295 [142/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:15.295 [143/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:15.295 [144/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:15.553 [145/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:15.553 [146/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:15.553 [147/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:15.553 [148/231] Linking static target lib/librte_timer.a 00:12:15.553 [149/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:15.553 [150/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:15.553 [151/231] Linking static target lib/librte_compressdev.a 00:12:15.553 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:15.811 [153/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:15.811 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:15.811 [155/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:15.811 [156/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:15.811 [157/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:15.811 [158/231] Linking static target lib/librte_dmadev.a 00:12:16.069 [159/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:16.069 [160/231] Linking static target lib/librte_reorder.a 00:12:16.069 [161/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:16.069 [162/231] Linking static target lib/librte_security.a 00:12:16.069 [163/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.069 [164/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:16.069 
[165/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:16.069 [166/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:16.069 [167/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:16.069 [168/231] Linking static target lib/librte_hash.a 00:12:16.069 [169/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:12:16.069 [170/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:16.069 [171/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.327 [172/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.327 [173/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.327 [174/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:16.327 [175/231] Linking static target lib/librte_ethdev.a 00:12:16.327 [176/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.327 [177/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:16.327 [178/231] Generating kernel/freebsd/contigmem with a custom command 00:12:16.327 machine -> /usr/src/sys/amd64/include 00:12:16.327 x86 -> /usr/src/sys/x86/include 00:12:16.327 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:12:16.327 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:12:16.327 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:12:16.327 touch opt_global.h 00:12:16.327 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:12:16.327 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:12:16.327 :> export_syms 00:12:16.327 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:12:16.327 objcopy --strip-debug contigmem.ko 00:12:16.327 [179/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:16.327 [180/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:16.327 [181/231] Linking static target drivers/librte_bus_pci.a 00:12:16.327 [182/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.327 [183/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:16.327 [184/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:16.586 [185/231] Generating kernel/freebsd/nic_uio with a custom command 00:12:16.586 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:12:16.586 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:12:16.586 :> export_syms 00:12:16.586 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:12:16.586 objcopy --strip-debug nic_uio.ko 00:12:16.586 [186/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:16.586 [187/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:16.586 [188/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:16.586 [189/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:16.586 [190/231] Linking static target lib/librte_cryptodev.a 00:12:16.586 [191/231] Linking static target drivers/librte_bus_vdev.a 00:12:16.586 [192/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.845 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.845 [194/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:16.845 [195/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:17.103 [196/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:17.103 [197/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:17.103 [198/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:17.103 [199/231] Linking static target drivers/librte_mempool_ring.a 00:12:18.040 [200/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:24.613 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:26.521 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:26.521 [203/231] Linking target lib/librte_eal.so.24.0 00:12:26.780 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:12:26.780 [205/231] Linking target lib/librte_timer.so.24.0 00:12:26.780 [206/231] Linking target lib/librte_ring.so.24.0 00:12:26.780 [207/231] Linking target lib/librte_meter.so.24.0 00:12:26.780 [208/231] Linking target drivers/librte_bus_vdev.so.24.0 00:12:26.780 [209/231] Linking target lib/librte_pci.so.24.0 00:12:26.780 [210/231] 
Linking target lib/librte_dmadev.so.24.0 00:12:26.780 [211/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:12:26.780 [212/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:12:26.780 [213/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:12:26.780 [214/231] Linking target lib/librte_mempool.so.24.0 00:12:26.780 [215/231] Linking target lib/librte_rcu.so.24.0 00:12:26.780 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:12:27.040 [217/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:12:27.040 [218/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:12:27.040 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:12:27.040 [220/231] Linking target lib/librte_mbuf.so.24.0 00:12:27.040 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:12:27.040 [222/231] Linking target lib/librte_reorder.so.24.0 00:12:27.040 [223/231] Linking target lib/librte_net.so.24.0 00:12:27.040 [224/231] Linking target lib/librte_cryptodev.so.24.0 00:12:27.040 [225/231] Linking target lib/librte_compressdev.so.24.0 00:12:27.299 [226/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:12:27.299 [227/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:12:27.299 [228/231] Linking target lib/librte_security.so.24.0 00:12:27.299 [229/231] Linking target lib/librte_hash.so.24.0 00:12:27.299 [230/231] Linking target lib/librte_cmdline.so.24.0 00:12:27.299 [231/231] Linking target lib/librte_ethdev.so.24.0 00:12:27.299 INFO: autodetecting backend as ninja 00:12:27.299 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:27.870 CC lib/log/log.o 00:12:27.870 CC lib/log/log_flags.o 00:12:27.870 CC lib/ut_mock/mock.o 00:12:27.870 CC lib/log/log_deprecated.o 00:12:27.870 CC lib/ut/ut.o 00:12:28.129 LIB libspdk_ut_mock.a 00:12:28.129 LIB libspdk_ut.a 00:12:28.129 LIB libspdk_log.a 00:12:28.129 CC lib/dma/dma.o 00:12:28.129 CXX lib/trace_parser/trace.o 00:12:28.129 CC lib/ioat/ioat.o 00:12:28.129 CC lib/util/base64.o 00:12:28.130 CC lib/util/cpuset.o 00:12:28.130 CC lib/util/bit_array.o 00:12:28.130 CC lib/util/crc16.o 00:12:28.130 CC lib/util/crc32c.o 00:12:28.130 CC lib/util/crc32.o 00:12:28.130 CC lib/util/crc32_ieee.o 00:12:28.130 CC lib/util/crc64.o 00:12:28.130 CC lib/util/dif.o 00:12:28.390 LIB libspdk_dma.a 00:12:28.390 CC lib/util/fd.o 00:12:28.390 CC lib/util/file.o 00:12:28.390 CC lib/util/hexlify.o 00:12:28.390 CC lib/util/iov.o 00:12:28.390 CC lib/util/math.o 00:12:28.390 CC lib/util/pipe.o 00:12:28.390 CC lib/util/strerror_tls.o 00:12:28.390 CC lib/util/string.o 00:12:28.390 CC lib/util/uuid.o 00:12:28.390 CC lib/util/fd_group.o 00:12:28.390 LIB libspdk_ioat.a 00:12:28.390 CC lib/util/xor.o 00:12:28.390 CC lib/util/zipf.o 00:12:28.649 LIB libspdk_util.a 00:12:28.649 LIB libspdk_trace_parser.a 00:12:28.909 CC lib/rdma/common.o 00:12:28.909 CC lib/rdma/rdma_verbs.o 00:12:28.909 CC lib/json/json_parse.o 00:12:28.909 CC lib/json/json_util.o 00:12:28.909 CC lib/json/json_write.o 00:12:28.909 CC lib/idxd/idxd.o 00:12:28.909 CC lib/idxd/idxd_user.o 00:12:28.909 CC lib/env_dpdk/env.o 00:12:28.909 CC lib/vmd/vmd.o 00:12:28.909 CC lib/conf/conf.o 00:12:28.909 CC lib/env_dpdk/memory.o 00:12:28.909 CC lib/vmd/led.o 00:12:28.909 LIB libspdk_conf.a 00:12:28.909 
CC lib/env_dpdk/pci.o 00:12:28.909 LIB libspdk_rdma.a 00:12:28.909 CC lib/env_dpdk/init.o 00:12:28.909 CC lib/env_dpdk/threads.o 00:12:28.909 CC lib/env_dpdk/pci_ioat.o 00:12:29.169 CC lib/env_dpdk/pci_virtio.o 00:12:29.169 CC lib/env_dpdk/pci_vmd.o 00:12:29.169 CC lib/env_dpdk/pci_idxd.o 00:12:29.169 LIB libspdk_json.a 00:12:29.169 CC lib/env_dpdk/pci_event.o 00:12:29.169 CC lib/env_dpdk/sigbus_handler.o 00:12:29.169 CC lib/env_dpdk/pci_dpdk.o 00:12:29.169 LIB libspdk_idxd.a 00:12:29.169 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:29.169 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:29.169 LIB libspdk_vmd.a 00:12:29.169 CC lib/jsonrpc/jsonrpc_server.o 00:12:29.169 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:29.169 CC lib/jsonrpc/jsonrpc_client.o 00:12:29.169 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:29.169 LIB libspdk_jsonrpc.a 00:12:29.429 CC lib/rpc/rpc.o 00:12:29.429 LIB libspdk_rpc.a 00:12:29.690 LIB libspdk_env_dpdk.a 00:12:29.690 CC lib/notify/notify.o 00:12:29.690 CC lib/notify/notify_rpc.o 00:12:29.690 CC lib/sock/sock.o 00:12:29.690 CC lib/sock/sock_rpc.o 00:12:29.690 CC lib/trace/trace.o 00:12:29.690 CC lib/trace/trace_flags.o 00:12:29.690 CC lib/trace/trace_rpc.o 00:12:29.690 LIB libspdk_notify.a 00:12:29.958 LIB libspdk_trace.a 00:12:29.958 LIB libspdk_sock.a 00:12:29.958 CC lib/thread/thread.o 00:12:29.958 CC lib/thread/iobuf.o 00:12:29.958 CC lib/nvme/nvme_ctrlr.o 00:12:29.958 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:29.958 CC lib/nvme/nvme_fabric.o 00:12:29.958 CC lib/nvme/nvme_ns_cmd.o 00:12:29.958 CC lib/nvme/nvme_ns.o 00:12:29.958 CC lib/nvme/nvme_pcie_common.o 00:12:29.958 CC lib/nvme/nvme_pcie.o 00:12:29.958 CC lib/nvme/nvme_qpair.o 00:12:30.216 CC lib/nvme/nvme.o 00:12:30.475 CC lib/nvme/nvme_quirks.o 00:12:30.475 CC lib/nvme/nvme_transport.o 00:12:30.475 CC lib/nvme/nvme_discovery.o 00:12:30.475 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:30.475 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:30.475 LIB libspdk_thread.a 00:12:30.475 CC lib/nvme/nvme_tcp.o 00:12:30.475 CC lib/nvme/nvme_opal.o 00:12:30.733 CC lib/accel/accel.o 00:12:30.733 CC lib/blob/blobstore.o 00:12:30.733 CC lib/init/json_config.o 00:12:30.733 CC lib/blob/request.o 00:12:30.733 CC lib/blob/zeroes.o 00:12:30.733 CC lib/init/subsystem.o 00:12:30.733 CC lib/blob/blob_bs_dev.o 00:12:30.733 CC lib/accel/accel_rpc.o 00:12:30.733 CC lib/init/subsystem_rpc.o 00:12:30.733 CC lib/accel/accel_sw.o 00:12:30.733 CC lib/nvme/nvme_io_msg.o 00:12:30.733 CC lib/init/rpc.o 00:12:30.993 CC lib/nvme/nvme_poll_group.o 00:12:30.993 CC lib/nvme/nvme_zns.o 00:12:30.993 CC lib/nvme/nvme_cuse.o 00:12:30.993 CC lib/nvme/nvme_rdma.o 00:12:30.993 LIB libspdk_init.a 00:12:30.993 LIB libspdk_accel.a 00:12:30.993 CC lib/event/app.o 00:12:30.993 CC lib/event/log_rpc.o 00:12:30.993 CC lib/event/reactor.o 00:12:30.993 CC lib/event/app_rpc.o 00:12:30.993 CC lib/event/scheduler_static.o 00:12:30.993 CC lib/bdev/bdev.o 00:12:31.252 CC lib/bdev/bdev_rpc.o 00:12:31.252 CC lib/bdev/bdev_zone.o 00:12:31.252 CC lib/bdev/part.o 00:12:31.252 LIB libspdk_event.a 00:12:31.252 CC lib/bdev/scsi_nvme.o 00:12:31.511 LIB libspdk_nvme.a 00:12:31.511 LIB libspdk_blob.a 00:12:31.770 CC lib/lvol/lvol.o 00:12:31.770 CC lib/blobfs/blobfs.o 00:12:31.770 CC lib/blobfs/tree.o 00:12:32.029 LIB libspdk_bdev.a 00:12:32.029 LIB libspdk_lvol.a 00:12:32.029 LIB libspdk_blobfs.a 00:12:32.289 CC lib/nvmf/ctrlr.o 00:12:32.289 CC lib/nvmf/ctrlr_discovery.o 00:12:32.289 CC lib/nvmf/subsystem.o 00:12:32.289 CC lib/nvmf/ctrlr_bdev.o 00:12:32.289 CC lib/nvmf/nvmf.o 00:12:32.289 CC lib/nvmf/transport.o 
00:12:32.289 CC lib/nvmf/nvmf_rpc.o 00:12:32.289 CC lib/nvmf/tcp.o 00:12:32.289 CC lib/nvmf/rdma.o 00:12:32.289 CC lib/scsi/dev.o 00:12:32.289 CC lib/scsi/lun.o 00:12:32.289 CC lib/scsi/port.o 00:12:32.289 CC lib/scsi/scsi.o 00:12:32.289 CC lib/scsi/scsi_bdev.o 00:12:32.289 CC lib/scsi/scsi_pr.o 00:12:32.548 CC lib/scsi/scsi_rpc.o 00:12:32.548 CC lib/scsi/task.o 00:12:32.548 LIB libspdk_scsi.a 00:12:32.807 CC lib/iscsi/conn.o 00:12:32.807 CC lib/iscsi/init_grp.o 00:12:32.807 CC lib/iscsi/md5.o 00:12:32.807 CC lib/iscsi/iscsi.o 00:12:32.807 CC lib/iscsi/param.o 00:12:32.807 CC lib/iscsi/portal_grp.o 00:12:32.807 CC lib/iscsi/tgt_node.o 00:12:32.807 CC lib/iscsi/iscsi_rpc.o 00:12:32.807 CC lib/iscsi/iscsi_subsystem.o 00:12:32.807 CC lib/iscsi/task.o 00:12:32.807 LIB libspdk_nvmf.a 00:12:33.375 LIB libspdk_iscsi.a 00:12:33.634 CC module/env_dpdk/env_dpdk_rpc.o 00:12:33.634 CC module/accel/dsa/accel_dsa.o 00:12:33.634 CC module/accel/dsa/accel_dsa_rpc.o 00:12:33.634 CC module/accel/error/accel_error.o 00:12:33.634 CC module/accel/error/accel_error_rpc.o 00:12:33.634 CC module/accel/ioat/accel_ioat.o 00:12:33.634 CC module/sock/posix/posix.o 00:12:33.634 CC module/accel/iaa/accel_iaa.o 00:12:33.634 CC module/blob/bdev/blob_bdev.o 00:12:33.634 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:33.894 LIB libspdk_env_dpdk_rpc.a 00:12:33.894 CC module/accel/ioat/accel_ioat_rpc.o 00:12:33.894 CC module/accel/iaa/accel_iaa_rpc.o 00:12:33.894 LIB libspdk_accel_error.a 00:12:33.894 LIB libspdk_accel_dsa.a 00:12:33.894 LIB libspdk_accel_ioat.a 00:12:33.894 LIB libspdk_accel_iaa.a 00:12:33.894 LIB libspdk_blob_bdev.a 00:12:33.894 LIB libspdk_scheduler_dynamic.a 00:12:33.894 CC module/bdev/nvme/bdev_nvme.o 00:12:33.894 CC module/bdev/delay/vbdev_delay.o 00:12:33.894 CC module/bdev/lvol/vbdev_lvol.o 00:12:33.894 CC module/bdev/malloc/bdev_malloc.o 00:12:33.894 CC module/bdev/error/vbdev_error.o 00:12:34.153 CC module/bdev/gpt/gpt.o 00:12:34.153 CC module/bdev/null/bdev_null.o 00:12:34.153 CC module/bdev/passthru/vbdev_passthru.o 00:12:34.153 CC module/blobfs/bdev/blobfs_bdev.o 00:12:34.153 LIB libspdk_sock_posix.a 00:12:34.153 CC module/bdev/error/vbdev_error_rpc.o 00:12:34.153 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:34.153 CC module/bdev/gpt/vbdev_gpt.o 00:12:34.153 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:34.153 CC module/bdev/null/bdev_null_rpc.o 00:12:34.153 LIB libspdk_bdev_error.a 00:12:34.153 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:34.153 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:34.153 CC module/bdev/nvme/nvme_rpc.o 00:12:34.153 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:34.153 LIB libspdk_blobfs_bdev.a 00:12:34.153 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:34.153 LIB libspdk_bdev_null.a 00:12:34.153 LIB libspdk_bdev_passthru.a 00:12:34.153 CC module/bdev/nvme/bdev_mdns_client.o 00:12:34.153 LIB libspdk_bdev_malloc.a 00:12:34.153 LIB libspdk_bdev_gpt.a 00:12:34.153 LIB libspdk_bdev_delay.a 00:12:34.153 CC module/bdev/raid/bdev_raid.o 00:12:34.153 CC module/bdev/raid/bdev_raid_rpc.o 00:12:34.153 CC module/bdev/raid/bdev_raid_sb.o 00:12:34.153 CC module/bdev/split/vbdev_split.o 00:12:34.412 CC module/bdev/raid/raid0.o 00:12:34.412 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:34.412 CC module/bdev/aio/bdev_aio.o 00:12:34.412 LIB libspdk_bdev_lvol.a 00:12:34.412 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:34.412 CC module/bdev/split/vbdev_split_rpc.o 00:12:34.412 CC module/bdev/aio/bdev_aio_rpc.o 00:12:34.412 CC module/bdev/raid/raid1.o 00:12:34.412 CC 
module/bdev/raid/concat.o 00:12:34.412 LIB libspdk_bdev_split.a 00:12:34.412 LIB libspdk_bdev_zone_block.a 00:12:34.412 LIB libspdk_bdev_aio.a 00:12:34.671 LIB libspdk_bdev_raid.a 00:12:34.671 LIB libspdk_bdev_nvme.a 00:12:35.239 CC module/event/subsystems/vmd/vmd.o 00:12:35.239 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:35.239 CC module/event/subsystems/iobuf/iobuf.o 00:12:35.239 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:35.239 CC module/event/subsystems/scheduler/scheduler.o 00:12:35.239 CC module/event/subsystems/sock/sock.o 00:12:35.239 LIB libspdk_event_vmd.a 00:12:35.239 LIB libspdk_event_scheduler.a 00:12:35.239 LIB libspdk_event_sock.a 00:12:35.239 LIB libspdk_event_iobuf.a 00:12:35.498 CC module/event/subsystems/accel/accel.o 00:12:35.498 LIB libspdk_event_accel.a 00:12:35.757 CC module/event/subsystems/bdev/bdev.o 00:12:35.757 LIB libspdk_event_bdev.a 00:12:36.016 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:36.016 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:36.016 CC module/event/subsystems/scsi/scsi.o 00:12:36.016 LIB libspdk_event_scsi.a 00:12:36.016 LIB libspdk_event_nvmf.a 00:12:36.276 CC module/event/subsystems/iscsi/iscsi.o 00:12:36.276 LIB libspdk_event_iscsi.a 00:12:36.535 CXX app/trace/trace.o 00:12:36.535 CC examples/vmd/lsvmd/lsvmd.o 00:12:36.535 CC examples/ioat/perf/perf.o 00:12:36.535 CC examples/nvme/hello_world/hello_world.o 00:12:36.535 CC examples/sock/hello_world/hello_sock.o 00:12:36.535 CC examples/accel/perf/accel_perf.o 00:12:36.535 CC examples/bdev/hello_world/hello_bdev.o 00:12:36.535 CC examples/nvmf/nvmf/nvmf.o 00:12:36.535 CC test/accel/dif/dif.o 00:12:36.535 CC examples/blob/hello_world/hello_blob.o 00:12:36.535 LINK lsvmd 00:12:36.535 LINK ioat_perf 00:12:36.535 LINK hello_world 00:12:36.535 LINK hello_sock 00:12:36.535 LINK hello_bdev 00:12:36.535 LINK hello_blob 00:12:36.535 LINK dif 00:12:36.535 LINK nvmf 00:12:36.794 LINK accel_perf 00:12:36.794 LINK spdk_trace 00:12:37.364 CC examples/ioat/verify/verify.o 00:12:37.364 LINK verify 00:12:37.364 CC app/trace_record/trace_record.o 00:12:37.364 LINK spdk_trace_record 00:12:37.623 CC app/nvmf_tgt/nvmf_main.o 00:12:37.883 LINK nvmf_tgt 00:12:37.883 CC examples/nvme/reconnect/reconnect.o 00:12:37.883 LINK reconnect 00:12:39.817 CC examples/bdev/bdevperf/bdevperf.o 00:12:39.817 CC examples/vmd/led/led.o 00:12:39.817 LINK led 00:12:39.817 LINK bdevperf 00:12:40.077 CC app/iscsi_tgt/iscsi_tgt.o 00:12:40.337 LINK iscsi_tgt 00:12:40.907 CC examples/blob/cli/blobcli.o 00:12:41.168 LINK blobcli 00:12:41.737 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:42.305 LINK nvme_manage 00:12:42.874 CC test/app/bdev_svc/bdev_svc.o 00:12:42.874 LINK bdev_svc 00:12:47.070 CC examples/nvme/arbitration/arbitration.o 00:12:47.070 LINK arbitration 00:12:51.273 CC examples/nvme/hotplug/hotplug.o 00:12:51.273 LINK hotplug 00:12:51.848 CC test/bdev/bdevio/bdevio.o 00:12:51.848 CC examples/util/zipf/zipf.o 00:12:51.848 LINK zipf 00:12:51.848 CC app/spdk_tgt/spdk_tgt.o 00:12:51.848 LINK bdevio 00:12:51.848 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:52.108 LINK spdk_tgt 00:12:52.108 LINK nvme_fuzz 00:12:54.641 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:54.641 LINK cmb_copy 00:12:54.641 CC test/app/histogram_perf/histogram_perf.o 00:12:54.641 LINK histogram_perf 00:12:55.209 CC app/spdk_lspci/spdk_lspci.o 00:12:55.469 CC test/blobfs/mkfs/mkfs.o 00:12:55.469 LINK spdk_lspci 00:12:55.469 LINK mkfs 00:12:55.469 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:56.408 LINK iscsi_fuzz 00:12:56.668 CC 
examples/nvme/abort/abort.o 00:12:56.668 LINK abort 00:12:57.606 TEST_HEADER include/spdk/config.h 00:12:57.606 CXX test/cpp_headers/accel.o 00:12:57.866 CXX test/cpp_headers/accel_module.o 00:12:57.866 CC test/app/jsoncat/jsoncat.o 00:12:57.866 LINK jsoncat 00:12:57.866 CXX test/cpp_headers/assert.o 00:12:57.866 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:58.126 CXX test/cpp_headers/barrier.o 00:12:58.126 LINK pmr_persistence 00:12:58.126 CC app/spdk_nvme_perf/perf.o 00:12:58.126 CXX test/cpp_headers/base64.o 00:12:58.126 CXX test/cpp_headers/bdev.o 00:12:58.386 CXX test/cpp_headers/bdev_module.o 00:12:58.386 LINK spdk_nvme_perf 00:12:58.386 CC app/spdk_nvme_identify/identify.o 00:12:58.646 CXX test/cpp_headers/bdev_zone.o 00:12:58.646 CXX test/cpp_headers/bit_array.o 00:12:58.906 CXX test/cpp_headers/bit_pool.o 00:12:58.906 CC test/dma/test_dma/test_dma.o 00:12:58.906 CXX test/cpp_headers/blob.o 00:12:58.906 LINK spdk_nvme_identify 00:12:58.906 CXX test/cpp_headers/blob_bdev.o 00:12:58.906 LINK test_dma 00:12:59.166 CXX test/cpp_headers/blobfs.o 00:12:59.166 CXX test/cpp_headers/blobfs_bdev.o 00:12:59.425 CXX test/cpp_headers/conf.o 00:12:59.425 CXX test/cpp_headers/config.o 00:12:59.425 CXX test/cpp_headers/cpuset.o 00:12:59.685 CXX test/cpp_headers/crc16.o 00:12:59.685 CXX test/cpp_headers/crc32.o 00:12:59.685 CXX test/cpp_headers/crc64.o 00:12:59.945 CXX test/cpp_headers/dif.o 00:12:59.945 CXX test/cpp_headers/dma.o 00:12:59.945 CXX test/cpp_headers/endian.o 00:13:00.204 CXX test/cpp_headers/env.o 00:13:00.204 CXX test/cpp_headers/env_dpdk.o 00:13:00.204 CXX test/cpp_headers/event.o 00:13:00.464 CXX test/cpp_headers/fd.o 00:13:00.464 CXX test/cpp_headers/fd_group.o 00:13:00.724 CXX test/cpp_headers/file.o 00:13:00.724 CXX test/cpp_headers/ftl.o 00:13:00.724 CC test/app/stub/stub.o 00:13:00.724 LINK stub 00:13:00.724 CXX test/cpp_headers/gpt_spec.o 00:13:00.990 CXX test/cpp_headers/hexlify.o 00:13:00.990 CXX test/cpp_headers/histogram_data.o 00:13:01.258 CXX test/cpp_headers/idxd.o 00:13:01.258 CC test/env/mem_callbacks/mem_callbacks.o 00:13:01.258 CXX test/cpp_headers/idxd_spec.o 00:13:01.258 CXX test/cpp_headers/init.o 00:13:01.518 CXX test/cpp_headers/ioat.o 00:13:01.518 LINK mem_callbacks 00:13:01.518 CXX test/cpp_headers/ioat_spec.o 00:13:01.518 CXX test/cpp_headers/iscsi_spec.o 00:13:01.778 CC test/env/vtophys/vtophys.o 00:13:01.778 LINK vtophys 00:13:01.778 CXX test/cpp_headers/json.o 00:13:01.778 CXX test/cpp_headers/jsonrpc.o 00:13:02.038 CXX test/cpp_headers/likely.o 00:13:02.038 CC examples/thread/thread/thread_ex.o 00:13:02.038 CXX test/cpp_headers/log.o 00:13:02.038 LINK thread 00:13:02.038 CXX test/cpp_headers/lvol.o 00:13:02.298 CXX test/cpp_headers/memory.o 00:13:02.298 CXX test/cpp_headers/mmio.o 00:13:02.298 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:02.298 CXX test/cpp_headers/nbd.o 00:13:02.298 LINK env_dpdk_post_init 00:13:02.298 CXX test/cpp_headers/notify.o 00:13:02.557 CXX test/cpp_headers/nvme.o 00:13:02.557 CXX test/cpp_headers/nvme_intel.o 00:13:02.816 CXX test/cpp_headers/nvme_ocssd.o 00:13:02.816 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:03.075 CXX test/cpp_headers/nvme_spec.o 00:13:03.075 CXX test/cpp_headers/nvme_zns.o 00:13:03.075 CC examples/idxd/perf/perf.o 00:13:03.075 CXX test/cpp_headers/nvmf.o 00:13:03.333 LINK idxd_perf 00:13:03.333 CXX test/cpp_headers/nvmf_cmd.o 00:13:03.333 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:03.592 CC test/event/event_perf/event_perf.o 00:13:03.592 CXX test/cpp_headers/nvmf_spec.o 
00:13:03.592 LINK event_perf 00:13:03.592 CXX test/cpp_headers/nvmf_transport.o 00:13:03.851 CXX test/cpp_headers/opal.o 00:13:03.851 CXX test/cpp_headers/opal_spec.o 00:13:04.111 CC app/spdk_nvme_discover/discovery_aer.o 00:13:04.111 CXX test/cpp_headers/pci_ids.o 00:13:04.111 LINK spdk_nvme_discover 00:13:04.111 CXX test/cpp_headers/pipe.o 00:13:04.111 CXX test/cpp_headers/queue.o 00:13:04.111 CXX test/cpp_headers/reduce.o 00:13:04.370 CXX test/cpp_headers/rpc.o 00:13:04.370 CXX test/cpp_headers/scheduler.o 00:13:04.630 CXX test/cpp_headers/scsi.o 00:13:04.630 CXX test/cpp_headers/scsi_spec.o 00:13:04.630 CXX test/cpp_headers/sock.o 00:13:04.889 CXX test/cpp_headers/stdinc.o 00:13:04.889 CXX test/cpp_headers/string.o 00:13:04.889 CXX test/cpp_headers/thread.o 00:13:05.148 CXX test/cpp_headers/trace.o 00:13:05.148 CXX test/cpp_headers/trace_parser.o 00:13:05.407 CXX test/cpp_headers/tree.o 00:13:05.407 CXX test/cpp_headers/ublk.o 00:13:05.407 CXX test/cpp_headers/util.o 00:13:05.407 CXX test/cpp_headers/uuid.o 00:13:05.667 CXX test/cpp_headers/version.o 00:13:05.667 CXX test/cpp_headers/vfio_user_pci.o 00:13:05.668 CC test/env/memory/memory_ut.o 00:13:05.668 CXX test/cpp_headers/vfio_user_spec.o 00:13:05.668 CC test/event/reactor/reactor.o 00:13:05.926 CC test/env/pci/pci_ut.o 00:13:05.926 LINK reactor 00:13:05.926 CXX test/cpp_headers/vhost.o 00:13:05.926 CXX test/cpp_headers/vmd.o 00:13:05.926 LINK pci_ut 00:13:05.926 CXX test/cpp_headers/xor.o 00:13:06.186 CXX test/cpp_headers/zipf.o 00:13:06.186 CC test/event/reactor_perf/reactor_perf.o 00:13:06.186 LINK memory_ut 00:13:06.186 LINK reactor_perf 00:13:06.186 CC app/spdk_top/spdk_top.o 00:13:06.755 CC app/fio/nvme/fio_plugin.o 00:13:06.755 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:13:06.755 struct spdk_nvme_fdp_ruhs ruhs; 00:13:06.755 ^ 00:13:06.755 LINK spdk_top 00:13:06.755 1 warning generated. 00:13:06.755 LINK spdk_nvme 00:13:07.693 CC app/fio/bdev/fio_plugin.o 00:13:07.693 gmake[2]: Nothing to be done for 'all'. 
00:13:07.693 CC test/nvme/aer/aer.o
00:13:07.693 LINK spdk_bdev
00:13:07.693 CC test/rpc_client/rpc_client_test.o
00:13:07.953 LINK aer
00:13:07.953 LINK rpc_client_test
00:13:07.953 CC test/nvme/reset/reset.o
00:13:08.213 LINK reset
00:13:08.473 CC test/nvme/sgl/sgl.o
00:13:08.473 CC test/nvme/e2edp/nvme_dp.o
00:13:08.473 LINK sgl
00:13:08.473 LINK nvme_dp
00:13:11.014 CC test/nvme/overhead/overhead.o
00:13:11.014 LINK overhead
00:13:11.583 CC test/thread/poller_perf/poller_perf.o
00:13:11.583 LINK poller_perf
00:13:11.843 CC test/thread/lock/spdk_lock.o
00:13:11.843 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:13:12.103 LINK histogram_ut
00:13:12.103 CC test/unit/lib/accel/accel.c/accel_ut.o
00:13:12.103 CC test/nvme/err_injection/err_injection.o
00:13:12.363 LINK err_injection
00:13:12.363 LINK spdk_lock
00:13:13.016 LINK accel_ut
00:13:13.617 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:13:14.185 CC test/nvme/startup/startup.o
00:13:14.185 LINK startup
00:13:14.444 CC test/unit/lib/bdev/part.c/part_ut.o
00:13:14.444 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:13:14.444 CC test/nvme/reserve/reserve.o
00:13:14.444 LINK reserve
00:13:14.703 LINK blob_bdev_ut
00:13:14.703 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:13:14.703 LINK scsi_nvme_ut
00:13:14.703 CC test/nvme/simple_copy/simple_copy.o
00:13:14.962 CC test/nvme/connect_stress/connect_stress.o
00:13:14.962 LINK simple_copy
00:13:14.962 LINK connect_stress
00:13:14.962 CC test/unit/lib/blob/blob.c/blob_ut.o
00:13:15.531 LINK part_ut
00:13:15.790 LINK bdev_ut
00:13:16.049 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:13:16.049 LINK tree_ut
00:13:16.308 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:13:16.590 CC test/nvme/boot_partition/boot_partition.o
00:13:16.590 LINK boot_partition
00:13:16.590 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:13:16.850 LINK blobfs_async_ut
00:13:17.110 LINK blobfs_sync_ut
00:13:17.369 CC test/unit/lib/dma/dma.c/dma_ut.o
00:13:17.629 LINK dma_ut
00:13:17.629 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:13:17.889 LINK blobfs_bdev_ut
00:13:18.148 CC test/nvme/compliance/nvme_compliance.o
00:13:18.148 CC test/unit/lib/event/app.c/app_ut.o
00:13:18.408 LINK nvme_compliance
00:13:18.408 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:13:18.408 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:13:18.408 LINK app_ut
00:13:18.408 LINK gpt_ut
00:13:18.668 CC test/nvme/fused_ordering/fused_ordering.o
00:13:18.668 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:13:18.668 LINK reactor_ut
00:13:18.668 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:13:18.928 LINK fused_ordering
00:13:18.928 LINK ioat_ut
00:13:18.928 CC test/nvme/doorbell_aers/doorbell_aers.o
00:13:19.187 CC test/nvme/fdp/fdp.o
00:13:19.187 LINK doorbell_aers
00:13:19.187 LINK fdp
00:13:19.447 LINK vbdev_lvol_ut
00:13:19.707 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:13:19.707 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:13:19.967 LINK blob_ut
00:13:20.227 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:13:20.227 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:13:20.227 LINK bdev_zone_ut
00:13:20.487 LINK conn_ut
00:13:20.487 LINK bdev_raid_ut
00:13:20.487 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:13:20.746 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:13:21.006 LINK init_grp_ut
00:13:21.006 LINK bdev_ut
00:13:21.266 LINK json_parse_ut
00:13:21.266 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:13:21.835 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:13:21.835 LINK bdev_raid_sb_ut
00:13:22.095 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:13:22.095 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:13:22.355 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:13:22.355 LINK json_util_ut
00:13:22.355 CC test/unit/lib/log/log.c/log_ut.o
00:13:22.355 LINK concat_ut
00:13:22.355 LINK jsonrpc_server_ut
00:13:22.355 LINK iscsi_ut
00:13:22.614 LINK log_ut
00:13:22.614 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:13:22.873 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:13:22.873 CC test/unit/lib/iscsi/param.c/param_ut.o
00:13:22.873 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:13:22.873 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:13:22.873 LINK raid1_ut
00:13:23.133 LINK param_ut
00:13:23.133 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:13:23.133 LINK vbdev_zone_block_ut
00:13:23.393 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:13:23.393 LINK portal_grp_ut
00:13:23.393 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:13:23.393 LINK json_write_ut
00:13:23.393 CC test/unit/lib/notify/notify.c/notify_ut.o
00:13:23.652 LINK tgt_node_ut
00:13:23.652 LINK notify_ut
00:13:23.652 LINK lvol_ut
00:13:23.912 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:13:23.912 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:13:23.912 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:13:23.912 CC test/unit/lib/sock/sock.c/sock_ut.o
00:13:23.912 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:13:23.912 LINK dev_ut
00:13:24.172 LINK lun_ut
00:13:24.431 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:13:24.431 LINK nvme_ut
00:13:24.431 LINK sock_ut
00:13:24.431 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:13:24.691 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:13:24.691 LINK scsi_ut
00:13:24.951 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:13:24.951 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:13:24.951 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:13:24.951 LINK tcp_ut
00:13:25.210 LINK nvme_ctrlr_cmd_ut
00:13:25.210 CC test/unit/lib/sock/posix.c/posix_ut.o
00:13:25.478 LINK scsi_bdev_ut
00:13:25.478 LINK bdev_nvme_ut
00:13:25.478 LINK posix_ut
00:13:25.478 CC test/unit/lib/thread/thread.c/thread_ut.o
00:13:25.747 LINK nvme_ctrlr_ut
00:13:25.747 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:13:25.747 LINK subsystem_ut
00:13:25.747 LINK ctrlr_ut
00:13:25.747 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:13:26.007 LINK scsi_pr_ut
00:13:26.267 LINK ctrlr_discovery_ut
00:13:26.527 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:13:26.527 LINK thread_ut
00:13:26.787 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:13:26.787 LINK iobuf_ut
00:13:26.787 CC test/unit/lib/util/base64.c/base64_ut.o
00:13:26.787 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:13:26.787 LINK base64_ut
00:13:26.787 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:13:27.047 LINK pci_event_ut
00:13:27.047 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:13:27.047 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:13:27.047 LINK nvme_ctrlr_ocssd_cmd_ut
00:13:27.047 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:13:27.047 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:13:27.306 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:13:27.306 LINK cpuset_ut
00:13:27.306 LINK ctrlr_bdev_ut
00:13:27.306 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:13:27.306 LINK nvme_ns_ut
00:13:27.306 LINK bit_array_ut
00:13:27.306 LINK subsystem_ut
00:13:27.566 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:13:27.566 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:13:27.566 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:13:27.566 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:13:27.566 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:13:27.826 LINK rpc_ut
00:13:27.826 LINK crc16_ut
00:13:27.826 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:13:27.826 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:13:27.826 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:13:27.826 LINK nvmf_ut
00:13:27.826 LINK crc32_ieee_ut
00:13:28.086 LINK nvme_ns_ocssd_cmd_ut
00:13:28.086 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:13:28.086 LINK nvme_ns_cmd_ut
00:13:28.086 LINK crc32c_ut
00:13:28.086 LINK nvme_poll_group_ut
00:13:28.345 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:13:28.345 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:13:28.345 LINK nvme_pcie_ut
00:13:28.345 LINK crc64_ut
00:13:28.345 CC test/unit/lib/util/dif.c/dif_ut.o
00:13:28.345 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:13:28.345 LINK idxd_user_ut
00:13:28.605 LINK nvme_qpair_ut
00:13:28.605 CC test/unit/lib/rdma/common.c/common_ut.o
00:13:28.605 CC test/unit/lib/util/iov.c/iov_ut.o
00:13:28.605 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:13:28.605 LINK common_ut
00:13:28.605 LINK iov_ut
00:13:28.865 CC test/unit/lib/util/math.c/math_ut.o
00:13:28.865 LINK math_ut
00:13:28.865 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:13:28.865 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:13:28.865 LINK rdma_ut
00:13:28.865 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:13:28.865 CC test/unit/lib/util/string.c/string_ut.o
00:13:28.865 LINK idxd_ut
00:13:28.865 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:13:29.124 LINK string_ut
00:13:29.124 LINK pipe_ut
00:13:29.124 CC test/unit/lib/util/xor.c/xor_ut.o
00:13:29.124 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:13:29.124 LINK nvme_quirks_ut
00:13:29.383 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:13:29.383 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:13:29.383 LINK xor_ut
00:13:29.383 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:13:29.383 LINK nvme_transport_ut
00:13:29.383 LINK transport_ut
00:13:29.642 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:13:29.642 LINK nvme_io_msg_ut
00:13:29.642 LINK dif_ut
00:13:29.642 LINK nvme_opal_ut
00:13:29.642 LINK nvme_fabric_ut
00:13:29.900 LINK nvme_pcie_common_ut
00:13:29.900 LINK nvme_tcp_ut
00:13:30.465 LINK nvme_rdma_ut
00:13:31.402 20:51:22 -- spdk/autopackage.sh@44 -- $ gmake -j10 clean
00:13:31.661 gmake[1]: Nothing to be done for 'clean'.
00:13:31.661 ps: stdin: not a terminal
00:13:34.953 gmake[2]: Nothing to be done for 'clean'.
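
Each *_ut object compiled and linked above becomes a standalone unit-test binary whose path mirrors the source file it exercises (test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut, and so on). As a rough sketch of the shape such a test takes, assuming the stock CUnit API rather than SPDK's own test harness, with illustrative suite and test names:

    /* A hypothetical CUnit test binary in the style of the *_ut targets
     * above; suite/test names are illustrative. */
    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);  /* a trivial assertion stands in for real checks */
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures > 0 ? 1 : 0;  /* non-zero exit marks the test as failed */
    }

The non-zero exit status on failure is what lets the autotest scripts treat each binary as a pass/fail step.
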
00:13:35.523 20:51:26 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:13:35.523 20:51:26 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:13:35.523 20:51:26 -- common/autotest_common.sh@10 -- $ set +x
00:13:35.523 20:51:26 -- spdk/autopackage.sh@48 -- $ timing_finish
00:13:35.523 20:51:26 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:13:35.523 20:51:26 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:13:35.523 + [[ -n 1257 ]]
00:13:35.523 + sudo kill 1257
00:13:35.533 [Pipeline] }
00:13:35.551 [Pipeline] // timeout
00:13:35.557 [Pipeline] }
00:13:35.573 [Pipeline] // stage
00:13:35.578 [Pipeline] }
00:13:35.595 [Pipeline] // catchError
00:13:35.604 [Pipeline] stage
00:13:35.606 [Pipeline] { (Stop VM)
00:13:35.619 [Pipeline] sh
00:13:35.902 + vagrant halt
00:13:38.442 ==> default: Halting domain...
00:13:56.553 [Pipeline] sh
00:13:56.836 + vagrant destroy -f
00:13:59.375 ==> default: Removing domain...
00:13:59.388 [Pipeline] sh
00:13:59.672 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output
00:13:59.682 [Pipeline] }
00:13:59.699 [Pipeline] // stage
00:13:59.704 [Pipeline] }
00:13:59.721 [Pipeline] // dir
00:13:59.726 [Pipeline] }
00:13:59.743 [Pipeline] // wrap
00:13:59.749 [Pipeline] }
00:13:59.764 [Pipeline] // catchError
00:13:59.773 [Pipeline] stage
00:13:59.775 [Pipeline] { (Epilogue)
00:13:59.790 [Pipeline] sh
00:14:00.074 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:14:00.087 [Pipeline] catchError
00:14:00.089 [Pipeline] {
00:14:00.102 [Pipeline] sh
00:14:00.386 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:14:00.386 Artifacts sizes are good
00:14:00.396 [Pipeline] }
00:14:00.413 [Pipeline] // catchError
00:14:00.423 [Pipeline] archiveArtifacts
00:14:00.430 Archiving artifacts
00:14:00.479 [Pipeline] cleanWs
00:14:00.491 [WS-CLEANUP] Deleting project workspace...
00:14:00.491 [WS-CLEANUP] Deferred wipeout is used...
00:14:00.497 [WS-CLEANUP] done
00:14:00.499 [Pipeline] }
00:14:00.516 [Pipeline] // stage
00:14:00.521 [Pipeline] }
00:14:00.537 [Pipeline] // node
00:14:00.543 [Pipeline] End of Pipeline
00:14:00.584 Finished: SUCCESS